Anti-SARS-CoV-2 virus antibody levels in convalescent plasma of six donors who have recovered from COVID-19
Background: Anti-SARS-CoV-2 antibody levels in convalescent plasma (CP), which may be useful in severe SARS-CoV-2 infections, have rarely been reported. Results: A total of eight donors were considered for enrollment; two of them were excluded because of ineligible routine screening results. Of the six remaining participants, five samples tested weakly positive by the IgM ELISA. Meanwhile, high titers of IgG were observed in five samples. The patient treated with CP no longer required mechanical ventilation 11 days after plasma transfusion and was then transferred to a general ward. Conclusions: Our serological findings in convalescent plasma from recovered patients may help facilitate understanding of SARS-CoV-2 infection and establish a CP donor screening protocol for the COVID-19 outbreak. Methods: Anti-SARS-CoV-2 antibodies, including IgM and IgG, were measured by two enzyme-linked immunosorbent assays (ELISA) in convalescent plasma from six donors who had recovered from coronavirus disease 2019 (COVID-19) in Nanjing, China. CP was also used for the treatment of one severe COVID-19 patient.
INTRODUCTION
By late 2019, the outbreak of coronavirus disease 2019 (COVID-19) was spreading unchecked in China [1,2]. Apart from supportive care, specific drugs for this disease are still being researched [3,4]. The absence of antiviral treatment with proven efficacy has led to attempts to treat severe SARS-CoV-2 infection with convalescent plasma containing SARS-CoV-2-specific antibodies from recovered patients, a precedent established with pathogen-specific immunoglobulin therapy for Ebola virus disease, influenza, severe acute respiratory syndrome, and severe fever with thrombocytopenia syndrome [5][6][7][8].
Previous reports on other viral infections have suggested that convalescent plasma with higher antibody levels may substantially reduce viral load [9,10]. Our study was therefore designed to measure anti-SARS-CoV-2 antibody levels in order to select donors with high titers, aiming for a meaningful serologic response after CP infusion.
In accordance with the CP infusion therapeutics guidelines approved by the National Health Commission of the People's Republic of China, we used ELISA to screen for anti-SARS-CoV-2 IgM and IgG. In this report, we present our preliminary findings on anti-SARS-CoV-2 antibody levels in convalescent plasma obtained from six donors, together with the clinical course of one case treated with CP in Nanjing, China.
Characteristics of the six CP donors
We recruited a total of six donors (four males and two females, aged 30 to 50 years) with laboratory-confirmed SARS-CoV-2 infection during the COVID-19 outbreak and subsequent recovery certified by two consecutive negative SARS-CoV-2 PCR assays and resolution of clinical symptoms. All the donors had fever and cough during the course of COVID-19. None of the donors were current smokers. Donor D had a history of brain surgery for a benign tumor. The other five donors did not have any underlying comorbidities. The baseline blood examinations of the donors on admission to the hospital for COVID-19 are summarized in Table 1. At the time of admission, two donors had lymphocytopenia (lymphocyte count < 0.8×10⁹/L), one donor had an increased alanine aminotransferase level (144 IU/L), one donor had an elevated creatine kinase level (490 U/L), three donors had abnormal lactate dehydrogenase (ranging from 261 to 286 IU/L), and four donors had a C-reactive protein level of more than 10 mg/L (Table 1). Chest CT scans demonstrated bilateral pneumonia in all six donors.
During hospitalization, all donors routinely received antiviral therapy with interferon-α (500 WU, twice a day, by aerosol inhalation) and lopinavir/ritonavir (400/100 mg, twice a day). Donors B, C, D, and E also received intravenous immunoglobulin. A 3-day course of corticosteroids (methylprednisolone 40 mg per day) was administered to donors B, D, and F. None of the donors needed mechanical ventilation or transfer to the intensive care unit. The time from onset of symptoms to clearance of virus, defined as two consecutive negative nucleic acid tests on throat swab samples, varied from 8 to 18 days. The donors were discharged after virus clearance and substantial improvement of their pneumonia.
Plasma samples were collected 29 to 46 days after symptom onset and 13 to 27 days after discharge (Table 2). At the time of blood donation, the donors were free of any symptoms. The complete blood count, liver and renal function, lactate dehydrogenase, and C-reactive protein were within the normal range. The lymphocyte subset counts are summarized in Table 3. All ABO types except AB were represented in the study. Additionally, as part of the routine check, the donated plasma was confirmed free of hepatitis B and C virus, human immunodeficiency virus (HIV), and residual SARS-CoV-2 by RT-PCR, and was serologically negative for hepatitis B and C virus, HIV, and syphilis.
Serological findings of anti-SARS-CoV-2 antibodies detected by ELISA
The anti-SARS-CoV-2 IgM antibody was weakly reactive (OD ratio from 1.22 to 2.01) for all donors except donor F, who had a slightly higher OD ratio of 5.63; the IgG ELISA was also positive (OD ratio from 3.92 to 8.36) for all six donors with IgM-reactive plasma samples (Table 2).
All donors but one had high IgG titers (≥1:320) (Figure 1), meeting the criterion (≥1:160) recommended by the National Health Commission. However, donor D had a low IgG titer (1:40) (Figure 1) and was therefore not considered an eligible donor. This donor, a 42-year-old man, had the longest duration from symptom onset to plasma collection (46 days) and the longest hospital stay (19 days). This donor also had the lowest CD19+ B-cell count and percentage in the lymphocyte subset analysis (Table 3).
Clinical utility of CP in a critically ill patient
The recipient of CP was a 64-year-old female. The patient was admitted to the hospital because of fever, fatigue, nausea, and vomiting for 3 days, and COVID-19 was then confirmed. Her underlying comorbidities included hypertension and diabetes. Her clinical condition progressed rapidly. On day 4 of hospitalization, the patient was transferred to the intensive care unit (ICU), and 1 week later she received invasive mechanical ventilation. SARS-CoV-2 was undetectable in throat swab samples by nucleic acid testing at the time of intubation. On day 17 of hospitalization, while still receiving invasive mechanical ventilation with a PaO2/FiO2 of 166 mmHg, she was given 200 mL of CP from donor B. At the time of plasma transfusion, the lymphocyte count was 0.44×10⁹/L. Other blood examinations, including renal and liver function, prothrombin time, creatine kinase, lactate dehydrogenase, and myocardial enzymes, did not change significantly, although the D-dimer was increased (2.31 mg/L). There was no transfusion-related adverse event. The lymphocyte count remained below 0.5×10⁹/L for 1 week. The patient no longer required mechanical ventilation 11 days after plasma transfusion and was then transferred to a general ward.
DISCUSSION
We reported the serological findings of SARS-CoV-2 infection in a CP donor population. Our preliminary findings suggest that recently recovered COVID-19 patients may be suitable potential donors, provided they meet other blood donation criteria.
Although our experience is limited to a few cases, it suggests that, unlike in other viral infections such as MERS-CoV [11], antibody to SARS-CoV-2 in serum or plasma was frequently reactive by ELISA. All six donors showed positive IgM results, indicating that a negative IgM result, a serologic marker that usually represents recent or current infection [12][13][14], may not be suitable as a mandatory requirement for CP donor selection, given the limited availability of eligible potential donors in a COVID-19 outbreak. Of the six donors, only one had an IgG titer of 1:40, which did not meet the 1:160 criterion recommended by the National Health Commission. Of note, compared with the other donors, he experienced more severe disease and had the longest duration from symptom onset to plasma collection; we suspect that this phenomenon may be related to his low CD19+ B-cell count, or that he had experienced a viral reactivation, an observation that requires further investigation.
However, given the limitations imposed by the sample size, the reactivity of ELISA tests may also be affected by the timing of plasma collection, severity of illness, or corticosteroid administration. In addition, although this life-threatening disease appears to be under control following nationwide efforts and implementation of quarantine policy in China, it is still developing in other parts of the world. As yet, no reference materials for anti-SARS-CoV-2 antibodies have been made available to evaluate the performance of the kits. Our study highlights the need for prospective serology studies and good laboratory quality assurance to better understand the humoral response to SARS-CoV-2 infection.
A weakness of the study is that the clinical relevance of antibody titers in protecting against subsequent SARS-CoV-2 infection is uncertain. Compared with ELISAs, neutralization assays require virus culture, are much more labor-intensive, and need to be conducted in laboratories with higher biosafety levels [15,16]. We are currently conducting neutralization studies to investigate whether ELISA results correlate with neutralization results and could therefore substitute for the neutralization test in resource-limited settings.
Although a favorable outcome was achieved in one patient after CP transfusion, the efficacy of CP remains inconclusive due to the very small sample size and other concomitant treatments, which might confound the result.
In summary, we presented serologic findings from six CP donors who recovered from COVID-19 and one case treated with CP. This report may help facilitate understanding of SARS-CoV-2 infection and establish a donor screening protocol for CP infusion therapeutics in the COVID-19 outbreak.
Study design
Under the first and second editions of the CP infusion therapeutics guidelines approved by the National Health Commission, we developed a protocol for donor screening, plasma collection, and specimen analysis in order to screen potential donors and collect high-titer plasma. Donor screening, specimen collection, and convalescent plasma collection were conducted at the Second Hospital of Nanjing, a designated medical institution for COVID-19. The antibody testing was conducted at Nanjing Red Cross Blood Center, whose Department of Laboratory Medicine is accredited by the China National Accreditation Service for Conformity Assessment. This study was approved by the ethics committee of the Second Hospital of Nanjing (reference number: 2020-LS-ky003). Written informed consent was obtained from all the donors and the recipient.
Donor population
We screened potential convalescent plasma donors from patients with PCR-confirmed SARS-CoV-2 infection who had recovered and were at least four weeks past symptom onset. A total of eight volunteers were recruited as potential plasma donors for assessment. Two were excluded: one because of elevated alanine transaminase and the other because of an abnormal hemoglobin level. The remaining six provided written informed consent and became qualified donors.
Collection of specimens for antibody levels
Convalescent plasma was collected by apheresis from recovered COVID-19 donors, and specimens for antibody testing were collected from an integrated bypass collection sample bag. Plasma for determination of anti-SARS-CoV-2 IgG antibody levels was collected in EDTA tubes, and serum for anti-SARS-CoV-2 IgM antibody levels was collected in tubes with coagulation accelerators. Samples were delivered to Nanjing Red Cross Blood Center immediately after collection, where they were centrifuged and tested for antibodies.
The first kit was a capture enzyme-linked immunosorbent assay for IgM antibody using horseradish peroxidase (HRP)-labeled SARS-CoV-2 antigens. To detect IgM, serum samples were diluted 1:100 in dilution buffer and incubated for 60 min in plates coated with anti-human IgM μ chain. Plates were washed and HRP-labeled antigens were added. After 30 min of incubation, unbound components were washed away, followed by addition of TMB substrate with its buffer. After a further 15 min of incubation, stop buffer was added and absorbance values were measured at the 450 nm and 630 nm dual wavelengths using a microplate reader.
The second kit was an indirect enzyme-linked immunosorbent assay designed for IgG antibody. After a 1:20 predilution formulated according to the ELISA manufacturer's instructions, plasma specimens were serially titrated 1:1, 1:10, 1:20, 1:40, 1:80, 1:160, and 1:320 in microplates using plasma from unexposed donors as diluent and added to plates coated with SARS-CoV-2 antigens. Following 60 min of incubation at 37°C, plates were washed and incubated with horseradish peroxidase-labeled anti-human IgG secondary antibody. Plates were again washed following 30 min of incubation at 37°C, and TMB substrate was added with its buffer. After 15 min, stop buffer was added and absorbance values were measured at the 450 nm and 630 nm dual wavelengths using a microplate reader.
Results were reported as the optical density (OD) ratio, which was calculated as the OD value of the donor's sample divided by the cutoff OD value. We used cutoff values recommended by the ELISA kit manufacturer: a ratio of <1 was considered negative, and ≥1 was considered positive.
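As a worked illustration with hypothetical absorbance values (the kit's actual cutoff OD is not given in the text):

$$\text{OD ratio} = \frac{\mathrm{OD}_{\text{sample}}}{\mathrm{OD}_{\text{cutoff}}}, \qquad \text{e.g. } \frac{0.84}{0.20} = 4.2 \geq 1 \;\Rightarrow\; \text{positive}.$$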
Statistical methods
All data from measurements were displayed as tables and a histogram.
Quantitative Evaluation Method of Physical Fitness Factor Indicators in Youth Endurance Running Events
Adolescents are in a critical period of physical and intellectual development, and their growth represents the future of a country. However, with the rapid development of the social economy and of science and technology, sports and health-related education has not been fully developed, and owing to some deviations in the current school curriculum, the physical quality of young people has generally declined. Endurance running is a comprehensive index for measuring a person's physical fitness. It reflects the basic motor function of the body and is a required item in youth physical fitness tests. However, the level of endurance running has shown a downward trend in recent years. Current endurance running training suffers from many disadvantages, such as coarse training methods, low efficiency, and human error during testing. In order to improve endurance running performance, this paper establishes an index system of endurance running elements by introducing the concept of healthy physical fitness. Based on these elements, a detection system was developed and compared with the standard test method; the data showed that P < 0.001, indicating that the test results of the two were consistent. The detection system in this paper is therefore suitable for detecting the physical fitness index elements. The endurance running performance of 124 selected adolescents was then combined with the physical fitness index elements, and a correlation analysis indicated that endurance running level is closely related to body shape, cardiopulmonary function, muscle strength, and endurance level. Systematic testing and quantitative results showed that body mass index was significantly correlated with endurance running performance in adolescents (P < 0.01). Also, the number of vertical jumps in place was significantly correlated with the number of sit-ups completed (r = 0.55, P < 0.01). This strongly suggests that it is important to quantitatively evaluate the fitness factor indicators of endurance running in adolescents.
Introduction
In recent years, the physical quality of Chinese adolescent students has generally declined. Some scholars believe this may be due to the lack of research on theories of physical health education and corresponding countermeasures, resulting in the current decline in adolescent fitness. Endurance running, a compulsory item in the physical education examination in China, comprehensively reflects students' cardiorespiratory function. However, endurance running carries a high exercise risk, the training methods are monotonous, and young people show a certain resistance to it. At the same time, the current youth physical fitness testing system lacks detailed indicators reflecting students' endurance running level, so endurance running training is untargeted and performance improves slowly. In order to address the current low training efficiency, this paper introduces the concept of healthy physical fitness and establishes an index of the physical fitness elements of endurance running, which can help students evaluate their endurance running level and improve the effect of training.
Physical fitness is a relatively new concept that has had a major impact on the field of sports and health. From an exercise perspective, physical fitness is now considered a comprehensive measure of health: the ability of the human body to perform its functions effectively and efficiently. In short, physical fitness is a test index of physical health from the perspective of human function and skills, and it is closely related to the ability to deal with emergencies.
Based on the above thinking, this paper evaluates the physical fitness factor indicators of adolescents' endurance running, hoping to obtain the best test indicators of adolescents' endurance running fitness on the basis of experimental investigation. In addition, to avoid the influence of human error, a detection model is established; by comparison with the data from conventional detection methods, the system in this paper is found to be suitable for physical fitness detection and is then applied to the detection of endurance running events. A correlation analysis was carried out in combination with the endurance running performance of 124 adolescents, showing that BMI and VO2max were significantly correlated with endurance running performance (P < 0.01). The effective number of vertical jumps in place and the number of completed sit-ups comprehensively reflect muscle strength and endurance; their correlation coefficients are around 0.5 with P < 0.01, indicating a significant and fairly strong linear correlation with endurance running performance.
Related Work
As an important concept in the field of sports theory, physical fitness has always been a hot topic among researchers. Firstly, Huang H took the lead in establishing the development process of physical fitness assessment for Chinese children and adolescents; secondly, according to a specific program design, children's and adolescents' grade indicators were used and optimized to verify a physical condition grade model for children and adolescents [1]. Because of the poor physical fitness of today's children, Kozakevych V K's experiment aimed to examine the physical health of school-age children and to identify risk factors interfering with it. It was found that more than 60% of teens now have low or below-average levels of physical fitness. According to the multivariate model, the level of physical fitness was positively affected by the level of material wealth (+0.251), mother's education level (+0.295), nutritional balance (+0.204), and residence time in fresh air (+0.106), and negatively affected by parents' harmful habits (−0.167) [2]. Youm S developed an automated radio-frequency identification (RFID)-based scoring system for the Progressive Aerobic Cardiovascular Endurance Run (PACER) and 6-minute walk tests; the proposed system can accurately test many students or candidates on a large scale and can significantly reduce the burden on test administrators [3]. Yassine examined the effects of plyometric training on the physical performance of prepubertal soccer players on stable (SPT) versus unstable (UPT) surfaces; if the goal is to further enhance static balance, UPT has advantages over SPT [4]. The above research on physical fitness testing has a limited entry point, is mostly based on health level, and offers only general guidance for this article.
Research on the physical fitness index elements of adolescents has been explored in various fields. The primary objective of Man X was to examine associations between adolescent health-related PF, skill-related PF, depression, and academic achievement; the findings suggested that people who are physically fit and exhibit positive mental functioning may achieve better academic results in adolescence [5]. Gontarev S analyzed the relationship between cardiorespiratory fitness and obesity, blood pressure, and hypertension in adolescents, concluding that these results should be considered when developing strategies and recommendations to improve adolescents' lifestyle and health [6]. The purpose of Ucok K's study was to compare maximal aerobic capacity (VO2max), muscle strength, trunk flexibility, total energy expenditure, daily physical activity, resting metabolic rate (RMR), body composition, and body fat distribution between diabetic patients and healthy controls [7]. Tan S explored the effects of exercise training on body composition, cardiovascular function, and physique in obese and lean 5-year-old children; well-trained obese children improved performance in the long jump, the 10-meter × 4 shuttle run, and the 3-meter balance-beam walk, while well-trained lean children improved more in physical activity [8]. The above-mentioned research on the elements of physical fitness indicators is mostly from the perspective of disease and health, and its relevance to this article is low.
Physical Fitness Required for Endurance Running.
Endurance running, also known as middle- and long-distance running, is an effective method for evaluating students' cardiorespiratory function and endurance level [9,10]. Additionally, running is associated with physical flexibility, coordination, balance, and other qualities. A running motion such as that shown in Figure 1 requires the movement and coordination of human muscles, bones, and joints [11]. From the perspective of related research, endurance running is a complex exercise that integrates the human movement system, respiratory energy supply system, nervous system, and endocrine system, and these factors are closely related to the body [12,13]. Exploring the relationship between physical fitness and long-distance running, and constructing a physical fitness index system for long-distance running, is extremely important for improving the level of long-distance running and has a certain value for cultivating students [14]. At the same time, the study of the fitness factors in endurance running can also contribute to the promotion of sports and generate a national sporting boom.
Physical fitness is defined as an individual's ability to perform adequate daily tasks, enjoy leisure time, and adapt to emergencies and stress [15,16]. Classified by type, physical fitness can be divided into health-related fitness and sports-related fitness. As the name suggests, sports-related fitness concerns physical capabilities such as the body's agility, regulation, and balance [17,18]. Figure 2 shows how each element of physical fitness works.
Through optimal fitness training, students gain insight into how to acquire healthy fitness and the corresponding skills, as well as ways to apply fitness principles in practice [19]. In addition, the operating principles above also show that good physical performance cannot be achieved without the close cooperation of all body parts.
Preliminary Construction of the Physical Fitness Factor Index System
Cardiorespiratory endurance, muscular strength, body composition, and physical flexibility are the four test components of health-related fitness commonly used in the United States [20]. Maintaining a good state in these areas means that a person's physical level is good; in other words, one has the ability to exercise safely [21]. In recent years, the government has established different test items for citizens of different ages to fully understand people's health status. The current physical fitness test items for Chinese adolescents are shown in Table 1.
Quantitative Detection Experiment of Physical Fitness Factors
In order to better detect the indicators of physical fitness factors required for endurance running, this paper builds a measurement system for detecting the physical fitness factors that affect endurance running performance. A Kinect sensor and a force measuring platform based on pressure sensors are used to build the information collection module, forming an intelligent test platform for youth physical fitness factor indicators.
Construction of the Hardware Part of the System.
The system detection platform built in this paper adopts the JHBM-7-V-type load cell, whose working principle is piezoresistive: as the force on the sensor increases, its resistance decreases essentially linearly. The detection platform is mainly composed of a signal acquisition and amplification module, an A/D conversion module, a communication module, a main control chip, and a host computer. It collects the load-cell signal and uploads it to the host computer. The overall block diagram is shown in Figure 3.
The lower-computer software is developed in C in the Keil MDK integrated development environment. Combined with the hardware circuit, it acquires and processes the sensor data and communicates with the upper computer. The main functions of the upper-computer software include sending pressure-acquisition instructions to the lower computer, receiving the pressure signal obtained by the lower computer, and calculating the center of plantar pressure from the values of the pressure sensors.
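As a minimal sketch of the last step, assuming four load cells at the corners of the platform (the sensor coordinates, readings, and names below are illustrative, not taken from the paper), the center of plantar pressure is the pressure-weighted average of the sensor positions:

```python
import numpy as np

# Hypothetical corner positions (m) of four load cells on the platform.
SENSOR_XY = np.array([[0.0, 0.0], [0.4, 0.0], [0.4, 0.4], [0.0, 0.4]])

def center_of_pressure(forces):
    """Pressure-weighted average of sensor positions.

    forces: four load-cell readings (N), one per corner.
    Returns the (x, y) center of plantar pressure in platform coordinates,
    or None when no load is detected.
    """
    forces = np.asarray(forces, dtype=float)
    total = forces.sum()
    if total <= 0:  # nobody on the platform
        return None
    return (forces[:, None] * SENSOR_XY).sum(axis=0) / total

print(center_of_pressure([120.0, 80.0, 60.0, 140.0]))  # -> [0.14 0.2]
```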
Construction of the System Software Part.
The detection station in this paper uses the Kinect sensor; the depth image obtained by the Kinect allows the human skeleton model to be extracted in real time. This system uses the Kinect for Windows SDK 2.0 as the development tool for driving the Kinect and acquiring the related data. During use, the application must detect and discover the Kinect sensors linked to the device, and these sensors must be initialized before they can generate data. It should be pointed out that the origins of the image coordinate system and the actual space coordinate system do not coincide, and the spatial positions of the depth camera and the color camera are not exactly the same, so coordinate conversion is required during use. The Kinect SDK provides conversion methods between the depth image coordinate system, the color image coordinate system, and the skeleton space coordinate system; the conversion relationship is shown in Figure 4, and it can also be derived from spatial geometry.
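For intuition, the depth-pixel-to-camera-space step can be sketched with the standard pinhole model; the focal lengths and principal point below are placeholder values, not the Kinect's actual calibration, which the SDK's coordinate mapper encapsulates:

```python
import numpy as np

# Illustrative intrinsics for a 512x424 depth camera (placeholders,
# not the actual Kinect calibration values).
FX, FY = 365.0, 365.0   # focal lengths in pixels
CX, CY = 256.0, 212.0   # principal point in pixels

def depth_pixel_to_camera(u, v, depth_mm):
    """Back-project a depth pixel (u, v) with depth in millimeters
    to camera-space coordinates in meters (pinhole model)."""
    z = depth_mm / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

print(depth_pixel_to_camera(300, 200, 2000))  # point about 2 m in front of the sensor
```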
When we stand behind the Kinect, facing away from it, the right side is the positive x-axis, upward is the positive y-axis, and the z-axis points toward us, the same as an ordinary right-handed coordinate system. The depth image obtained by the Kinect contains considerable jitter noise; that is, the depth value at an image pixel position fluctuates randomly, which is called the flicker effect. This phenomenon introduces errors into measurements that use the depth information, so the depth map must be filtered in real time.
The extracted joints jitter within a certain range, and the hand joints in particular jitter strongly. In order to obtain more stable skeleton data, this paper first applies smoothing filtering to the joint positions, which is a prerequisite for using the skeleton data. The smoothing algorithm for the skeleton data is described in detail below. Considering both the smoothing effect and the real-time filtering requirement, this paper uses the Kalman filter algorithm to filter the skeleton data. Its idea is to update the state-variable information iteratively and recursively as new data arrive, which is an optimal estimation method. The Kalman filter is built on the state-transition equation

$$\hat{M}_X = D\,\hat{M}_{X-1} + H_{X-1},$$

where $\hat{M}_{X-1}$ is the estimate of the skeleton data at time $X-1$, $\hat{M}_X$ is the estimate at time $X$, $D$ is the state-transition matrix, which is the basis on which the algorithm predicts the state variables, and $H_{X-1}$ is the estimation error.

The observed value is given by

$$G_X = F\,M_X + U_X,$$

where $G_X$ is the observed value of the skeleton data at time $X$, $U_X$ is the measurement error, and $F$ is the observation matrix.

Iterative process: from the state prediction at time $X-1$, the prior state at time $X$ is

$$\hat{M}_X^{-} = D\,\hat{M}_{X-1} + C\,E_{X-1},$$

where $\hat{M}_X^{-}$ is the prior state estimate of the skeleton data at time $X$ and $\hat{M}_{X-1}$ is the posterior state estimate at time $X-1$; $E_{X-1}$ is an input quantity that can be selected and controlled, and $C$ is its gain. In practical applications there is generally no control input, so these two terms can be ignored.

The mean-squared-error prediction is

$$Q_X^{-} = D\,Q_{X-1}\,D^{T} + P,$$

where $Q_{X-1}$ is the posterior estimate covariance of the data at time $X-1$, $Q_X^{-}$ is the prior estimate covariance at time $X$, and $P$ is the covariance of the process excitation noise, that is, the error between the transition matrix and the actual process.

The filter gain is

$$K_X = Q_X^{-} F^{T}\left(F\,Q_X^{-} F^{T} + N\right)^{-1},$$

where $N$ is the measurement-noise covariance. The filter estimate is

$$\hat{M}_X = \hat{M}_X^{-} + K_X\left(G_X - F\,\hat{M}_X^{-}\right),$$

and the mean squared error is updated as

$$Q_X = \left(I - K_X F\right) Q_X^{-}.$$

The first step is to determine the state-transition matrix $D$, which is obtained from the kinematic relations

$$s_X = s_{X-1} + c_{X-1}\,\Delta s + \tfrac{1}{2}\, i_{X-1}\,\Delta s^{2}, \qquad c_X = c_{X-1} + i_{X-1}\,\Delta s, \qquad i_X = i_{X-1},$$

where $s$ denotes displacement, $c$ velocity, and $i$ acceleration. Assuming $\Delta s = 1$, the matrix form of the above relations for one coordinate is

$$\begin{pmatrix} s_X \\ c_X \\ i_X \end{pmatrix} = \begin{pmatrix} 1 & 1 & \tfrac{1}{2} \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} s_{X-1} \\ c_{X-1} \\ i_{X-1} \end{pmatrix}.$$

The state estimator of the system in this paper is then

$$\hat{M}_X = \left(K_{aX},\, K_{bX},\, K_{nX},\, C_{aX},\, C_{bX},\, C_{nX},\, i_{aX},\, i_{bX},\, i_{nX}\right)^{T}, \qquad (10)$$

collecting the joint position $K$, velocity $C$, and acceleration $i$ along the three axes $a$, $b$, and $n$. The observations are the measured joint positions, so the transition matrix $D$ is the corresponding block form of the kinematic matrix above, and the measurement matrix $F$ selects the position components:

$$F = \begin{pmatrix} I_{3} & 0_{3\times3} & 0_{3\times3} \end{pmatrix}.$$

The remaining parameters are the noise covariances $P$ and $N$. The Kalman filter thus yields the optimal estimate of the system state from the system input and the output observations. Taking the hand joints as an experimental sample, the effect of the Kalman filter is shown in Figure 5, which demonstrates the further smoothing of the hand joints.
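To make the recursion concrete, the following is a minimal constant-acceleration Kalman smoother for a single joint coordinate; the noise covariances and the simulated measurements are assumptions for illustration, not the paper's tuned values:

```python
import numpy as np

# Constant-acceleration model for one coordinate: state = [pos, vel, acc].
D = np.array([[1.0, 1.0, 0.5],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])      # state-transition matrix (unit time step)
F = np.array([[1.0, 0.0, 0.0]])      # we observe position only
P_noise = 1e-4 * np.eye(3)           # process noise covariance (assumed)
N_noise = np.array([[1e-2]])         # measurement noise covariance (assumed)

def kalman_smooth(measurements):
    """Filter a 1-D sequence of noisy joint positions."""
    m = np.zeros(3)                  # state estimate
    Q = np.eye(3)                    # estimate covariance
    out = []
    for g in measurements:
        # Predict.
        m = D @ m
        Q = D @ Q @ D.T + P_noise
        # Update.
        K = Q @ F.T @ np.linalg.inv(F @ Q @ F.T + N_noise)
        m = m + K @ (np.array([g]) - F @ m)
        Q = (np.eye(3) - K @ F) @ Q
        out.append(m[0])
    return np.array(out)

# Simulated jittery joint track: a slow drift plus flicker noise.
t = np.arange(100)
noisy = 0.01 * t + 0.05 * np.random.randn(100)
print(kalman_smooth(noisy)[:5])
```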
It can be seen from Figure 5 that although the curve of the Kalman-filtered data fluctuates somewhat relative to the observed values, the difference between the two is small. On the whole, the Kalman filtering result is consistent with the observations, showing that the Kalman filter removes the hand-joint jitter and smooths the joint position information, guaranteeing the accuracy of the subsequent index measurements.
Determination of Physical Fitness and Body Mass Index.
Determination of height and weight: generally speaking, body mass index is measured mainly through the detection platform constructed in this paper, while height is obtained using the Kinect.
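Body mass index then follows directly from the two measurements; using the subjects' mean values reported later in the paper (weight 53.5 kg, height 1.6 m) as a worked example:

$$\mathrm{BMI} = \frac{\text{weight (kg)}}{\text{height (m)}^2} = \frac{53.5}{1.6^2} \approx 20.9\ \mathrm{kg/m^2}.$$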
Determination of waist, abdomen, and lower-limb muscle fitness indicators: muscle fitness is very important in endurance running, and waist, abdominal, and lower-limb muscle fitness plays an important role. In this paper, the number of sit-ups completed is used to measure the strength and endurance of the waist and abdominal muscles, the maximum height of jumping in place measures the muscle strength of the lower body, and the number of jumps in place measures the muscle endurance of the lower body.
Determination of the flexibility index: seated and standing body forward flexion are the internationally common methods for evaluating flexibility, mainly reflecting the extensibility of the hamstrings, tendons, muscles, and joints of the trunk and the back of the thigh. Flexibility is not only an important part of healthy physical fitness but also promotes the expression of explosive strength and speed, playing an important role in improving athletic ability and preventing sports injuries. Using the Kinect sensor, one end is fixed to the ground and the other end is set at the starting position of the standing forward bend.
Balance ability index determination: balance ability includes static and dynamic balance. Static balance refers to the ability of the body to maintain a fixed posture, and dynamic balance refers to the ability to return to balance under external disturbances. The quality of balance ability reflects the functional level of the receptors and nervous system on the one hand, and the development level of executive organs such as the skeletal muscles on the other.
In this paper, two sensors are used to detect the same vertical jump. If both detect valid results, the average of the two test results is taken as the tester's score for that jump. If the Kinect produces a false or missed detection, the result obtained by the force platform is used as the tester's jump result.
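A minimal sketch of this fallback rule (function and variable names are illustrative):

```python
def fuse_jump_height(kinect_cm, force_plate_cm):
    """Combine the two sensors' jump-height estimates (cm).

    kinect_cm is None when the Kinect misses or falsely detects the jump;
    the force platform is treated as always valid, per the paper's rule.
    """
    if kinect_cm is None:
        return force_plate_cm                    # fall back to the force platform
    return (kinect_cm + force_plate_cm) / 2.0    # average when both are valid

print(fuse_jump_height(31.8, 32.6))  # -> 32.2
print(fuse_jump_height(None, 32.6))  # -> 32.6
```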
Data Sources and Basic Information.
In this paper, 124 adolescents were selected as measurement subjects for the indicators of physical fitness elements required for endurance running, and they were divided into two groups. The experimental group used the testing platform established in this paper, which is equipped with the two kinds of sensors, while the control group used conventional sports testing equipment, such as height scales, a sit-and-reach tester, a vertical jump height test device, and a stopwatch, measured in turn. All subjects were tested under both the experimental and control conditions in the same place. To ensure physical recovery of the subjects, the interval between tests was about 30 minutes. All scores from the two tests were recorded, and the basic characteristics of the subjects are shown in Table 2.
Given the differences in the age and gender distribution of the subjects, more accurate results can be obtained for the subsequent experiments. The subjects' height, weight, and BMI were statistically analyzed, and the results are shown in Figure 6.
In Figure 6, the average age of the subjects is 12.95 ± 1.96 years, height 1.6 ± 0.09 m, weight 53.5 ± 10.2 kg, and BMI 20 ± 3.0 kg/m². There were significant differences in height and weight among different age groups of the same gender: height (P < 0.001 for males, P < 0.001 for females) and weight (P < 0.001 for males, P < 0.01 for females). BMI increased slightly with age (P > 0.05 for males, P > 0.05 for females). Comparing genders within the same age group, there was no statistical difference in height and weight between males and females at age 11 (P > 0.05). However, from age 12, both indicators were significantly higher in males. In terms of height, 12-year-old males measured 1.63 ± 0.06 m versus 1.57 ± 0.04 m for females (P < 0.05); 13-year-old males 1.71 ± 0.05 m versus 1.61 ± 0.04 m for females (P < 0.001); and 14-year-old males 1.67 ± 0.06 m versus 1.58 ± 0.04 m for females (P < 0.001). In terms of body weight, 12-year-old males weighed 52 ± 6.9 kg versus 50 ± 6.7 kg for females (P < 0.05); 13-year-old males 60 ± 8.1 kg versus 52 ± 11 kg for females (P < 0.05); and 14-year-old males 61 ± 13 kg versus 51 ± 5.9 kg for females (P < 0.01).
The BMI of males in the same age group was slightly higher than that of females, but there was no significant difference between genders (P > 0.05).
Muscle Strength and Endurance Indicators of Waist, Abdomen, and Lower Limbs.
In this paper, the physical fitness indices of the waist, abdomen, and lower limbs are measured: the number of completed sit-ups measures the strength and endurance of the waist and abdominal muscles, the maximum height of jumping in place measures lower-body muscle strength, and the number of jumps in place measures lower-body muscle endurance. The test results for the waist, abdomen, and lower-limb muscle strength and endurance indices are shown in Table 3. The data in Figure 7 show a significant correlation between the number of sit-ups and the strength and endurance of the lumbar and abdominal muscles (r = 0.96, P < 0.01), which indicates that sit-ups reflect the endurance level of adolescents to some extent. Also, by comparing other training programs, we found significant correlations between lumbar, abdominal, and lower-limb muscle strength and the endurance index of the adolescents.
Determination of the Flexibility Index.
The flexibility index in this paper is measured by seated and standing forward-flexion scores. The test results of the two groups are shown in Table 4. The data in Figure 8 show that, examining seated and standing forward flexion, there is an extremely statistically significant correlation between the flexibility index and the endurance index of the adolescents (r = 0.98, P < 0.01; r = 0.93, P < 0.01).
Balance Ability Index.
In this paper, balance ability, one of the physical fitness indicators, is measured by the one-foot standing time with eyes closed.
The test results of the balance ability index are shown in Table 5, where P < 0.001, indicating that there is no significant difference between the test data of the two groups of subjects and that the consistency of the two sets of data is very good.
Simulation Case.
The endurance running performance of the 124 adolescents is taken as a sample to explore the relationship between physical fitness and endurance running. The system equipment collects index data on body composition, lower-limb muscle strength and endurance, waist and abdominal muscle endurance, flexibility, and balance ability. The endurance running performance of all subjects was graded according to the relevant indicators into four levels: excellent, good, passing, and failing, corresponding to the numbers 4, 3, 2, and 1, respectively. The graded endurance running data are shown in Figure 9.
The Pearson correlation coefficient was calculated between all test results and endurance running results, and the correlation analysis results are shown in Table 6.
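As a sketch of this step (the arrays are illustrative, not the study's data), the Pearson coefficient between a fitness indicator and the endurance running grade can be computed as follows:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum())

# Illustrative data: sit-up counts vs. endurance grades (4=excellent ... 1=failing).
situps = [42, 35, 28, 50, 31, 45, 22, 38]
grades = [4, 3, 2, 4, 2, 3, 1, 3]
print(round(pearson_r(situps, grades), 2))
```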
It can be seen from Table 6 that BMI and VO2max are significantly correlated with endurance running performance, and that the height and number of jumps in place and the number of sit-ups are also significantly correlated with endurance running performance. Therefore, it can be concluded that if the tester's BMI is in the range from thin to overweight, BMI and endurance running performance are positively correlated, indicating that the effect of body shape on endurance running performance is significant, although the two do not belong entirely to the same category. There was a significant negative correlation between VO2max and endurance running performance, which is consistent with the findings in the literature. Owing to differences in personal physique, psychological state, skills, and the characteristics of the race schedule, the relationship between the two is not strongly linear. The maximum height of the vertical jump in place measures the explosive power of human muscles and has a nonlinear negative correlation with endurance running performance; explosive power plays a lesser role in long-distance running. It can be seen that the effective number of vertical jumps in place and the number of completed sit-ups comprehensively reflect muscle strength and endurance, are significantly correlated with endurance running performance, and show a strong linear correlation.
A bivariate correlation analysis of the influencing factors of endurance running found that the number of vertical jumps in place and the number of completed sit-ups were significantly correlated (r = 0.55, P < 0.01), so the two are not statistically independent. From a physiological point of view, the muscles of the waist and abdomen and the muscles of the lower limbs work in coordination in many movements. However, the measurement actions and indicators selected in this paper cannot accurately distinguish lower-limb muscle fitness from core muscle fitness. Therefore, this paper keeps only one of the two, choosing the number of sit-ups, which is more strongly related to endurance running, as the test index of muscle strength and endurance. To sum up, for healthy middle school students, the index system constructed in this paper includes body mass index (BMI), maximum oxygen uptake (VO2max), jump height in place, and number of sit-ups completed.
Conclusions
The level of endurance running in young people is closely related to cardiorespiratory fitness, muscle strength, and endurance levels. At present, the physical decline of middle school students is a prominent problem, and the performance standards of endurance running events have been continuously lowered, reflecting that the endurance quality of today's youth is not optimistic. However, endurance running training is monotonous, relatively intense, and relatively risky, its training effect is not significant, and performance is difficult to improve. Based on physical fitness theory, starting from the physical fitness detection indicators required for endurance running and referring to related index systems, this paper initially establishes the physical fitness element indicators required for endurance running and verifies their scientific validity through correlation analysis. In addition, a detection model is constructed; comparison with conventional detection methods shows that the data results of the two are consistent, and the detection model in this paper is suitable for daily testing.
Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"year": 2022,
"sha1": "cef31be6e481781b9bebd40878aeda7d0fca39d3",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/itees/2022/1994263.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2028565d466208f715f564afb508ca38941e5d7a",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": []
} |
Comparison of volume transport in the Halmahera Sea between La Nina 2011 and El Nino 2015 events based on numerical model
Variations of volume transport in the Halmahera Sea are strongly influenced by the El Nino Southern Oscillation (ENSO). Based on the Southern Oscillation Index (SOI), a La Nina event took place in 2011 with a strength of 3.02, while an El Nino occurred in 2015 with a strength of -2.6. This paper discusses the variation of volume transport caused by the ENSO phenomenon based on results from the Regional Ocean Modeling System (ROMS). Across the Halmahera Sea at a latitude of 0.3°S, over a section 67 km wide and extending down to 200 m depth, net volume transport always moves southward. The largest volume transport during La Nina 2011 occurred in September-October, at -8.9 Sv, whereas during El Nino 2015 the largest volume transport occurred in July-August, at -4.9 Sv. The cross-correlation coefficients between volume transport and SOI in 2011 and 2015 were r = 0.55 and r = 0.61, respectively, indicating a strong relationship.
Introduction
Indonesia is adjacent to two large oceans, the Pacific Ocean to the north and northeast and the Indian Ocean to the south and southwest, and its seas form the connecting link between the two, known as the Indonesian Throughflow (ITF). The water mass flow occurs as a result of the pressure difference between the two oceans. The water carried by the ITF comes from the northern and southern Pacific Ocean: the Makassar Strait and Flores Sea are influenced more by North Pacific water masses, while the Seram Sea and Halmahera Sea are influenced more by South Pacific water masses [1]. According to the theory of Clarke and Liu [2], the volume transport of the throughflow is expected to vary during the El Nino Southern Oscillation (ENSO) cycle, with larger-than-normal transport during the La Nina (cold) phase, when strong easterlies along the equatorial Pacific build up high sea level in the western Pacific. The pressure difference from the Pacific to the Indian Ocean is the driving force of the throughflow [1], so it seems intuitively reasonable to expect larger transport at this time. In this theory, the Pacific sea level is transmitted to the north-western coast of Australia and influences the throughflow through geostrophy. However, it is not yet possible to estimate the amount of water mass carried through the eastern path, via the Halmahera Sea, because of other water mass inputs along this path. Apart from measurements, transport through the Halmahera Sea has also been estimated from modelling analysis, e.g., by Morey et al. [3] using ROMS (Regional Ocean Modeling System). In the present work, the volume transport within the thermocline and intermediate layers was largest in September-October of 2011 (La Nina event), at -8.9 Sv, while in 2015 (El Nino event) the largest volume transport occurred in July-August, at -4.9 Sv.
The Halmahera Sea is the first passage of the ITF's eastern route before the flow enters the Seram Sea and Maluku Sea, so the volume transport in the Halmahera Sea affects the total volume transport of water mass through Indonesian waters. Therefore, in this study the influence of ENSO on the variability of volume transport in the Halmahera Sea is assessed using the ROMS model. The purpose of this study is to see the direct effect of ENSO on changes in volume transport within the thermocline and intermediate layers of the Halmahera Sea.
Tides
Tidal forcing is applied to ROMS, represented by the eight tidal constituents M2, S2, N2, K2, K1, O1, P1, and Q1. They were imposed on the lateral open boundaries with constituents obtained from the TPXO8-atlas global ocean tide model based on TOPEX/Poseidon. TPXO8-atlas is the current version of the tide model, providing complex amplitudes of the harmonic constituents of earth-relative sea-surface elevation on a 1/4-degree-resolution global grid, obtained in a least-squares sense from the Laplace Tidal Equations and along-track-averaged data from TOPEX/Poseidon and Jason (on TOPEX/Poseidon tracks since 2002) [4,5]. For model verification, we used sea-surface elevation field observations from a Compact TD instrument and compared them with the model results for 2017. We verified the model elevation at latitude 00°13'26.8"S and longitude 117°25'11.2"E from January 29 to February 19, 2017.
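For intuition, the elevation imposed at the boundaries from such constituents is a sum of harmonics; a minimal sketch with made-up amplitudes and phases (the real values come from the TPXO8-atlas grids) might look like this:

```python
import numpy as np

# Angular speeds of the eight constituents (degrees per hour; standard values).
SPEED_DEG_PER_HR = {"M2": 28.984, "S2": 30.000, "N2": 28.440, "K2": 30.082,
                    "K1": 15.041, "O1": 13.943, "P1": 14.959, "Q1": 13.399}

def tidal_elevation(t_hours, amplitudes_m, phases_deg):
    """Sea-surface elevation (m) as a sum of harmonic constituents:
    eta(t) = sum_i A_i * cos(omega_i * t - phi_i)."""
    t = np.asarray(t_hours, float)
    eta = np.zeros_like(t)
    for name, omega in SPEED_DEG_PER_HR.items():
        eta += amplitudes_m[name] * np.cos(np.radians(omega * t - phases_deg[name]))
    return eta

# Made-up boundary-point amplitudes (m) and phases (deg), for illustration only.
amps = dict(zip(SPEED_DEG_PER_HR, [0.6, 0.3, 0.12, 0.08, 0.25, 0.15, 0.08, 0.03]))
phis = dict(zip(SPEED_DEG_PER_HR, [40, 70, 30, 75, 120, 100, 118, 95]))
t = np.arange(0, 48, 0.5)  # two days, half-hourly
print(tidal_elevation(t, amps, phis)[:4])
```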
Temperature, Salinity, and Current
Global reanalysis and assimilation data sets from the Hybrid Coordinate Ocean Model (HYCOM) and the Navy Coupled Ocean Data Assimilation (NCODA, global 1/12°), namely sea water salinity, sea water potential temperature, sea water velocity (u and v), and sea surface elevation, were used as initial conditions for the model (http://hycom.coaps.fsu.edu/thredds/catalog.html). During the simulation, the interior temperature and salinity were nudged to the HYCOM tracer fields with a time scale of 1 day for each year-long simulation.
Atmospheric Forcing
Atmospheric forcing such as surface wind, air pressure, air temperature, air humidity, net freshwater flux, rainfall rate, net long-wave radiation flux, and solar short-wave radiation flux was extracted from the European Centre for Medium-Range Weather Forecasts (ECMWF) (http://apps.ecmwf.int/datasets/) and applied to the model for each scenario. Surface wind is the wind velocity 10 meters above the sea surface and is used as the generating force for the surface current circulation. Surface wind was imposed every 3 hours over each year-long simulation. Atmospheric forcing was computed internally by ROMS using a bulk-flux formulation, and turbulent fluxes of momentum, heat, and moisture were computed using Monin-Obukhov similarity theory [6]. This atmospheric forcing was imposed on the model at 3-hour time steps. Figure 1 shows the bathymetry used for the model area. It was extracted from the global topography data fusion of NASA Shuttle Radar Topography Mission (SRTM) land topography [7] with measured and estimated seafloor topography (SRTM15_PLUS) (ftp://topex.ucsd.edu/pub/srtm15_plus/). These data are corrected with sounding [8] and gravity data [9] and modified from the SRTM30 product distributed by the USGS EROS data center. The grid resolution is 30 arc-seconds, which is roughly one kilometer. Land data are based on the 1-km averages of topography derived from the USGS SRTM30 gridded DEM product created with data from the NASA Shuttle Radar Topography Mission. The ocean data are the same as SRTM30_PLUS but required more extensive editing to remove bad points, mostly along the edges of the swath data. For the model, the minimum depth was set to -5 m and the deepest depth is -2734 m.
Methods
As part of a longer-term research goal, this study focuses on applying the ROMS model to determine the characteristics of volume transport in the Halmahera Sea. We forced the model with ECMWF data and used HYCOM data as initial conditions. To determine the ENSO effect in the Halmahera Sea, we compiled two different scenarios representing the La Nina year 2011 and the El Nino year 2015. The correlation coefficient between two variables is used as a measure of the closeness of the linear relationship between them; in this study, the linear relationship in question is between the volume transport in the Halmahera Sea and the Southern Oscillation Index (SOI). The correlation coefficient lies in the range -1 < Rxy < 1: if Rxy = 1, the variables x and y are perfectly positively correlated and all possible (x, y) pairs lie on a straight line with positive slope in the xy plane; if Rxy = 0, the two variables are said to be uncorrelated, meaning they are not linearly related to each other; and if Rxy = -1, the two variables are perfectly negatively correlated and their values lie on a straight line in the xy plane with negative slope [11]. The SOI measures the difference in surface air pressure between Tahiti and Darwin. The index is best represented by monthly (or longer) averages, as daily or weekly SOI values can fluctuate markedly due to short-lived, day-to-day weather patterns, particularly if a tropical cyclone is present. We compared the model results for 2017 with field observation data and applied the same settings to the model scenarios. To quantify the agreement between model and observation data, we used the Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE), given by equations (1) and (2):

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}, \qquad (1)$$

$$\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|, \qquad (2)$$

where $y_i$ are the observed values, $\hat{y}_i$ the modeled values, and $n$ the number of samples.
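A minimal sketch of these two metrics (the arrays are placeholders standing in for the observed and modeled elevation series):

```python
import numpy as np

def rmse(obs, mod):
    """Root mean square error between observed and modeled series."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return np.sqrt(np.mean((obs - mod) ** 2))

def mape(obs, mod):
    """Mean absolute percentage error (%); observations must be nonzero."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return 100.0 * np.mean(np.abs((obs - mod) / obs))

# Placeholder series standing in for tide-gauge and ROMS elevations (m).
observed = np.array([1.10, 0.85, 0.40, -0.20, -0.75, -0.95])
modeled  = np.array([1.05, 0.90, 0.35, -0.25, -0.70, -1.00])
print(rmse(observed, modeled), mape(observed, modeled))
```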
Hydrodynamic Model ROMS
The simulation of hydrodynamic processes in the domain area was conducted using the numerical Regional Ocean Modeling System (ROMS). This model is quite popular among modelers, scientists, and researchers studying coastal applications and was developed from the Princeton Ocean Model (POM). Many publications describe this model's capability, especially in regional ocean domains. ROMS is a three-dimensional, free-surface, terrain-following numerical model that solves finite-difference approximations of the Reynolds-averaged Navier-Stokes (RANS) equations using the hydrostatic and Boussinesq assumptions with a split-explicit time-stepping algorithm [13,14,15]. It uses a horizontal curvilinear Arakawa "C" grid and vertically stretched terrain-following coordinates [15]. The model can also be configured for the user's application, with several choices of advection scheme, pressure-gradient algorithm, turbulence closure, and boundary conditions. The governing equations used in ROMS are presented in flux form in Cartesian horizontal coordinates and sigma vertical coordinates. The momentum equations in the x- and y-directions (equations 3 and 4) are

$$\frac{\partial u}{\partial t} + \vec{v}\cdot\nabla u - fv = -\frac{\partial \phi}{\partial x} + \mathcal{F}_u + \mathcal{D}_u, \qquad (3)$$

$$\frac{\partial v}{\partial t} + \vec{v}\cdot\nabla v + fu = -\frac{\partial \phi}{\partial y} + \mathcal{F}_v + \mathcal{D}_v, \qquad (4)$$

with the continuity equation

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0,$$

and scalar transport

$$\frac{\partial C}{\partial t} + \vec{v}\cdot\nabla C = \mathcal{F}_C + \mathcal{D}_C.$$

These equations are closed by parameterizing the Reynolds stresses and turbulent tracer fluxes as

$$\overline{u'w'} = -K_M \frac{\partial u}{\partial z}, \qquad \overline{v'w'} = -K_M \frac{\partial v}{\partial z}, \qquad \overline{C'w'} = -K_H \frac{\partial C}{\partial z},$$

where KM is the eddy viscosity for momentum and KH is the eddy diffusivity. Eddy viscosities and eddy diffusivities are calculated using one of five options for turbulence-closure models in ROMS: (i) Brunt-Väisälä frequency mixing, in which mixing is based on the stability frequency; (ii) a user-provided analytical expression, such as a constant or parabolic shape; (iii) the K-profile parameterization [16], expanded to include both surface and bottom boundary layers [17]; (iv) the Mellor-Yamada level 2.5 (MY2.5) method [18]; and (v) the generic length-scale (GLS) method [19] as implemented in [20], which also includes the option of surface fluxes of turbulent kinetic energy due to wave breaking. In this study, we applied option (v) to calculate the eddy viscosities and eddy diffusivities.

Zonal currents reach their minimum eastward velocity in June and their maximum westward velocity in November, whereas the meridional current component reaches its southward minimum in April and its northward maximum in March [28]. The model results also agree with previous studies in which the surface zonal currents point north in March and south in April. Northward currents near the bottom of the Halmahera Sea indicate water mass moving toward the Pacific Ocean, while southward currents indicate inflow from the Pacific Ocean through the Halmahera Sea into Indonesian waters [27].

Ocean pattern circulation in Halmahera Sea

The main driving force of the ITF flow in the upper 200 m layer is the strong sea-level pressure difference between the Pacific Ocean and the Indian Ocean, so that the ocean current flows southward throughout the year [25]. The maximum monthly average velocity reaches -0.7 m/s from the surface down to a depth of 150 m over a width of 33.3 km. A significant difference in volume transport occurred between the two conditions: under La Nina conditions, low pressure in the western Pacific and intense trade winds strengthen the surface currents, and the ITF therefore increases.
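The transport figures quoted in Sverdrups follow from integrating the velocity normal to the section over the section area; a minimal sketch (the grid spacing and velocity field are placeholders, not the model's actual grid) could look like this:

```python
import numpy as np

SV = 1.0e6  # 1 Sverdrup = 10^6 m^3/s

def volume_transport_sv(v, dx, dz):
    """Integrate the velocity normal to a section over its area.

    v: 2-D array (depth levels x horizontal cells) of velocity (m/s)
       normal to the section; negative = southward, as in the paper.
    dx, dz: cell width (m) and layer thickness (m).
    """
    return float(np.sum(v) * dx * dz / SV)

# Placeholder section: 20 levels x 10 m thick (200 m), 10 cells x 6.7 km (67 km),
# uniform -0.2 m/s southward flow.
v = -0.2 * np.ones((20, 10))
print(volume_transport_sv(v, dx=6700.0, dz=10.0))  # -> about -2.7 Sv
```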
Negative transport values indicate southward flow; positive values indicate northward flow. In addition to the currents directed south, currents directed north were also found, for which the model results show a maximum of 0.5 m/s. Previous research conducted in the Halmahera Sea [27], using mooring current-meter measurements at depths of 400 m, 700 m and 900 m, revealed that the currents at each depth have different velocities and directions. However, current changes in the Halmahera Sea were dominantly controlled by the NGCC and NGCUC. The NGCC (New Guinea Coastal Current) and NGCUC (New Guinea Under Current) flow along the New Guinea (Papua) coast and are part of the South Equatorial Current. The NGCC flows northwestward, while the NGCUC flows northwestward and then turns eastward at Halmahera Island, joining the Mindanao Current and flowing eastward as the North Equatorial Counter Current. The cyclonic Mindanao Eddy (ME) and anticyclonic Halmahera Eddy (HE) are found at the confluence region of the Mindanao Current and the NGCUC [26]. The NGCC is a surface current driven by seasonal influences [29]. In the boreal summer, characterized by the southeasterly monsoon, westward currents of over 60 cm/s were dominant in the surface layer. In the boreal winter, an eastward surface current of up to 100 cm/s developed, extending down to 100 m depth in response to the northwesterly monsoonal winds. During the Southeast Monsoon, the NGCUC flows strongly northwestward [30].

Transport volume also varies with the changing seasons. In January-March (South-West monsoon) of the La Nina year (2011), the transport volume decreased from -6.1 Sv to -4.9 Sv, while in May-September (North-east monsoon) of the same La Nina year it increased from -5.3 Sv to -8.9 Sv. In January-March (South-West monsoon) of the El Nino year (2015), the transport volume likewise decreased, from -2.4 Sv to -2.1 Sv, and in May-September (North-east monsoon) of the El Nino year (2015) it increased from -3.5 Sv to -4.7 Sv (see Table 1). From November to March (South-West monsoon), the equatorial currents in the Indian Ocean flowed strongly and supplied water masses to the regions southwest of Sumatra and south of Java-Sumbawa, which are the ITF outflow areas, thus raising sea levels there. As a result, the pressure gradient from the Pacific Ocean to the Indian Ocean becomes smaller and the ITF transport becomes minimal [22]. In contrast, from May to September (North-east monsoon), the currents in the Indian Ocean are replaced by the South Equatorial Current spreading northwards, pushing water masses away from the eastern Indian Ocean. The resulting low sea level in this region compared to the Pacific Ocean produces the maximum ITF transport [22]. The maximum transport volume occurred in the La Nina year (2011) in September-October, during the North-east monsoon, and the weakest occurred in the El Nino year (2015) in March, during the South-West monsoon.

Volume transport cross-correlation and SOI

Figure 10 shows a strong relation between volume transport and the SOI: the correlation value is 0.5513 in La Nina and 0.6174 in El Nino. This reflects the transport volume entering the ITF waters due to changes in surface height in the Indian Ocean and the Pacific Ocean.
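Correlation values of this kind can be reproduced with a short zero-lag computation; the sketch below is illustrative only, with placeholder monthly series standing in for the model transport output and the SOI record, which are not included here:

    import numpy as np

    # Placeholder monthly series (NOT the paper's data): volume transport through the
    # Halmahera Sea (Sv, negative = southward) and the Southern Oscillation Index (SOI).
    transport = np.array([-6.1, -5.8, -4.9, -5.1, -5.3, -6.4, -7.2, -8.1, -8.9, -8.3, -7.0, -6.2])
    soi = np.array([19.9, 22.3, 21.4, 25.1, 2.1, 0.2, 10.7, 2.1, 11.7, 7.3, 13.8, 23.0])

    # Pearson correlation: np.corrcoef returns the 2x2 correlation matrix;
    # the off-diagonal entry is the correlation coefficient.
    r = np.corrcoef(np.abs(transport), soi)[0, 1]
    print(f"correlation between |transport| and SOI: {r:.4f}")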
The oceanic response to wind forcing is often accomplished through wave processes that propagate along the equatorial and coastal wave guides within the Indonesian Archipelago and impact the water properties, thermocline and sea level on all timescales. Equatorial winds produced free equatorial Rossby waves whose signals reached the study region. Equatorial Pacific Rossby waves excited coastally trapped waves off the western tip of New Guinea that propagated poleward along the Arafura/Australian shelf break. Pacific energy also radiated westward into the southeast Indian Ocean via the Banda Sea [23,24].
Conclusion
The current circulation pattern in the Halmahera Sea is similar throughout the year, directed towards the south, but the currents strengthen and weaken along with the strengthening of the trade winds, the monsoon, and the differences in water level between the Pacific Ocean and the Indian Ocean. | 2020-12-24T09:12:14.175Z | 2020-12-22T00:00:00.000 | {
"year": 2020,
"sha1": "6c5308a61f4c6424a96e3da5684920c58e774a6a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/618/1/012019",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "322320aba65bf098a41dc4f9cfa70a3b49127cc4",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
247053224 | pes2o/s2orc | v3-fos-license | Estimation of the Maternal Investment of Sea Turtles by Automatic Identification of Nesting Behavior and Number of Eggs Laid from a Tri-Axial Accelerometer
Simple Summary During the reproduction period, female sea turtles come ashore several times to lay their eggs on the beach. Monitoring of the nesting populations is therefore important to estimate the state of a population and its future. However, measuring the clutch size and frequency of sea turtles is tedious work that requires rigorous monitoring of the nesting site throughout the breeding season. In order to support the fieldwork, we propose an automatic method to remotely record the behavior of sea turtles on land from an animal-attached sensor: an accelerometer. The proposed method estimates, with an accuracy of 95%, the behaviors of sea turtles on land and the number of eggs laid. This automatic method should therefore help researchers monitor nesting sea turtle populations and contribute to improving global knowledge on the demographic status of these threatened species.

Abstract Monitoring reproductive outputs of sea turtles is difficult, as it requires a large number of observers patrolling extended beaches every night throughout the breeding season, with the risk of missing nesting individuals. We introduce the first automatic method to remotely record the reproductive outputs of green turtles (Chelonia mydas) using accelerometers. First, we trained a fully convolutional neural network, the V-net, to automatically identify the six behaviors shown during nesting. With an accuracy of 0.95, the V-net succeeded in detecting the Egg laying process with a precision of 0.97. Then, we estimated the number of laid eggs from the predicted Egg laying sequence and obtained the outputs with a mean relative error of 7% compared to the observed numbers in the field. Based on the deployment of non-invasive and miniature loggers, the proposed method should help researchers monitor nesting sea turtle populations. Furthermore, its use can be coupled with the deployment of accelerometers at sea during the intra-nesting period, from which behaviors can also be estimated. Knowledge of the behavior of sea turtles on land and at sea during the entire reproduction period is essential to improve our knowledge of these threatened species.
Introduction
Estimation of parental investment in sea turtles relies primarily on the measurement of the reproductive outcomes of females. Without parental care, female sea turtles favor energy investment in pre-ovipositional allocations and lay several nests of 50 to 130 eggs per breeding season, depending on the species [1]. Inter- and intra-individual variations in the number of clutches and of eggs laid during a breeding season have been observed within populations, suggesting variation in the energy invested in the offspring [2][3][4]. Therefore, measuring clutch size (i.e., number of eggs laid) and clutch frequency (i.e., number of clutches per breeding individual) can be used as indicators of maternal investment in sea turtles. However, both of these parameters are difficult to obtain through long-term population monitoring.
Measuring the clutch size and frequency of sea turtles is tedious work that requires rigorous monitoring of the nesting sites throughout the breeding season. The most common method is based on a capture-mark-recapture design: patrols of at least eight hours are carried out every night to survey the nesting sites and identify every female that comes ashore, using a Passive Integrated Transponder (PIT) tag or a unique numbered flipper tag [5][6][7][8]. However, this method requires a considerable number of observers performing long continuous patrols to cover the entire beach and ensure that no individuals are missed, and thus involves substantial logistics and high costs. Moreover, since it is difficult not to miss any sea turtle, the observed number of clutches deposited by sea turtles is generally lower than the real number [5,9,10]. The number of eggs laid is even more complicated to obtain, as it requires observers to stay with one turtle for almost the entire nesting process, counting the deposited eggs [11]. Finally, the capture-mark-recapture monitoring method is impractical for a large population or an extensive area. Therefore, there is a crucial need to develop an efficient method to remotely record the reproductive outcomes of sea turtles in order to support the intense monitoring effort of field observation.
Few studies have explored the use of new technologies to record the reproductive outcomes of nesting sea turtle populations. For example, Blanco et al. [12] used ultrasonography of females' ovaries to visualize their reproductive stage. Ultrasound scans allowed them to identify the remaining number of clutches of every scanned female and thus obtain a more accurate clutch-frequency estimation. However, it was not possible to estimate the number of eggs laid with this method, and night patrols were still required [12]. In addition, ultrasonography requires direct and repeated interference with the turtles, which may disturb the animals and affect the nesting process, while being difficult to apply over large geographic areas. Another way to estimate the clutch frequency of sea turtles relies on the deployment of animal-attached tags throughout the breeding season [8,13,14]. Accordingly, Weber et al. [8] tested a combination of Very High Frequency (VHF) radio-telemetry and Argos-linked Fastloc Global Positioning System (GPS) tags. Although VHF transmitters are low cost, they still required direct observations of the females and were ineffective at distances > 1 km. On the other hand, GPS tags allowed remote monitoring and were accurate enough to locate individuals on the beach. However, a location on the beach does not guarantee successful nesting, given the possible abortion of nesting without laying eggs and the large number of U-turns (also known as false crawls) undertaken by sea turtles, especially green turtles (Chevallier, personal observation) [10,15]. In addition, the high cost of Argos-linked Fastloc GPS tags limits their use and the number of equipped females [8].
The accelerometer is a low-cost miniature sensor that can provide high-frequency information about the body movements and postures of the animals to which it is attached. It measures static and dynamic acceleration and enables researchers to remotely deduce the behaviors of animals that are difficult to observe. Over the past few years, there has been an explosion of its use on both terrestrial and marine species [16] for which direct observations are impracticable. A few studies have monitored the underwater behavior of sea turtles with accelerometers [17][18][19][20], but interpretation remains difficult without rigorous validation, which limits their use on these species [21,22]. Only one study refers to the identification of the nesting behavior of sea turtles from an accelerometer [23], although visual validation of acceleration signals is easier to achieve on land than at sea. Such a method could be complementary to lighter population monitoring by indicating when and how many times an equipped sea turtle has come to nest on the beach throughout the breeding period.
The aim of this experimental study is to evaluate the extent to which the accelerometer can remotely measure reproductive output of sea turtles. First, we deployed accelerometers on 14 nesting green turtles and visually monitored their behavior simultaneously. Next, we used this dataset to validate the identification of their nesting behavior from acceleration signals and train a powerful supervised learning algorithm to perform it automatically.
For this purpose, we tested a fully convolutional neural network that had already proven effective in automatically identifying the underwater behavior of green turtles [24]. Finally, we tested whether it is possible to estimate the clutch size from the acceleration signal.
Data Collection
The field work was carried out in April 2019 at Awala-Yalimapo beach (5.7° N, 53.9° W), French Guiana, South America. We deployed CATS (Customized Animal Tracking Solutions, Oberstdorf, Germany) devices including tri-axial accelerometers on 14 free-ranging adult female green turtles during the nesting process. The acceleration was recorded at a frequency of 20 Hz for the three body axes of the sea turtle (AccX: back-to-front axis, AccY: left-to-right axis and AccZ: bottom-to-top axis). The devices were fixed to the turtle's carapace by four suction cups, allowing us to operate rapidly with minimum disturbance. It took less than a minute to attach the device. In most cases, we spotted the turtle going up the beach and waited for its first sand-sweeping to start (see Section 2.2 for further description of sand-sweeping and other nesting behaviors). If the turtle did not seem stressed and was not surrounded by a group of humans (an additional source of stress), we quickly set the device on the front of the carapace during this step. Otherwise, we waited until the turtle began digging or even laying its eggs. For the 14 turtles (Table 1), and during the laying process, we checked, using a manual reader (GR250, TROVAN®, Douglas, Isle of Man, British Isles), the presence of a Passive Integrated Transponder (PIT) or injected a new one into the right triceps of the unknown turtles. We measured their Curved Carapace Length (CCL) and Curved Carapace Width (CCW) as described in Bonola et al. [25]. In parallel, the behaviors were visually monitored by an assigned person who recorded the corresponding execution times on a voice recorder. For eight nesting green turtles, for whom a good view of the eggs allowed it, an observer counted the exact number of eggs laid per contraction and dictated it to a second person, who recorded it with the exact observation time on a voice recorder. The position of a few of the turtles did not allow us to record the number of eggs without disturbing them, so for them we did not count the laid eggs.
Labelling of Nesting Behaviors
The nesting behaviors of the sea turtle are similar between species, and the different phases and action patterns were precisely described in several ethograms [26][27][28][29]. In this study, we focused on the action patterns that resulted in different acceleration signals and thus identified five behaviors: Sand-sweeping, Digging, Egg laying, Covering, and Walking. Based on the definitions and characteristics given by Lindborg et al. [28], Sand-sweeping corresponds to the "Body Pitting" and "Camouflaging" phases described in their article, since both behaviors encompass the same movements; Digging includes the "Transition period"; and Walking represents all the forward movements, as described in the "Ascent" phase in their article. We synchronized the observation time of the behaviors with the acceleration data and visualized them using the rblt package ([30], Figure 1). Throughout the nesting process, green turtles expressed numerous latency periods inter-cutting the behaviors, with easily noticeable flat acceleration signals. Therefore, we labelled them from the visualisation of the acceleration signal with an additional behavior: Motionless (Figure 1).
Figure 1. Acceleration signals corresponding to the five behavioral categories of nesting green turtle: Digging (A); Covering (B); Sand-sweeping (C); Walking (D); and Egg laying (E). We also represent the X-axis of the acceleration of Egg laying. AccX corresponds to acceleration of the back-to-front body axis, AccY to the left-to-right axis and AccZ to the bottom-to-top axis.
Automatic Behavioral Identification through Deep Learning
In order to automatically identify the six nesting behaviors from the accelerometer, we trained a fully convolutional neural network: a V-net. The latter was originally developed by Milletari et al. [31] for biomedical 3D image segmentation, and an adapted version for behavioral identification from time-series data was tested on underwater free-ranging green turtles and revealed to be efficient [24]. A precise description of the algorithm as well as the processing steps are detailed in Jeantet et al. [24]. Before training the algorithm, we reduced the noise of the acceleration signals on the three axes (AccX, AccY, and AccZ) with a low-pass Butterworth filter at 2 Hz and computed the Dynamic Body Acceleration (DBA) from the smoothed signals as described in Jeantet et al. [22]. We randomly split the 14 green turtles into three distinct groups to form the training/validation/testing datasets. First, fed with the four previously described descriptors (the smoothed AccX, AccY, AccZ and DBA), the V-net was trained and tuned on eight randomly chosen green turtles and validated on three other individuals. We balanced the behavioral labels in the data batch through a biased random draw of the windows. In particular, we chose a lower probability of randomly drawing Motionless, which is the most frequent behavior. The training and tuning process allowed us to set the hyper-parameters of the algorithm (depth = 12, window size = 40, batch = 200 and learning rate = 0.01) and revealed some important confusion between Egg laying and Motionless. Further tests on the effect of each feature suggested that this confusion was mainly induced by AccZ (it adds non-informative noise). Thus, we removed it and finally trained the neural network with three descriptors: AccX, AccY and DBA. Finally, we tested the model on three green turtles never seen by the model before, computing the confusion matrix, the global accuracy, and the Recall and Precision indicators for each of the behaviors as in Jeantet et al. [24].
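As an illustration of the preprocessing described above, the sketch below shows one plausible implementation of the 2 Hz low-pass filtering and of a simple DBA computation; the function names are ours, and the exact DBA formulation of Jeantet et al. [22] may differ in its details (e.g., the static-acceleration window length is an assumption):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def preprocess(acc_xyz, fs=20.0, cutoff=2.0, order=4):
        """Smooth tri-axial acceleration (n x 3 array sampled at fs Hz) with a
        low-pass Butterworth filter and derive a simple DBA signal."""
        b, a = butter(order, cutoff / (fs / 2.0), btype="low")
        smoothed = filtfilt(b, a, acc_xyz, axis=0)      # noise-reduced AccX/AccY/AccZ
        # Approximate the static (gravity) component with a 2 s running mean,
        # then sum the absolute dynamic residuals over the three axes.
        win = int(2 * fs)
        kernel = np.ones(win) / win
        static = np.column_stack(
            [np.convolve(smoothed[:, i], kernel, mode="same") for i in range(3)]
        )
        dba = np.sum(np.abs(smoothed - static), axis=1)
        return smoothed, dba                            # descriptors fed to the V-net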
Estimation of Laid Eggs
Once the V-net has predicted the six behavioral categories, it becomes possible to automatically extract the predicted Egg laying stage and to estimate the number of laid eggs. The laying process is associated with a very slight back and forth movement of the sea turtle's body, which can be visualized on the X-axis of the accelerometer. Its visualization, synchronized with the observed number of laid eggs in the field, suggested that a peak on the X-axis acceleration signal corresponded to a contraction (Figure 2). Thus, the number of eggs, related to the number of contractions, was estimated by detecting the number of peaks expressed on the X-axis acceleration signal. Some contractions expressed by the green turtles may be associated with the absence of egg deposition, but they were in the minority and occurred mostly at the end of the egg laying process. Due to their low number, we did not consider these contractions. The hypothesis that the number of eggs laid during one contraction depends on the intensity of that contraction, and thus of the associated peak, was also considered, though it was not conclusive (Figure 2).
Cutting off the Egg Laying Period
To automatically extract the accurate Egg laying part from the V-net predictions, we first discarded the false positive identifications, which generally corresponded to very short sequences distributed in the nesting sequence. For this purpose, we performed the following algorithm, with each step depicted in Figure 3 (see the sketch after this list):
1. Binarize the behavior sequence: label "1" is assigned to the behavior Egg laying while all the others are labelled "0" (Figure 3a);
2. Perform a convolution of the binarized sequence with a Gaussian mask whose standard deviation is empirically chosen. The convolved signal is represented in blue as the 'Smoothed density' (Figure 3b);
3. Choose a minimal threshold (threshold = 0.7), and extract the acceleration values associated with the part of the convolved signal that is greater than it (Figure 3b).
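A minimal sketch of this three-step extraction, assuming a per-sample array of predicted labels (the Gaussian width below is illustrative only; the paper chose the standard deviation empirically):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def extract_egg_laying(pred_labels, egg_label="Egg laying", sigma=200, threshold=0.7):
        """Return a boolean mask of the samples kept as the Egg laying period."""
        binary = np.fromiter((lab == egg_label for lab in pred_labels), dtype=float)  # step 1
        smoothed_density = gaussian_filter1d(binary, sigma=sigma)                     # step 2
        return smoothed_density > threshold                                           # step 3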
Peak Detection
At this point, as it was concluded that the X-axis acceleration contained the largest amount of information for estimating the number of eggs laid, the following analysis was only performed on this axis. In order to improve the precision of peak detection, we first smoothed the extracted Egg laying signal using a narrow Gaussian mask. Moreover, we observed a decrease of the average values of the signal over the laying process, with lower peaks at the end, making their identification difficult compared to the higher peaks at the beginning. We corrected this by subtracting from the signal its trend, estimated by a second-degree polynomial adjusted by least-squares approximation. The data were also centered with respect to their average value inside the Egg laying category.
To estimate the number of peaks over the X-axis, assumed to be related to the number of turtle contractions, we ran over the signal a rolling window with a width approximately equal to the distance between two peaks and detected the local maximum for each window. To avoid detecting the same maximum several times, we kept the value only if it was located in the very middle of the rolling window. Finally, a threshold parameter (represented in dotted red in Figure 4) was chosen as a proportion of the median of the signal. Every local maximum found under this threshold was discarded (Figure 4).
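The following is one plausible rendering of this peak-detection step; the window width, polynomial detrending and median-based threshold follow the description above, but the numeric defaults (and the use of the median of the absolute signal) are assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def count_contractions(acc_x, window=60, sigma=5, median_factor=1.0):
        """Count peaks on the X-axis Egg laying signal as a proxy for contractions."""
        sig = gaussian_filter1d(np.asarray(acc_x, dtype=float), sigma=sigma)  # narrow Gaussian mask
        t = np.arange(sig.size)
        sig = sig - np.polyval(np.polyfit(t, sig, 2), t)   # remove 2nd-degree least-squares trend
        sig = sig - sig.mean()                             # center on the average value
        thr = median_factor * np.median(np.abs(sig))       # threshold as a proportion of the median
        half = window // 2
        peaks = [
            i for i in range(half, sig.size - half)
            if sig[i] == sig[i - half:i + half + 1].max() and sig[i] > thr
        ]
        return len(peaks)

    # Clutch size is then estimated with the mean of 1.6 eggs per contraction (see below):
    # estimated_eggs = 1.6 * count_contractions(egg_laying_acc_x)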
Estimation of the Number of Eggs
We used the estimated number of contractions to calculate the number of laid eggs. From the egg numbers per contraction recorded in the field (from one to four eggs), we calculated the mean number of eggs laid per contraction per turtle and obtained an average of 1.6 (standard deviation = 0.05). For each turtle, we multiplied the estimated number of contractions by this mean to obtain the estimated number of eggs laid. The mean number of eggs laid per contraction should be reconsidered in a larger population to improve its accuracy.
We tested the entire procedure (from the V-net identification to the estimation of number of laid eggs) on the eight green turtles distributed in the training/validation/testing dataset for which the number of laid eggs has been observed.
Results
The V-net predicted the six behaviors (Sand-sweeping, Digging, Egg laying, Covering, Walking and Motionless) with an accuracy of 95%. It correctly identified 97% of the Egg laying dots, corresponding to the highest Recall index (Figure 5, Table 2). The lower Precision index for this behavior (0.79) was due to Motionless dots being wrongly predicted as Egg laying. However, since the latter occurred only once during the nesting process and was very well identified by the V-net, the Egg laying period clearly differed from the other behaviors when visualizing the activity budget (Figure 6). The misidentifications from the V-net mainly concerned Covering and Walking, which were confused with Sand-sweeping, leading to the lowest Recall and Precision indexes for these two behaviors (Figure 5, Table 2). The visualisation of the activity budget revealed that it was mostly the end of the Covering process that was confused with Sand-sweeping (Figure 6).
The correct identification of Egg laying allowed its automatic extraction with sufficient precision to estimate the number of contractions. Thus, from the V-net predictions, we succeeded in estimating the number of eggs with a mean relative error of 7% (standard deviation = 0.06, Table 3).
Table 3. Estimations of the number of laid eggs for eight green turtles from the Egg laying period identified by the V-net and/or manually extracted from the acceleration visualization, compared to the actual numbers observed in the field.
Discussion
This study provides the first method to automatically determine the reproductive outputs of the nesting process of green turtles from animal-attached accelerometers. Using deep learning, we first identify the six behaviors expressed by the individuals (Sand-sweeping, Digging, Egg laying, Covering, Walking and Motionless) with an accuracy of 0.95 and a precise detection of the Egg laying process (Recall index: 0.97). In a second step, we estimate the number of laid eggs from the predicted Egg laying sequence and find the reproductive outputs with a mean relative error of 7%. The main aim of this method is to support field monitoring of nesting sea turtles by providing a remote method and thus reducing the monitoring effort. In the interests of improving our knowledge of sea turtles, we expect that this method will be a valuable tool for measuring maternal investment in sea turtles and understanding the parameters that influence it.
Automatic Identification of Nesting Behaviors
The V-net is a powerful algorithm that successfully identifies the six behaviors of the nesting process of the green turtles from the accelerometer with an accuracy of 0.95. Similarly, Nishizawa et al. [23] performed the same task using a Classification and Regression Tree (CART) and obtained an accuracy of 0.86 for the same behavioral categories, but without Motionless. Thus, the V-net represents a major improvement, as it does not require pre-processing (no segmentation and hand-crafted feature extraction), while having a better accuracy than the CART. Moreover, this study is the second one to use a V-net to perform behavioral identification from the acceleration signals of green turtles (at sea, [24]). The fact that we used the same architecture, and the same hyper-parameters, on similar but not identical data was a positive time saver, which is also promising for future work using loggers.
The main confusion from the V-net concerns Covering and Sand-sweeping. The visualisation of the activity budget shows that this misclassification appears between the end of Covering and the beginning of Sand-sweeping. This confusion is mainly due to the fact that nesting turtles express rear flipper sweeping movements in both stages [28]. In fact, Covering ends with rear flipper sweeps following rear kneading movements, while the following Sand-sweeping stage begins with simultaneous rear and front flipper sweeps and is characterised by sweeps of the front flippers alone at the end. Nishizawa et al. [23] also obtained the lowest Recall index for Covering. Confusions in behavioral identification from supervised learning algorithms have also been reported for other species for which different behaviors encompass similar mechanistic movements [32][33][34]. More generally, automatic behavioral identification from accelerometers is based on the animals' posture and movements and thus requires a precise definition of the behavioral categories based on these, rather than on the function or action of the behaviors. In our case, a more precise identification and annotation of the movements involved in Covering and Sand-sweeping in the field (such as 'rear flipper sweeping', 'front flipper sweeping' and 'covering') would probably improve the precision of the V-net for those two behaviors. However, the main challenge in remote monitoring of sea turtles during the breeding season is to detect the egg laying process, because marine turtles, and more markedly green turtles, come ashore several times in the same night before laying eggs [10,15]. This is why it is important to detect with certainty whether the turtle has laid eggs or not and to understand the reasons for these U-turns. Our study allowed us not only to detect the six behavioral categories of the nesting process, but also achieved a more accurate detection of the Egg laying process by the V-net (Recall index = 0.97).
After this step, the second challenge was to automatically estimate the number of eggs laid, which would thus make it possible to determine the maternal investment during one nesting season.
Automatic Identification of Number of Eggs Laid
This study is the first to propose a fully automatic method to remotely estimate the number of laid eggs from a bio-logger. The precise detection of the Egg laying process allowed us to automatically extract the associated acceleration signals and estimate the number of eggs laid, which we achieved with a mean relative error of only 7%. However, it remains difficult to identify the main causes of error, considering that the number of eggs laid was underestimated for some individuals and overestimated for others (Table 3). The parameters that may lead to over- or underestimation are the accuracy of the extraction of the associated acceleration sequence, the thresholds fixed to identify the number of peaks, and the mean number of eggs laid per contraction obtained from field observation (1.6 ± 0.05). The latter is rather constant, with an exact value between 1.57 and 1.59 for the three individuals associated with a relative error above 10%. In all cases, these estimation errors remain low, with relative errors below 15% for most individuals, and highlight the potential of this method for remote monitoring of sea turtles on land during the nesting season.
Perspective of Application
The main aim of the proposed method is, therefore, to support the field monitoring of nesting sea turtles while reducing the monitoring effort, via the remote monitoring of nesting sea turtles for the estimation of maternal investment. In particular, in French Guiana, given that we know the average number of clutches per individual per season for green turtles and the average delay between two successive nesting processes (Chevallier, personal observations), it would become possible to equip several dozen females with bio-loggers at the start of the breeding season and recover them at the estimated end of their nesting season. We would therefore go from exhaustive monitoring 7 days a week over 6 months to 30 days of patrols (5 days to equip and 25 days to recover the materials, with a large margin of error on the last return of the green turtles to avoid missing them). Although further research is needed to determine the impact of equipment attached to turtles, the miniaturization of the accelerometer allows for miniature loggers (weight less than 5 g and size 22 × 13 × 8 mm, http://www.technosmart.eu, accessed on 15 February 2022), making this long tracking feasible. Therefore, this long-term monitoring of sea turtles with bio-loggers during the whole breeding period would allow researchers to know precisely the clutch frequency, the clutch size and its variation during the breeding season for a representative part of a population, and therefore to estimate their maternal investment, while reducing the patrol time.
Furthermore, the estimation of the reproductive effort of nesting females on land is complementary to the use of the accelerometer on green turtles at sea. Indeed, the proposed method is part of a more general framework in which validation and automatic identification of the underwater behaviors of green turtles from accelerometer data have already been achieved [22,24]. It would then be possible, using accelerometers deployed over the entire breeding season, to describe the underwater behaviors expressed by green turtles during two successive nesting processes, i.e., the intra-nesting period, and to estimate the number of eggs laid on land. All this information is essential to study the cause-effect relationships between the energy strategy undertaken at sea and the maternal investment. Indeed, inter- and intra-population variations in reproductive outputs have been observed, suggesting the influence of environmental resource availability and the fitness of the individuals [2,4,35]. Whereas the clutch frequency and size are indicative of the success or failure of the individual's energetic strategy in response to the environmental conditions, the identification of the underwater behaviors enables the identification of this strategy during the inter-nesting period. Combined with environmental data (food availability, water temperature, and ocean currents), it could help researchers identify the extent to which environmental factors influence this energetic strategy and thus the maternal investment. The parallel monitoring at sea and on land could be a key parameter for understanding the adaptive capacities of marine turtles in the context of climate change.
Conclusions
This experimental study takes the first steps towards an efficient method for recording sea turtles' reproductive outputs from low-cost miniature sensors. Such an approach allows a noticeable reduction of the monitoring effort and minimizes human error.
Recovery of the bio-loggers a few weeks later can still be tedious work, but the development of satellite-relay data tags with on-board processing represents a promising alternative. Indeed, it is already possible to remotely transmit a summary of the tri-axial acceleration from satellite-relay data tags [36][37][38] and to implement the learning algorithm in the logger [39]. This next step would enable researchers to follow, remotely and almost in real time, the nesting behaviors of the equipped individuals (with the estimation of the number of eggs laid) and to relate this information to their behaviors at sea over long periods (pre-nuptial migration, breeding season, post-nuptial migration).
All of these associated technologies will allow the acquisition of knowledge that has never been obtained until now on the influence of marine environmental parameters on individuals' behavior at sea over long periods (migrations) and the consequences for their maternal investment during reproduction periods. This challenge seems very accessible in the near future.
Author Contributions: D.C. contributed conception and design of the study; D.C., L.J., F.K. and N.P. contributed to data acquisition; V.V. built the V-net architecture and adapted it to the 1D data; L.J. and V.H. performed the data acceleration analysis and applied the V-net on the sea turtle dataset; L.J., V.H. and V.V. wrote the first draft of the manuscript; and D.C., F.K. and N.P. contributed critically to subsequent versions. All authors have read and agreed to the published version of the manuscript. | 2022-02-23T16:16:00.083Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "0ecc87f15f3f617c8951398e88c2e0a543b3e1f9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/12/4/520/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "00ffc1b92e759cea7f41be30923f6472998a2d54",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13896830 | pes2o/s2orc | v3-fos-license | A Finite-State Approach to Translate SNOMED CT Terms into Basque Using Medical Prefixes and Suffixes
This paper presents a system that generates Basque equivalents for terms that describe disorders in SNOMED CT. This task has been performed using finite-state transducers and a lexicon of medical prefixes and suffixes. This lexicon is composed of English-Basque translation pairs, and it is used both for the identification of the affixes of the English term and for their translation into Basque. The translated affixes are composed using morphotactic rules. We evaluated the system against a Gold Standard, obtaining promising results (0.93 precision). This system is part of a more general system whose aim is the translation of SNOMED CT into Basque.
Introduction
SNOMED Clinical Terms (SNOMED CT) (College of American Pathologists, 1993) is considered the most comprehensive, multilingual clinical healthcare terminology in the world. It does not exist in the Basque language, and we think that the semi-automatic translation of SNOMED CT terms into Basque will help to fill the gap of this type of medical terminology in our language. Its translation serves a double objective: i) to offer a medical lexicon in Basque to bio-medical personnel to try to reinforce its use in the bio-sanitary area, and ii) to access multilingual medical resources such as the UMLS (Unified Medical Language System) (Bodenreider, 2004) in our language.
Basque is a minority language still in its standardization process and persists between two powerful languages, Spanish and French. Although today Basque holds co-official language status in the Basque Autonomous Community, for centuries it was kept out of the educational and sanitary systems, the media, and industry.
We have defined a general algorithm (see section 2) based on Natural Language Processing (NLP) resources that tries to achieve the translation with an incremental approach. The first step of the algorithm is based on the mapping of some lexical resources and has already been developed. Considering the huge size of SNOMED CT (296,000 active concepts and around 1,000,000 descriptions in the English version dated 31-01-2012), the contribution of the specialized dictionaries has been limited. In the second step, which is specified in this paper, we have used Finite State Machines (FSMs) in the form of transducers to generate one-word-terms in Basque, taking as a basis terms from the English release of SNOMED CT mentioned before. The generation is based on translation by means of medical suffixes (i.e. -dipsia, -megaly) and prefixes (i.e. episio-, aesthesi-) and on their correct composition, considering morphotactic rules. Lovis et al. (1995) stated that a big group of medical terms can be created by neologisms, that is, concatenations of existing morphosemantic units whose meaning is widely understood. These units usually have Greek and Latin origins and their meaning is known by specialists. Banay (1948) specified that about three-fourths of the medical terminology is of Greek origin.
In this work we take advantage of these features to try to translate terms from the Disorder sub-hierarchy of SNOMED CT. This corresponds to one of the 19 top-level hierarchies of SNOMED CT, the one called Clinical Finding/Disorder. In our general approach, we prioritized the translation of the most populated hierarchies: Clinical Finding/Disorder (139,643 concepts), Procedure (75,078 concepts) and Body Structure (26,960 concepts). Using lexical resources, we obtained Basque equivalents for 19.32% of the disorders. In this work we will try to obtain the one-word-terms that are not found in dictionaries.
In the rest of the paper the translation algorithm is briefly described in section 2. The use of finite state machines in order to obtain Basque equivalents is explained in section 3. Finally, some conclusions and future work are listed in section 4.
Translation of SNOMED CT
The general algorithm (see figure 1) is language-independent. It could be used to translate any term if the linguistic resources for the input and output languages are available. In the first step of the algorithm (see numbers 1-2-4 in Figure 1), some specialized dictionaries and the English, Spanish and Basque versions of the International Statistical Classification of Diseases and Related Health Problems, 10th revision (ICD-10), are used. For example, for the input term "abortus" all its Basque equivalents, "abortu", "abortatze" and "hilaurtze", are obtained.
The second phase of the algorithm is described in this paper in section 3. When a term is not found in the dictionaries (number 3 in Figure 1) generation-rules are used to create the translation.
In the case that an output is not obtained in the previous phases (number 8 in the algorithm), chunk-level generation rules are used. Our hypothesis is that some chunks of the term will be already translated. The application should generate the entire term using the translated components.
In the last step, we want to adapt a rulebased automatic translation system called Matxin (Mayor et al., 2011) to the medical domain.
We want to remark that all the processes finish in the 4th step. That is, we store the generated translations with the intention of using them to translate new terms.
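The cascaded flow of the algorithm can be summarized in pseudocode. The helper functions below are hypothetical names for the four phases of Figure 1, not the actual implementation:

    def translate_term(term, dictionaries, term_cache):
        """Illustrative cascade for translating one SNOMED CT term into Basque."""
        if term in term_cache:                                    # reuse stored translations
            return term_cache[term]
        candidates = lookup_in_dictionaries(term, dictionaries)   # phase 1: lexical resources + ICD-10
        if not candidates:
            candidates = generate_from_affixes(term)              # phase 2: FST affix rules (this paper)
        if not candidates:
            candidates = generate_from_chunks(term, term_cache)   # phase 3: chunk-level generation rules
        if not candidates:
            candidates = machine_translate(term)                  # phase 4: rule-based MT (Matxin)
        term_cache[term] = candidates                             # all processes finish by storing results
        return candidates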
Finite-State Models and Translation
This section exposes the system that obtains Basque equivalent terms from English one-word-terms based on FSMs.
Translation process
The generation of Basque equivalents is performed in two phases: first the identification of the affixes, and then the translation and composition of the affixes. All the linguistic information is stored in lexica, and 31 rules are written for the process (1 for identification, 1 for translation and 28 for morphotactics). Figure 2 shows the finite-state transducer for the identification of the affixes. The lexica of the affixes are loaded (1-6) and then any prefix (the "*" symbol indicates 0 or more occurrences) followed by one unique suffix is identified. The letter "o" may also be identified, as it is used to join medical affixes. The "+" symbol is used for splitting the term. The combination of the finite-state transducers for the translation and for the composition using morphotactics is shown in Figure 3. First, the lexica for the translation task are loaded (1-4), then 28 rules for the morphotactics are defined (simplified in the rule numbered 5). The translation rule (shown in rule number 6) is composed of the word-start mark (the "^" symbol), the prefix followed by the optional linking "o" letter zero or more times, and a single compulsory suffix; finally, the transducer combines the translation and the morphotactic finite-state transducers (7). Figure 4 shows the whole process with an example. First, we identify the prefixes and suffixes of the English input term by means of the transducer that marks those affixes (schiz+encephal+y). Then, we obtain the corresponding Basque equivalent for each part and form the term (eskiz+entzefal+ia).
Input term: schizencephaly
Identified affixes: schiz+encephal+y
Translated affixes: eskiz+entzefal+ia
Output Basque term: eskizentzefalia
As we said before, in order to obtain a well-formed Basque term, we apply different morphotactic rules. For example, in Basque, words starting with the letter "r" are not allowed, and an "e" is needed at the beginning. Figure 5 shows an example where the translated prefix "radio" requires the mentioned rule, obtaining "erradio".
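To make the two-phase process concrete, here is a plausible plain-Python re-implementation rather than the FST formalism used in the paper; the miniature lexicon and the single morphotactic rule are only samples of the 951 translation pairs and 28 rules described:

    # Tiny sample of the English->Basque affix lexicon (the real one has 826 prefixes, 143 suffixes).
    PREFIXES = {"schiz": "eskiz", "encephal": "entzefal", "radio": "radio"}
    SUFFIXES = {"y": "ia", "graphy": "grafia"}

    def identify(term):
        """Segment a term into one or more prefixes (with optional linking 'o') plus one suffix."""
        for suf in sorted(SUFFIXES, key=len, reverse=True):
            if term.endswith(suf):
                stem, parts = term[: -len(suf)], []
                while stem:
                    for pre in sorted(PREFIXES, key=len, reverse=True):
                        if stem.startswith(pre):
                            parts.append(pre)
                            stem = stem[len(pre):].lstrip("o")  # drop the linking "o"
                            break
                    else:
                        return None                             # not completely identified: skip
                return parts, suf
        return None

    def translate(term):
        segmented = identify(term)
        if segmented is None:
            return None
        parts, suf = segmented
        basque = "".join(PREFIXES[p] for p in parts) + SUFFIXES[suf]
        if basque.startswith("r"):    # morphotactic rule: Basque words cannot start with "r"
            basque = "e" + basque
        return basque

    print(translate("schizencephaly"))  # -> eskizentzefalia
    print(translate("radiography"))     # -> erradiografia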
Resources
In order to identify the English medical suffixes and prefixes we have joined two lists: the "Med-ical Prefixes, Suffixes, and Combining Forms" from Stedman's Medical Dictionary (Stedman's, 2005) and the "List of medical roots, suffixes and prefixes" from Wikipedia (Wikipedia, 2013). We obtained a list of 826 prefixes and 143 suffixes.
By means of checking the behavior of the prefixes and suffixes in the English and Basque terms we have manually deduced the appropriate Basque equivalent. Table 1 shows an example of obtaining the equivalent of the "encephal" prefix, deducing that "entzefal" is the most appropriate equivalent.
English terms        Basque terms
echoencephalogram    ekoentzefalograma
encephalitis         entzefalitis
encephalomyelitis    entzefalomielitis
leukoencephalitis    leukoentzefalitis
...                  ...
From all the prefixes and suffixes listed, we are able to deduce 812 prefixes and 139 suffixes for Basque. Those are currently being supervised by an expert to give them the highest confidence possible. This technique allows the inferring of new medical terms not appearing in dictionaries.
Results
We selected the one-word-terms of the Disorder sub-hierarchy of SNOMED CT. This sub-hierarchy, with terms representing disorders or diseases, is formed by 107,448 descriptions, 3,979 of which are one-word-terms. Even though this last quantity is low compared to the whole sub-hierarchy, we must take into account that the influence of those one-word-terms is very high, as they appear around 79,000 times among all the descriptions.
The total one-word-term set has been split into two sets, one for defining and developing the system and another one for evaluating it. The evaluation set is composed of the 885 one-word-terms that had been previously translated in the first step of the algorithm (see section 2). That is, we have the correct English-Basque pairs as a Gold Standard. For the development set we selected the remaining 3,094 one-word-terms.
As mentioned before, in this paper we show the results obtained from the translation of the medical prefixes and suffixes forming the terms. That is, we have only translated the terms that were completely identified with the medical prefixes and suffixes. For example, terms with the suffix "thorax" have not been translated, as it does not appear in the prefixes and suffixes list. That is, the "hydropneumothorax" term has not been translated even though the "hydro" and "pneumo" prefixes have been identified.
In Table 2 we show the quantities and percentages of the terms that have been completely identified in both sets. Our set of one-word-terms has not been cleaned up to remove the words without any medical affix, so the percentages in the table will never reach 100 percent. Of the 885 terms in the evaluation set, 728 terms contain at least one medical prefix or suffix, and 309 are completely identified. The results obtained in this first approach are shown in Table 3 by means of True Positives (TP), False Negatives (FN), False Positives (FP), Precision (Prec.), Recall (Rec.) and F-measure (F-M). A recall of 0.41 is obtained (287 correctly identified from 706 TP and FN) and a precision of 0.93 (287 out of 309). The recall will be increased in the future by including not completely identified terms in the system. Thus, we can conclude that the results obtained are very good concerning precision. Moreover, the quality of the results obtained is also very good: we have been able to give correct equivalents to complex terms such as "hyperprolactinemia", which has five medical prefixes and suffixes ("hyper+pro+lact+in+emia").
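For reference, the reported precision and recall follow directly from the standard definitions; the F-measure value below is derived from those two numbers, not quoted from Table 3:

\mathrm{Prec.} = \frac{TP}{TP+FP} = \frac{287}{309} \approx 0.93, \qquad \mathrm{Rec.} = \frac{TP}{TP+FN} = \frac{287}{706} \approx 0.41

\mathrm{F\text{-}M} = \frac{2\,\mathrm{Prec.}\,\mathrm{Rec.}}{\mathrm{Prec.} + \mathrm{Rec.}} \approx 0.57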
We have also analyzed the incorrect results in order to be able to improve the system. For example, the prefix "myc" has been translated as "miz", but we realized that whenever the prefix is followed by an "o", it should be "mik" in order to generate a correct Basque term. Many of the mistakes are easily rectifiable for the final purpose of translating SNOMED CT.
Conclusions and future work
We implemented an application that generates Basque terms for diseases named in English, by means of finite-state transducers. This application is one of the phases on the way to translating SNOMED CT into Basque. In order to translate the medical prefixes and suffixes, we have manually generated the translation pairs for 951 prefixes and suffixes, obtaining a very useful resource for Basque. The FSTs exposed in this paper could easily be applied to other languages, provided an affix lexicon with its translations is defined and the morphotactic rules are adapted to the target language.
As we have seen in section 3.3, most of the English terms have not been identified completely, which prevented their translation. To cope with this problem we have two development paths: the deduction of new suffixes and prefixes from specialized dictionaries (Hulden et al., 2011), and the implementation of transliteration transformations for those parts (Alegria et al., 2006).
We have only applied the transducers to the Disorder sub-hierarchy, and we will have to check the results obtained by applying them to the Finding sub-hierarchy and to the Procedure and Body Structure hierarchies. We found terms such as "electroencephalography" or "oligomenorrhea" in those hierarchies, formed with medical prefixes and suffixes already identified for this task.
The promising results obtained will contribute to the translation of the whole SNOMED CT, but also to the normalization of Basque in the biosanitary domain, as new terms are generated. | 2014-10-01T00:00:00.000Z | 2013-07-01T00:00:00.000 | {
"year": 2013,
"sha1": "d3f1cb2c2f06e94358634774762a831595861d01",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "59da1fe5e8d46e2aab94eb112312dadf4be2debf",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
218974367 | pes2o/s2orc | v3-fos-license | CTAP for Italian: Integrating Components for the Analysis of Italian into a Multilingual Linguistic Complexity Analysis Tool
Linguistic complexity research being a very actively developing field, an increasing number of text analysis tools is being created that use natural language processing techniques for the automatic extraction of quantifiable measures of linguistic complexity. While most tools are designed to analyse only one language, the CTAP open source linguistic complexity measurement tool is capable of processing multiple languages, making cross-lingual comparisons possible. Although it was originally developed for English, the architecture has been extended to support multilingual analyses. Here we present the Italian component of CTAP, describe its implementation, and compare it to the existing linguistic complexity tools for Italian. Offering general text length statistics and features for lexical, syntactic, and morpho-syntactic complexity (including measures of lexical frequency, lexical diversity, lexical and syntactical variation, and part-of-speech density), CTAP is currently the most comprehensive linguistic complexity measurement tool for Italian and the only one allowing the comparison of Italian texts to multiple other languages within one tool.
Introduction
Linguistic complexity is a core construct in Second Language Acquisition (SLA) research, where Complexity, Accuracy, and Fluency, also known as the CAF triad, are often used to characterize language performance (Housen and Kuiken, 2009). Over the last decade, a broad variety of complexity measures has been proposed to characterize language proficiency and its development, text readability, and writing quality (Vajjala and Meurers, 2012; Bulté and Housen, 2014; Crossley and McNamara, 2014; De Clercq and Housen, 2019). Many of these linguistic complexity measures are applicable across various languages. At the same time, advances in Natural Language Processing (NLP) make it possible to extract the measures automatically for a variety of languages. In the face of these advances, a series of automatic complexity analysis approaches has been presented. There are approaches for English (McNamara and Graesser, 2012; Chen and Meurers, 2016), German (Weiss and Meurers, 2019a), French (Francois and Miltsakaki, 2012), Swedish (Pilan et al., 2016), and Portuguese (Aluisio et al., 2010) containing similar, yet not completely overlapping, sets of complexity measures. With the creation of quantifiable operationalizations for aspects of linguistic complexity (e.g. lexical diversity) that can be calculated automatically for various languages, cross-lingual analyses can be envisioned.

However, analyses using the same measures for texts in various languages are still rare. While the comparability of the measures used needs to be ensured from a theoretical perspective, the extraction of linguistic complexity indices for various languages is often also technically limited. Although various language-dependent tools exist that extract linguistic complexity indices from text, they usually provide very different feature sets. Besides, they are often based on different assumptions and use different technologies that ultimately lead to different values even for the same features. Using one single tool to extract those features would, however, substantially simplify analysis workflows.

For this reason, we extended the Common Text Analysis Platform (CTAP) (Chen and Meurers, 2016) to support the analysis of Italian. We transferred the linguistic complexity measures already provided for English and German by integrating an Italian text processing pipeline, and added further features frequently used in the Italian context (e.g. the Gulpease readability measure). With these developments, the platform now supports three European languages (Italian, German, and English), with a total of 154 linguistic complexity features that can be extracted for all three of them. With 253 features, the Italian component of CTAP includes more complexity measures than the other existing tools for Italian, providing a flexible feature extraction that is not limited to specific research questions.

In this article, we describe the Italian component of CTAP. After a brief review of related complexity research in Section 2, we situate CTAP among the already existing tools for Italian linguistic complexity measurement, introducing the main aims and research focuses of the tools (Section 3). Subsequently, we present the general architecture of CTAP, explain some implementation details of its Italian component, list the linguistic complexity measures it offers, and describe the quality control mechanisms we used (Section 4).
Next, we compare the characteristics of CTAP with those of the other linguistic complexity measurement tools for Italian (Section 5), before we conclude the article, also pointing out future work (Section 6).
Linguistic Complexity Research
As a central dimension of (second) language performance, complexity has been extensively researched in the context of assessing second language proficiency and development (Crossley and McNamara, 2014; Kyle, 2016; Bulté and Housen, 2014). However, complexity measures have also been shown to be beneficial for other tasks such as readability assessment (Vajjala and Meurers, 2012; Feng et al., 2010; Chen and Meurers, 2018), first language academic writing acquisition (Crossley et al., 2011; Weiss and Meurers, 2019b), and the evaluation of teachers' grading behaviour (Vögelin et al., 2019; Weiss et al., 2019). Most of this work has focused on English, but especially in recent years, the scope of complexity research has been broadened towards other languages such as German (Weiss and Meurers, 2019a), Swedish (Pilan et al., 2016), Russian (Reynolds, 2016), French (Francois and Miltsakaki, 2012), and Italian (Brezina and Pallotti, 2019). With this broadening across languages, the types of complexity measures that are being investigated have also been extended, thus overcoming the so far often reductionist approach to complexity (cf. Housen et al., 2019), which focused nearly exclusively on lexical and syntactic complexity. Overcoming this reductionist approach has become one of the central goals in complexity research (Paquot, 2019). New complexity measures are being proposed and tested for various languages. For example, new measures of morphological complexity have been used to characterize the developmental trajectories of second language (L2) spoken French and English, and to distinguish between native and L2 speech for Italian and English (Brezina and Pallotti, 2019). Both studies find their measures of morphological complexity to be highly informative for the respective non-English languages.

However, these advances in broadening the scope of complexity research are not necessarily accompanied by efforts to make the newly proposed measures accessible to a broader audience. Researchers trying to navigate the increasing collection of complexity measures often find themselves at a loss. Few tools provide comprehensive collections of complexity measures, and these are typically language-dependent rather than facilitating complexity analyses across languages.
Linguistic Complexity Measurement Tools for Italian
In the context of Italian language research, a number of feature extraction tools have been developed over the last years, differing in their intended research aims and domains. To our knowledge, there are three tools for measuring the linguistic complexity of Italian texts apart from CTAP, introduced in this article: READ-IT (Dell'Orletta et al., 2011), Coease (Tonelli et al., 2012), and Tint 2.0 (Aprosio and Moretti, 2018). All three tools originated in computational linguistic research on text readability and text simplification. Early readability research focused on a small set of superficial text characteristics, such as word and sentence length, which could be employed in simple readability formulae; see DuBay (2004) for an overview. However, the use of linguistically more informed complexity measures has been shown to be more appropriate for modelling readability (Vajjala and Meurers, 2012; Feng et al., 2010; Chen and Meurers, 2018), and since then, complexity measures have become an important component of readability assessment research. READ-IT was designed to study text simplification approaches for readers with low literacy skills or mild cognitive impairment and has been used, for example, to analyse the readability of informed consent forms in the public health sector (Venturi et al., 2015), or to explore linguistic features of Italian fictional prose across textual genres and readability levels (Dell'Orletta et al., 2013).

By adapting CTAP to Italian, we aimed to provide a complexity feature extraction tool with a broader and more generic set of features than the three existing tools for Italian, without focusing on aggregate readability indices. Both READ-IT and Coease make extensive use of such measures and, in addition, offer an interpretation of the measures obtained for a text in terms of how they compare with a representative sample of a specific text type. CTAP does not use such reference corpora for giving interpretations but allows researchers to use their own corpora for comparison. This keeps the tool as flexible as possible, serving a wide range of research purposes. With its flexible and easily extendible architecture, CTAP furthermore allows individual parts of the processing pipeline to be exchanged, settings and parameters to be reconfigured, and new features to be integrated if needed. Finally, the tool allows the same measures to be extracted for different languages, making it interesting for cross-lingual analysis.
CTAP and Its Extension to Italian
The Common Text Analysis Platform, CTAP (Chen and Meurers, 2016), is a web-based quantitative linguistic feature extraction tool for measures of linguistic complexity. Contrary to other tools that provide pre-defined analysis set-ups for individual texts, CTAP is fully configurable. It is not limited to any specific task but can be used in any project that requires the extraction of quantitative linguistic features from written texts.
General Architecture of CTAP
CTAP is based on the Unstructured Information Management Architecture (UIMA) framework (Ferrucci et al., 2004), which facilitates the addition of new components to the existing software architecture. The analysis pipeline for the complexity measures is separated into two types of components: (1) Annotators of basic linguistic structures such as letters, syllables, tokens, lemmas, POS categories, sentences, and syntactic structures. These take plain text or the output of other annotators as input and generate annotations.
(2) Analysis engines that generate the complexity features' values. These take as input the annotations produced by the annotators of linguistic structures and generate the values for individual complexity measures.

This division of the analysis architecture makes the integration of new languages as well as new complexity measures straightforward. As the complexity analysis engines use the output of the linguistic annotators, adding linguistic analysers for a new language makes a wide range of complexity measures available without further modifications, except for inserting the language code into the corresponding feature descriptors. Conversely, when a new complexity measure analysis engine is implemented, it can be applied to all the languages that already have their linguistic annotators included in the platform. However, when the output of a linguistic annotator is language-specific (as that of a POS tagger or a syntactic parser, for example), new parameters need to be provided, through XML feature descriptors, to the complexity analysis engines that use their values. Furthermore, certain complexity measures depend on language-specific external resources such as word lists or reference corpora (e.g. lexical sophistication features), which also have to be integrated into CTAP for every new language.

Originally developed for analysing English by Chen and Meurers (2016), the platform was later extended to support multilingual analysis by Zarah Weiss, who also integrated a series of German complexity features into CTAP, which have been successfully used for broad linguistic modelling of German in a variety of contexts (Weiss and Meurers, 2018, 2019a,b). Our contribution consists in integrating the linguistic annotators for Italian into the tool and in adapting the existing feature sets to Italian. We also implemented several new analysis engines, including:
• MTLD and HD-D, two commonly used measures of lexical diversity (Jarvis, 2007) that can be used for all three languages,
• the Flesch-Kincaid grade level (Kincaid et al., 1975) and the Gulpease index (Lucisano and Piemontese, 1988) as readability measures for English and Italian, respectively.
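The division between annotators and analysis engines described above can be pictured with a schematic sketch. CTAP itself is implemented in Java on UIMA; the fragment below only mirrors the data flow, and all names in it are illustrative:

```python
# Schematic of the two-stage CTAP pipeline: annotators add linguistic
# annotations, analysis engines turn annotations into feature values.
def tokenize(text):                      # annotator: plain text -> tokens
    return text.split()

def sentence_split(text):                # annotator: plain text -> sentences
    return [s for s in text.replace("?", ".").split(".") if s.strip()]

def mean_sentence_length(annotations):   # analysis engine: annotations -> value
    sents = annotations["sentences"]
    return sum(len(s.split()) for s in sents) / len(sents)

text = "Questo è un esempio. Una frase breve."
annotations = {"tokens": tokenize(text), "sentences": sentence_split(text)}
print(mean_sentence_length(annotations))  # one complexity feature value: 3.5
```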
NLP Components Integrated into CTAP for the Analysis of the Italian Text
Like the English and German components of CTAP, we use OpenNLP for sentence splitting. For tokenisation and lemmatisation, we use Tint 0.2, a Maven distribution of the all-inclusive NLP suite for Italian in its first version (Aprosio and Moretti, 2016). For part-of-speech (POS) tagging, we use the OpenNLP POS Maxent tagger, which reports 97.56% accuracy. For the syntactic analysis, we use Tint 0.2, which produces Universal Dependency trees and is reported to achieve 84.67 LAS (labelled attachment score) and 87.05 UAS (unlabelled attachment score). As there was no syllable annotation available in the latest version of Tint referenced in the Maven repository, we wrote our own syllable annotator, transcribing and extending the code of the Perl module Lingua::IT::Hyphenate by Aldo Calpini.
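Since syllable counts feed several features (token length in syllables, readability indices), a syllable annotator is essential. The rough sketch below merely counts maximal vowel clusters; real Italian hyphenation, as in the Lingua::IT::Hyphenate port we use, additionally needs hiatus and consonant-cluster rules:

```python
import re

VOWELS = "aeiouàèéìòóù"

def count_syllables_it(word):
    # Crude approximation: one syllable per maximal vowel cluster.
    # Diphthongs are handled, but hiatus sequences are over-merged.
    return max(1, len(re.findall(f"[{VOWELS}]+", word.lower())))

print(count_syllables_it("elettroencefalografia"))
# -> 8 (the true count is 10; full hyphenation rules would split the hiatus)
```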
Complexity Measures for Italian Available in CTAP
In its current state, the Italian component of CTAP contains 253 indices of linguistic complexity, 154 of which are also available for English and German. The implemented measures are distributed among the following groups:
Lexical Features
There are various types of lexical features in CTAP:
• number and percentage of tokens and word types with two or more syllables (4 features)
• mean token length and its standard deviation in letters and syllables (4 features)
• lexical sophistication (74 features)
• lexical diversity (or richness) (9 features)
• lexical variation (9 features)
The lexical sophistication features are calculated separately for all words, lexical words, and function words, and each of them is based on both the SUBTLEX-IT (Crepaldi et al., 2015) and the Google Books 2012 (Lin et al., 2012) reference corpora. Lexical sophistication features include:
• 36 word frequency features: normal, logarithmic, and logarithmic per million words
• 12 informativeness per million words features
• 12 familiarity per million words features
• 6 logarithmic contextual diversity features
In addition, six lexical sophistication features are based on the imageability, concreteness, and age-of-acquisition values provided by Burani et al. (2001) for 626 Italian nouns: each of these three values is calculated both for all lemmas and for unique lemmas of the text. We also implemented a widespread measure of lexical sophistication for Italian, which consists in calculating the proportion of words of a text that are listed in the De Mauro dictionary of basic Italian (De Mauro, 2016).
The lexical diversity (or richness) features include:
• 5 types of type-token ratio (TTR): normal TTR, root TTR, log TTR, corrected TTR, Uber TTR (their closed forms are sketched below)
• 2 types of MTLD: for tokens and lemmas
• 2 types of HD-D: for tokens and lemmas
The lexical variation features calculate the ratio of the number of different word types of a certain morpho-syntactic category (nouns, verbs, adjectives, adverbs, modifiers, or all lexical word types together) to the number of all lexical tokens. Verbs receive special attention and benefit from several different formulae of lexical variation, computed also proportionally to the number of verbs and not only to the number of all lexical tokens.
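As mentioned in the list above, the classical TTR variants follow simple closed forms. The sketch below implements the standard formulas from the literature (with T types and N tokens); it is an illustration, not CTAP's actual code:

```python
import math

def ttr_variants(tokens):
    n, t = len(tokens), len(set(tokens))
    return {
        "TTR": t / n,
        "RootTTR": t / math.sqrt(n),           # Guiraud's index
        "CorrectedTTR": t / math.sqrt(2 * n),  # Carroll's CTTR
        "LogTTR": math.log(t) / math.log(n),   # Herdan's C
        # Uber index; undefined when every token is a distinct type (t == n).
        "Uber": math.log(n) ** 2 / math.log(n / t),
    }

tokens = "il gatto dorme e il cane dorme".split()
print(ttr_variants(tokens))  # 7 tokens, 5 types
```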
4.2.2 Syntactic Features
Syntactic features implemented in CTAP include the mean sentence length and its standard deviation in letters, syllables, and tokens (6 features) and the number of syntactic constituents (40 features). The features regarding the number of syntactic constituents calculate the total number of specific syntactic constituents in a text, for example, the number of dependent clauses or conjunctions. Ten features give numbers relative to the number of sentences: the number of dependent clauses, coordinations, adjectival clause modifiers, adjectival modifiers, adverbial clauses, adverbial modifiers, appositional modifiers, attributives, auxiliaries, and auxiliary passives per sentence. We plan to add more features of this type.
4.2.3 Morpho-Syntactic Features
Morpho-syntactic features implemented in CTAP are POS density features that calculate the ratio of the number of tokens belonging to certain morpho-syntactic categories to the total number of tokens, for example, the ratio of adjectives in a text.
4.2.4 Text Length Features
The basic text statistics implemented are the number of letters, syllables, tokens, word types, lemmas, and sentences in the text (6 features).
4.2.5 Traditional Readability Indices
The Gulpease readability index has been implemented in the Italian component of CTAP as an instance of a traditional readability index. Traditional readability indices aim to give a numerical indication of how difficult it is for an intended target group of readers to understand a given text. Gulpease (Lucisano and Piemontese, 1988) is a readability index similar to, e.g., the Flesch index (Flesch, 1948), but calibrated to model the difficulty of Italian texts for Italian native speakers at different educational levels. Contrary to other indices for Italian, which are mostly adaptations of the Flesch index (e.g. the Flesch-Vacca index (Franchina and Vacca, 1986)), the Gulpease index is calculated on a character basis instead of syllables, to make automatic extraction of the index easier and more reliable.
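Both indices mentioned in this section have published closed forms, which the sketch below implements directly from precomputed counts (the counts themselves would come from the annotators described in Section 4.1):

```python
def gulpease(letters, words, sentences):
    # Gulpease (Lucisano & Piemontese, 1988): character-based, 0-100 scale,
    # higher = easier; calibrated for Italian native readers.
    return 89 + (300 * sentences - 10 * letters) / words

def flesch_kincaid_grade(words, sentences, syllables):
    # Flesch-Kincaid grade level (Kincaid et al., 1975) for English.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

print(gulpease(letters=120, words=30, sentences=3))              # -> 79.0
print(flesch_kincaid_grade(words=30, sentences=3, syllables=45)) # -> ~6.0
```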
Quality Control
In order to ensure the quality of the code, we implemented unit tests comparing freshly obtained values against precalculated values for a sample text. This allowed us to manually verify the performance of CTAP and to guard against code degradation during future modifications. However, as there is to date no gold standard or evaluation methodology for complexity measures, we relied on benchmark evaluations of the underlying NLP tools.
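This regression-testing idea can be illustrated with a minimal, self-contained sketch; the extractor below is a stand-in computing two toy features, not CTAP's pipeline:

```python
import math
import unittest

def extract_features(text):
    # Stand-in extractor computing two simple features; in CTAP the
    # values would come from the full UIMA pipeline.
    tokens = text.split()
    return {"token_count": len(tokens),
            "root_ttr": len(set(tokens)) / math.sqrt(len(tokens))}

class TestComplexityFeatures(unittest.TestCase):
    SAMPLE_TEXT = "una frase di prova una frase"
    # Reference values precalculated once for the fixed sample text;
    # any code change that alters them will make the test fail.
    EXPECTED = {"token_count": 6, "root_ttr": 4 / math.sqrt(6)}

    def test_feature_values_unchanged(self):
        actual = extract_features(self.SAMPLE_TEXT)
        for name, expected in self.EXPECTED.items():
            self.assertAlmostEqual(actual[name], expected, places=6)

if __name__ == "__main__":
    unittest.main()
```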
Comparing Linguistic Complexity Analysis Tools for Italian
In the following, we describe the main differences between the available linguistic complexity measurement tools for Italian: READ-IT, Coease, Tint 2.0, and CTAP. We compare the tools along the following dimensions: first, we present the scope of the measures implemented in the different tools; secondly, we give information about their source code availability and usage; next, we discuss the tools' extendibility and the differences in their units of analysis, as well as the transparency of the intermediate analysis steps.
Scope of the Implemented Measures
Because of their different underlying research aims, pointed out in Section 3, the set of implemented features differs substantially from one tool to the other. Whereas READ-IT and Coease focus strongly on readability and text simplification, CTAP, not being tailored to any specific research goal, is more generic and comprehensive than the other two. Tint 2.0 offers the smallest number of complexity measures (21 in total), followed by READ-IT with 32 and Coease with 46 measures. With 253 complexity measures for Italian, CTAP is the most comprehensive of the tools. Since the features included in Tint 2.0 are a subset of the features included in READ-IT and Coease, we will not discuss Tint 2.0 individually in the remainder of the comparison. Only five complexity measures are present in all three tools: simple textual statistics, the percentage of lemmas belonging to the basic vocabulary, and the Gulpease readability formula. Thirteen measures are offered by two tools out of three (highlighted in bold in Table 2). The vast majority of measures are, however, only present in one tool. Below we give an overview of the biggest differences in the implemented features; Table 2 gives a more detailed overview of the supported features in the three investigated tools.
5.1.1 Basic counts
CTAP offers more fine-grained basic counts than the other two tools. Coease is the only one supporting paragraph counts.
5.1.2 Lexical complexity
Whereas in Coease, and especially in READ-IT, lexical complexity measures largely serve the purpose of determining to what extent a text may be understood by a less prepared reader, CTAP offers a wider range of generic measures for lexical complexity. The main differences between the tools are:
• overall lexical readability index (see also Section 5.2 for details).
5.1.3 Morpho-syntactic complexity
CTAP offers more fine-grained morpho-syntactic complexity indices than READ-IT (Coease providing only one), thus allowing a more in-depth morpho-syntactic analysis of corpora.
5.1.4 Syntactic complexity
In terms of syntactic complexity measurement, READ-IT focuses on an in-depth analysis of subordination, Coease provides numerous indices for cohesion, causality, and syntactic similarity, and CTAP specialises in calculating the number of different types of syntactic constituents and connectives.
5.1.5 Readability indices and overall textual complexity
While READ-IT and Coease both offer various readability indices and aggregated measures for overall textual complexity (e.g. lexical, syntactic, global, and base difficulty of the text), CTAP does not aim to provide such aggregate evaluation scores. Being offered a wide range of very fine-grained complexity measures, the users of CTAP have to draw their own conclusions as to the general complexity of a text. For that reason, only the popular Gulpease readability index for Italian has been implemented in CTAP.
Interpretation of Results
Both READ-IT and Coease provide task-specific interpretation utilities in their graphical user interfaces. READ-IT tells the user whether the feature values obtained for the analysed text are significantly higher or lower than for texts from a general newspaper corpus or a corpus of simplified texts. Coease performs a similar comparison using texts from different educational levels as reference corpora. CTAP purposefully reports only numerical feature values, leaving the choice of a reference corpus for comparison to the users, while offering them the possibility to compare not only single texts but also values obtained for whole corpora of their choice. In addition to the feature values for each complexity measure, READ-IT presents results in the form of aggregated scores judging the lexical, syntactic, global, and base difficulty. It uses readability models trained with different feature sets to distinguish between texts from the reference corpora. With the Gulpease readability index, it also provides another global measure of text readability that was obtained using reference texts. CTAP does not provide a global readability estimate based on reference corpora or a similar interpretation of results with regard to external reference data. The tool exclusively calculates individual complexity measures and leaves it to the user to put these in an interpretative context. This is motivated by the fact that the interpretation of complexity measures can be heavily influenced by task effects, complexity being a multi-faceted construct whose interpretation is highly context-dependent. In particular, language production tasks have been shown to heavily influence complexity (e.g. Vajjala, 2018; Alexopoulou et al., 2017; Yoon, 2017), making single aggregate scores of complexity notoriously unreliable in general-purpose contexts. However, we decided to include the Gulpease readability index as an additional measure in the set of Italian complexity features, which may be used to gauge the overall complexity of texts if users have reason to assume that this measure is a good approximation of their global readability.
Source Code Availability and Usage
Among the existing linguistic complexity measurement tools for Italian, only Tint 2.0 and CTAP are open source. READ-IT and Coease are proprietary tools; however, they provide browser-based online demo versions with a graphical user interface. Tint 2.0, on the other hand, provides an open-source NLP pipeline for Italian, usable via the command line or as a Java library, that also offers a restricted set of complexity measures borrowed from READ-IT and Coease. The Italian component of CTAP is available open source at https://github.com/commul/ctap under the BSD license. Additionally, we maintain an online version of CTAP for free public use.
Extendibility
With regard to the extendibility of the tools, only CTAP is fully extendible. The proprietary tools READ-IT and Coease are not designed to be extended in terms of features or other languages. Tint 2.0 is open source, but it is not specifically designed to be extended by external collaborators. Furthermore, its extension to other languages is not foreseen, given the tool's specialisation on Italian. The architecture of CTAP, on the other hand, allows the tool to be extended to further languages and makes it possible to easily integrate new features for one or several of the supported languages.
Unit of Analysis
Apart from the complexity measures themselves, the tools differ in their flexibility regarding the unit of analysis. The graphical user interfaces of the available online demo versions of Coease and READ-IT only allow the analysis of one text at a time. While Tint 2.0 can be programmed to process multiple strings, CTAP was intentionally designed to analyse (sub)corpora consisting of multiple texts. Thus, its graphical interface allows comparative result spreadsheets to be downloaded and displays diagrams visualising the complexity measurements' values for different texts or corpora.
Transparency of Results
While the user interface of READ-IT visualises intermediate results such as tokenisation, sentence splitting, POS tagging, and syntactic parsing, the other tools follow black-box approaches, often only returning results as a single number. However, the possibility to check the correctness of intermediate steps and to understand the source of feature values would be crucial for researchers' trust in such feature extraction tools as well as for the interpretation of results.
Conclusion and Future Work
Linguistic complexity research being a very actively developing field, existing measures are constantly re-evaluated and new measures are proposed. It is important to be able to make those efforts available to the scientific community in a unified way. This not only helps to address current challenges in complexity research, such as the overly reductionist focus on syntactic and lexical complexity measures criticized by Housen et al. (2019), but also supports researchers who do not have the technical background to implement a comprehensive set of complexity measures themselves. Furthermore, it increases the comparability and transparency of research findings. In this paper, we have presented the Italian component of CTAP, which supports the broad linguistic analysis of Italian in terms of 253 complexity measures, with a subset of 154 measures being available for Italian, English, and German. We have described its technical characteristics and functionalities and compared it to the other publicly available linguistic complexity measurement tools for Italian. CTAP allows for easy integration of new linguistic complexity measures and configuration of the already existing ones. The UIMA framework allows an unlimited number of complexity features to be added to the tool if they are needed by the researcher. Collaboration is facilitated by the tool being open source and available on GitHub. With this article we hope to spark interest leading to collaboration on and contributions to the development of new complexity measures for the languages already implemented in CTAP and, of course, for new languages. In the future, we would like to add new complexity measures for Italian and modify the graphical user interface in order to allow for the visualisation of intermediate analysis results. Additionally, CTAP is currently being extended to support more languages such as Dutch, Spanish, and French, in order to widen the scope of cross-lingual complexity research.
Acknowledgements
We would like to express our profound gratitude to Xiaobin Chen and Detmar Meurers of the University of Tübingen for creating the CTAP tool and for supporting us throughout the process of its adaptation to Italian. Our special thanks also go to our colleague Lorenzo Zanasi, who kindly helped us in compiling lists of Italian connectives.
Table 2: Complexity measures implemented in CTAP, Coease and READ-IT. '+' indicates that a certain measure is implemented in this particular tool and '-' indicates that it is not implemented. | 2020-05-29T13:12:12.165Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "f060719ab7c6f5700b74077d43120d32e742f7c7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "6f07af39eda80661c228378d037ebe3e1075035b",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
261337554 | pes2o/s2orc | v3-fos-license | Systemic and mucosal adaptive immunity to SARS-CoV-2 during the Omicron wave in patients with chronic lymphocytic leukemia
Not available.
Systemic and mucosal adaptive immunity to SARS-CoV-2 during the Omicron wave in patients with chronic lymphocytic leukemia
The COVID-19 pandemic has significantly impacted patients with chronic lymphocytic leukemia (CLL),1 with many failing to seroconvert2 or mediating variable T-cell immunity3 after mRNA vaccination. The emergence of the B1.1.529 (Omicron) variant of SARS-CoV-2 has altered the development of the COVID-19 pandemic due to its less severe clinical course and associated reduced risk of hospitalization.4 However, the impact of Omicron on immunosuppressed subgroups, such as patients who have received CD20 monoclonal antibodies (mAb),5 remains uncertain. Moreover, the observed decrease in severe disease cases within the general population may be influenced by the high number of infected individuals.6 In addition to the systemic immunoglobulin (Ig) G response, SARS-CoV-2 infection induces the production of specific secretory IgA in mucosal secretions from local plasma cells, and of serum IgA from plasma cells homing to the bone marrow.7 Whether this occurs after Omicron infection in patients with hematological or solid cancer remains elusive. We report here on the serological, cellular, and mucosal immune response in a cohort of patients with CLL diagnosed with symptomatic SARS-CoV-2 infection during the Omicron BA.1 and BA.2 wave. Twenty-six patients with CLL who had symptoms of COVID-19 and tested positive for SARS-CoV-2 between January 9, 2022 and April 29, 2022 were included. Ninety-nine percent of all sequenced SARS-CoV-2 samples in Sweden taken on January 17, 2022 or later were Omicron variants.8 Patients diagnosed earlier than January 17, 2022 were only included if viral sequencing confirmed Omicron. The national ethics authority approved the study. Written informed consent was obtained from each patient before samples were obtained. The clinical characteristics are summarized in Table 1. Three patients had had a previous polymerase chain reaction (PCR)-verified SARS-CoV-2 infection at a median time of 20 months earlier (range, 13-21). Their immunological outcomes were similar to those of patients who had Omicron as their first-time infection (data not shown). Patients had either early-stage, untreated CLL (n=11) or ongoing CLL treatment (n=12), either Bruton tyrosine kinase inhibitor (BTKi) therapy (n=11) or venetoclax + CD20 mAb (n=1). Four patients paused their BTKi treatment for a few days during the infection, and their immunological outcomes were similar to those who continued (data not shown). Five additional patients had completed various prior CLL therapies (including CD20 mAb) at a median time of 26 months before infection (range, 8-74). Five patients had ongoing supplemental immunoglobulin treatment (IVIG). Total antibody (Ab) levels against the SARS-CoV-2 Spike receptor-binding domain (RBD) protein were analyzed in 14 of 26 patients at the time point when they had just been diagnosed with active, symptomatic COVID-19, using the Elecsys® anti-SARS-CoV-2 S immunoassay (Roche Diagnostics) (a positive test was defined as >0.8 U/mL; patients with IVIG treatment were not included). Fifty percent (7/14) were seronegative; of these, four had received a third vaccine dose 2-4 months before the infection and three had received 1-2 doses 9-11 months before. Two to three weeks after clinical recovery, a positive Elecsys® total anti-RBD test was noted in 81% of analyzed patients (13/16; 1 missing sample, 9 samples excluded from analysis due to treatment with IVIG or the anti-SARS-CoV-2 mAb sotrovimab). We next used the V-PLEX Panel 25 assay (Meso Scale Discovery9) to differentiate IgG and IgA reactivities against ten different
SARS-CoV-2 Spike variants in serum (n=24; 2 missing samples) and in saliva (n=25; 1 missing sample) from the convalescence follow-up. The serum was analyzed according to the manufacturer's instructions, and saliva collection has been described elsewhere.10 Cutoff levels for positive saliva reactivity were defined for each antigen using pre-pandemic samples from healthy donors. Serum IgG was not analyzed in samples from patients who had received IVIG or sotrovimab treatment (n=9). Results against all SARS-CoV-2 variants are shown in the Online Supplementary Figure S1. Positive IgG levels against the Wuhan-Hu-1 (wild-type) SARS-CoV-2 variant (defined by the manufacturer as >1,960 AU/mL) were noted in all but one convalescent serum sample (Figure 1A). Generally, IgG reactivity against the three main variants (wild-type, Omicron BA.1, and Omicron BA.2) varied substantially between individuals, and no significant differences were noted between the CLL treatment subgroups. Congruent with the serum findings, IgG reactivity against any SARS-CoV-2 Spike variant was observed in 88% of convalescent saliva samples (22/25; Online Supplementary Figure S1C), without differences in frequency or magnitude between the CLL treatment subgroups when comparing reactivity to the three main variants (Figure 1B). In contrast to the IgG reactivity, the serum IgA (i.e., mucosa-derived) responses to BA.2 Spike were significantly lower in BTKi/BCL-2i-treated patients than in early-stage untreated patients (P=0.012), with a similar trend for responses against the wild-type variant (P=0.051) (Figure 1C).
Furthermore, salivary Spike-specific IgA against any variant was detected in only 40% (10/25; Online Supplementary Figure S1D) of patients. In line with the serum findings, an IgA response was detected more rarely in the saliva of patients with ongoing BTKi/BCL-2i therapy compared to early-stage untreated patients (2/12 vs. 6/9; P=0.032). The magnitude of the salivary IgA response was also significantly lower in BTKi/BCL-2i-treated patients than in the early-stage untreated patients when comparing the three main variants separately (Figure 1D; wild-type P=0.010; BA.1 P=0.016; BA.2 P=0.038). The ability of the convalescent sera to block Spike-protein binding to ACE2, a measure of viral neutralization capacity,11 was measured in 15 samples (9 samples excluded due to sotrovimab or IVIG treatment) using the V-PLEX SARS-CoV-2 Panel 25. Fifty-three percent (8/15) were able to neutralize at least one Spike variant to 50% inhibition or higher (Online Supplementary Figure S2). Conversely, only 16% of saliva samples (4/25; 2 early-stage untreated, 1 previously treated, and 1 with ongoing BTKi/BCL-2i therapy) were able to neutralize at least one Spike variant (Online Supplementary Figure S2). The neutralization magnitude did not differ significantly between the patient subgroups (data not shown). The correlation between IgG and IgA levels and the corresponding neutralization capacity was stronger in serum than in saliva, and more pronounced for the wild-type variant compared to BA.1 (Online Supplementary Figure S3). The serum and salivary neutralization capacity against Omicron BA.2 was generally low, and correlation with the corresponding Ab levels was hence not assessed. Next, we measured SARS-CoV-2-specific T-cell responses to wild-type and Omicron Spike-specific peptides using an AIM assay (Figure 2A), as previously described.12 PBMC were collected after clinical recovery from 22 patients (8 with untreated early-stage CLL, 4 previously treated, 9 with ongoing BTKi, and 1 with venetoclax + CD20 mAb treatment).
Spike-specific T-cell frequencies were similar in all CLL treatment subgroups and comparable to those of the healthy individuals (Figures 2B, C). Taken together, many patients mounted high post-infection IgG levels and T-cell responses. Notably, the T-cell responses were similar to those of healthy donors, also in patients with B-cell-inhibiting therapy or low or absent convalescent Ab levels, which is most likely of clinical importance.12 However, we found an impaired IgA reactivity against all three virus variants in the saliva of patients with ongoing BTKi/BCL-2i therapy, with a similar trend in serum, suggesting a previously undescribed negative effect of precision B-cell-inhibiting treatment on mucosal immunity. Whether this is related to impaired mucosal memory B cells13 remains to be shown. Healthy individuals have significantly better protection against SARS-CoV-2 infection with higher mucosal IgA levels,14 and further studies are required on how the decreased IgA levels and the generally low neutralization capacity of saliva Ab affect the risk of re-infection, particularly in BTKi-treated individuals. Notably, a significant reduction in the risk of grade 3-4 bacterial infections, mainly pneumonia, has been reported when the administration of BTKi is temporarily ceased in patients with CLL.15 This observation suggests a more widespread, BTKi-associated impairment of mucosal immunity, which also extends to other pathogens.
The major limitations of our study are the small number of included patients and the heterogeneity of previous CLL treatment, number of vaccine doses, and antiviral treatment, including short-term use of corticosteroids, all of which might have influenced the immunological response. Also, the use of immunoglobulin treatment limited the number of IgG analyses. We provide a comprehensive analysis of both systemic and mucosal immunity to ten SARS-CoV-2 variants after Omicron infection in patients with CLL. Our data indicate that patients on BTKi/BCL-2i therapy exhibit compromised mucosal immunity, potentially increasing the susceptibility of this already vulnerable population to recurrent episodes of SARS-CoV-2 infection.
sequencing. HMIS, LB, DW, KH, MSC, AÖ, HGL, and MB wrote the original draft of the manuscript. All authors reviewed and edited revisions of the manuscript and had final responsibility for the decision to submit for publication.
Figure 1. Spike-specific antibodies in serum and saliva after clinical recovery from Omicron infection. Anti-Spike immunoglobulin (Ig) G in convalescent sera (A) and saliva (B) specific for SARS-CoV-2 wild-type, Omicron BA.1, and Omicron BA.2. Patients who had received IVIG or sotrovimab are excluded from the serum analyses and highlighted (red) in the saliva panel (B). The corresponding anti-Spike IgA levels are shown in (C) (serum) and (D) (saliva). Cutoff levels (dotted lines) for positive responses against wild-type in serum were determined by the manufacturer (1,960 AU/mL), and against all antigens in saliva using pre-pandemic saliva samples (defined as the mean plus 6x the standard deviation of the intensity signals of 27 negative pre-pandemic saliva samples); they were as follows: anti-wild-type IgG: 4.01 AU/mL; anti-BA.1 IgG: 4.98 AU/mL; anti-BA.2 IgG: 7.33 AU/mL; anti-wild-type IgA: 226.72 AU/mL; anti-BA.1 IgA: 81.77 AU/mL; anti-BA.2 IgA: 203.18 AU/mL. Median and interquartile range are indicated in the panels. Statistical significance was assessed with the non-parametric Kruskal-Wallis test with Dunn's multiple comparison correction. *P<0.05, **P<0.01; NS: not statistically significant (P>0.05).
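The saliva cutoff rule stated in this legend (mean plus six standard deviations of the negative pre-pandemic samples) can be expressed as a short sketch; the control values below are invented purely for illustration:

```python
import statistics

def saliva_cutoff(prepandemic_values, k=6):
    # Cutoff = mean + k * standard deviation of negative control samples.
    mu = statistics.mean(prepandemic_values)
    sd = statistics.stdev(prepandemic_values)
    return mu + k * sd

negatives = [0.8, 1.1, 0.6, 0.9, 1.0, 0.7]  # invented AU/mL values
print(f"positivity cutoff: {saliva_cutoff(negatives):.2f} AU/mL")
```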
Figure 2. SARS-CoV-2-reactive T cells in chronic lymphocytic leukemia patients and healthy controls after clinical recovery from Omicron infection. (A) Representative flow cytometry plot of antigen-specific CD4+ (CD69+CD154+) and CD8+ (CD69+CD137+) T cells after peptide stimulation. Frequencies of Spike-specific CD4+ (B) and CD8+ (C) T cells against SARS-CoV-2 wild-type and Omicron BA.1 peptides. A positive response was defined with a cutoff level of 0.05%. Median and interquartile range are indicated in the panels. Statistical significance was assessed with the non-parametric Kruskal-Wallis test with Dunn's multiple comparison correction. NS: not statistically significant (P>0.05). DMSO: dimethyl sulfoxide.
Table 1. Clinical characteristics of patients with chronic lymphocytic leukemia (N=26) at the time of SARS-CoV-2 Omicron infection. | 2023-08-31T06:18:31.999Z | 2023-08-31T00:00:00.000 | {
"year": 2023,
"sha1": "518b4ac7794bfc059f7714c158be6bc7399e1b4d",
"oa_license": "CCBYNC",
"oa_url": "https://haematologica.org/article/download/haematol.2023.282894/76029",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d3bd7d1121637c3675c0904135ad4b5d018681ee",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3289884 | pes2o/s2orc | v3-fos-license | Single microtubules and small networks become significantly stiffer on short time-scales upon mechanical stimulation
The transfer of mechanical signals through cells is a complex phenomenon. To uncover a new mechanotransduction pathway, we study the frequency-dependent transport of mechanical stimuli by single microtubules and small networks in a bottom-up approach using optically trapped beads as anchor points. We interconnected microtubules into linear and triangular geometries to perform micro-rheology by defined oscillations of the beads relative to each other. We found a substantial stiffening of single filaments above a characteristic transition frequency of 1-30 Hz, depending on the filament's molecular composition. Below this frequency, filament elasticity depends only on its contour and persistence length. Interestingly, this elastic behavior is transferable to small networks, where we found the surprising effect that linear two-filament connections act as transistor-like, angle-dependent momentum filters, whereas triangular networks act as stabilizing elements. These observations imply that cells can tune mechanical signals by temporal and spatial filtering more strongly and flexibly than expected.
Introduction
Today, we know that cells across all domains are mechanosensitive1, and that mechanosensitivity is the basis for sensing quite different stimulus qualities, including osmotic challenges, gravity, movements, or even sound. In addition, mechanosensitivity is used to organize and integrate cells and organs into functional units, e.g., in the course of movements in metazoan organisms or during plant development2. Perturbations of mechanotransduction have been implicated in various severe diseases like cancer3,4. Remodeling of the cell as a response or adaptation to an external physical stimulus is steered by gene expression in the nucleus5. Therefore, the information of the stimulus has to be transported across the cell from the periphery to the center. Common models of cellular mechanotransduction assume the conversion of a physical stimulus to a chemical signal by membrane proteins such as integrins3, and its subsequent transport to the nucleus either passively by diffusion or actively by molecular motors, i.e., rather slow processes. However, the direct propagation of a mechanical stimulus by stress waves through stiff cytoskeletal elements connecting the membrane and the nucleus6 would enable a much faster transport pathway on the microsecond timescale and thus allow almost instantaneous integration of responses across the cell7. A model for such a pathway has been proposed by Ingber8,9 on the basis of a tensegrity model of flexible actin filaments (able to transmit traction forces) connected to the relatively stiff microtubules (able to transmit compression forces). In mammalian cells, microtubules are typically aligned radially inside a cell, spanning from the centrosome, located close to the nucleus, to the cell membrane10, a setup that would allow for efficient mechanotransduction between cell membrane and nucleus11. In fact, mechanical stimulation has recently been shown to induce a perinuclear actin ring, brought about by the activity of actin-microtubule cross-linking formins12. MTs are well known as components of mechanosensing in flies13 as well as in vertebrates14, and microtubules have also been found to participate in gravity sensing and mechanical integration in plants (reviewed in ref. 15).
Remarkably, mutants of Caenorhabditis affected in beta-tubulin turned out to be insensitive to mechanical stimulation16. Efficiency and specificity of the MT sensory functions, however, depend on their frequency-dependent viscoelastic properties, which are characteristic of biological systems.
To address these aspects of microtubule-dependent signaling, we present an approach for the targeted construction of cytoskeletal meshes with defined geometries using optically trapped beads as anchor points. Existing approaches have only demonstrated the construction of small networks without biologically relevant measurements17, or rely on the stochastic attachment or growth of filaments to optically trapped beads or micro-pillars, which is less flexible and barely allows control of the number of attached filaments18,19,20. We use established micro-rheology techniques21 to probe the viscoelastic response of these constructs.
Results
To determine the time-dependent viscoelastic properties of single microtubules (MTs) and small networks of MTs, movable Neutravidin-coated beads serving as anchor points were attached to a biotinylated microtubule at defined positions by time-shared optical tweezers (see Methods).
These anchor points were then mutually displaced in an oscillatory fashion with defined frequencies and amplitudes along the x-direction, as illustrated in Fig. 1. The resulting frequency-dependent stretching and buckling behavior of these constructs is measured, which allows determining both the elastic and the viscous properties of the MT constructs in different geometrical arrangements.
Stiffening of single filaments at high oscillation frequencies
Upon force generation, the beads are displaced from their equilibrium position with a straight microtubule, as depicted in Fig. 1. The displacements $x_{B1}(t) - x_{L1}(t)$ and $x_{B2}(t) - x_{L2}(t)$ of bead 1 (actor) and bead 2 (sensor) relative to the laser trap positions $x_{L1}$ and $x_{L2}$ are shown exemplarily in Fig. 2a,b for two different actor displacement frequencies, $f_a = 0.1$ Hz and $f_a = 100$ Hz, at a displacement amplitude $A_a = 500$ nm. Due to high tensile and small buckling forces, the sensor bead is pulled out of the trap center by up to $x_{B2} \approx 80$ nm and pushed only slightly, by less than $x_{B2} \approx 10$ nm, during each half period. This situation changes significantly at high frequencies, $f_a = 100$ Hz. While the maximum displacements $x_{B2}$ during microtubule stretching were approximately the same at $f_a = 100$ Hz and $f_a = 0.1$ Hz, the displacement during buckling increased by an order of magnitude at high frequencies, i.e., $x_{B2}(f_a = 100\,\mathrm{Hz}) \approx 10 \cdot x_{B2}(f_a = 0.1\,\mathrm{Hz})$. Therefore, only the compression and buckling of single filaments will be analyzed in this study. The complete frequency dependence of the filament-bead construct is expressed by the average maximum distance change between both beads, $\Delta L_x = A_a - x_{max,1} - x_{max,2}$, as shown in Fig. 2c.
Excitation and relaxation of higher MT deformation modes
As introduced above, the oscillatory driving force counteracts the viscous and the elastic forces of both the MT and the two beads. The behavior of the semi-flexible MT of length L is described by the hydrodynamic beam equation, which predicts that induced MT deformations can be described by a superposition of sine waves with wave numbers $q_n = n\pi/L$ and a characteristic relaxation time proportional to $1/q_n^4 \sim L^4$ (see Methods and Supplementary Results). Hence, higher deformation modes n > 1 can only be excited at higher driving frequencies $\omega = 2\pi f_a$, leading to the effect of MT stiffening. The stiffening can be described by the frequency-dependent complex shear modulus $G(\omega) = G'(\omega) + i \cdot G''(\omega)$ (see Methods), where a representation of all forces in frequency space allows extracting the elastic component $G'(\omega)$ and the viscous component $G''(\omega)$.
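To make the mode hierarchy concrete, the ground-mode relaxation frequency can be estimated numerically. The sketch below assumes a bending rigidity $\kappa = l_p k_B T$ and a simplified slender-rod transverse drag per unit length; the prefactors are order-of-magnitude only and do not reproduce the full hydrodynamic treatment of the Methods:

```python
import math

kB_T = 4.1e-21          # thermal energy at room temperature [J]
eta  = 1.0e-3           # viscosity of water [Pa s]
d    = 25e-9            # microtubule diameter [m]

def mode_frequency(L, l_p, n=1):
    """Order-of-magnitude relaxation frequency f_n of bending mode n
    for a filament of contour length L and persistence length l_p."""
    kappa = l_p * kB_T                           # bending rigidity [N m^2]
    q_n = n * math.pi / L                        # wave number of mode n
    zeta = 4 * math.pi * eta / math.log(L / d)   # drag per length [N s/m^2]
    return kappa * q_n**4 / zeta / (2 * math.pi)  # [Hz]

# Ground-mode estimates for the two filament lengths used in the paper:
print(mode_frequency(L=5e-6,  l_p=0.33e-3))   # ~14 Hz (short, Taxol)
print(mode_frequency(L=15e-6, l_p=12e-3))     # ~8 Hz  (long, GMPCPP)
# The strong 1/L^4 dependence (partly offset by the larger l_p of long
# filaments) reproduces the trend of lower transition frequencies for
# longer microtubules.
```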
The elastic modulus $G'(\omega)$ is shown in Fig. 3. We checked whether the measured frequency response and apparent stiffening of single filaments indeed result from the excitation of higher deformation modes as described by equation (1). Therefore, we analyzed the dynamics of the trapped anchor points with two-particle active micro-rheology techniques (see refs. 22,23 and Supplementary Methods for details), as described in the following.
The frequency-dependent elastic response of the single microtubule was analyzed in terms of the persistence length $l_p(\omega)$, which increases with frequency because of the successive excitation of higher modes. It can be seen in Fig. 3d that the persistence length depends sensitively on the contour length L of the MT. We find $l_p(0) = (0.33 \pm 0.05)$ mm for L = 5 µm and all stabilizations. For L = 15 µm we find $l_p(0) = (4.06 \pm 0.26)$ mm when stabilized with 10 µM Taxol, $l_p(0) = (5.80 \pm 0.39)$ mm for 100 µM Taxol, and $l_p(0) = (12.10 \pm 0.66)$ mm for GMPCPP. The estimates based on equation (2) agree well with the published dependency of $l_p$ on the filament contour length38 and stabilization39, as further elucidated in the discussion.
Transition frequency. Beyond a characteristic frequency, a visible increase of $G'(\omega)$ is manifested due to the excitation of higher deformation modes. We define this transition by a 50% increase of $G'(\omega)$ relative to its plateau value. We find that the transition frequency $\omega_t = 2\pi f_t$ scales by a factor of 3 with the ground mode.
We obtain $f_t = (5.3 \pm 1.2)$ Hz and $f_t = (4.2 \pm 1.7)$ Hz for short filaments (L = 5 µm) stabilized with 10 µM or 100 µM Taxol, respectively. For long filaments (L = 15 µm), we find transition frequencies below 1 Hz. For a bead oscillation lateral to the axes of long (L = 15 µm) GMPCPP-stabilized filaments, we again observe a plateau value for frequencies $\omega < \omega_t$ and a power-law rise for $\omega > \omega_t$ with $p = 1.76 \pm 0.02$, i.e., a 40% larger stiffening exponent than the predicted value p = 1.25 for a longitudinal oscillation. In contrast, the plateau $G'_\perp(0) = 0.54$ mPa is approximately one order of magnitude smaller than $G'_\parallel(0)$ in the axial direction. The transition frequency $f_t = 3$ Hz obtained from equation (1) is approximately the same as in the axial direction.
Momentum transport along a linear chain of connected MTs
An important question is whether the findings for single filaments can be used to predict the momentum transport through small networks of filaments, in analogy to Kirchhoff's circuit laws for the connection of currents in network nodes. However, for connected microtubules, i.e., for different networks, the compression of one filament usually results in the stretching of another filament and vice versa, such that a separation of compression and stretching is no longer possible. Therefore, the complete oscillation period of the actor and sensor beads will be analyzed in the following.
In a first step, we constructed a linear network consisting of three optically trapped beads and two microtubule filaments, as shown in Fig. 4A. This construct was probed such that trap 1 was oscillated sinusoidally at varying frequency and amplitude, while traps 2 and 3 remained stationary. In this way, we investigated the momentum transfer along the first microtubule, while attached to a second microtubule, through the common anchor bead. The elastic response $G'^{(1,2)}$ for the two-step connection is larger than for the single step (Fig. 4). Lateral chain oscillation: As shown in Fig. 4b, beyond the transition frequency the frequency-dependent elasticity again rises according to a power law.
Momentum transport in an equilateral triangle
We used GMPCPP filaments to construct equilateral triangles of 15 µm side length, as depicted in Fig. 5a. Trap 1 is again oscillated in x or y, resulting in a trap movement radial or tangential to the connection between bead 1 and the center of the triangle. An overlay of brightfield and fluorescence images of one radial oscillation period at $f_a = 0.1$ Hz ($T = 1/f_a = 10$ s) and $A_a = 600$ nm along x is shown in Fig. 5b. In contrast to single filaments and the linear chain, here two filaments in total are always buckled or tense, while at the same time the third one behaves in the opposite manner, i.e., is tense or buckled.
Triangles are stiffer than single filaments and have a similar high frequency response
Due to the symmetric configuration of the equilateral triangle, the elastic modulus for both connections 1→2 and 1→3 should be identical, except for different oscillation directions.
This is indeed the case, as shown in Fig. 6a,b for an exemplary construct, where the radial and tangential elastic responses $G'^{(i,j)}(\omega)$ are displayed. According to equation (3), the larger static elasticities should result in an increase of the transition frequency $\omega_t$ by a factor of 25, i.e., $f_t = (810 \pm 308)$ Hz and $f_t = (75 \pm 13)$ Hz for the two oscillation directions. 810 Hz is much larger than the maximum measured frequency, so that we cannot observe a power-law rise for an oscillation along x. However, the extrapolated intersection of the single-filament response (fit with a free exponent according to equation (1)) with the plateau of the triangle can be estimated to be $f_t \approx 800$ Hz, which is in good agreement with the theoretical estimate of equation (3). For the tangential oscillation direction (y) displayed in Fig. 6b, the network stiffens already at a transition frequency $f_t \approx (100 \pm 10)$ Hz.
Discussion
Microtubule stiffness depends on the contour length. We have analyzed the elastic behavior of single and interconnected MTs by means of the elastic modulus $G'(\omega)$, which can be described by a low-frequency plateau $G'(0) \sim l_p(0)$ and a rise at high frequencies above a characteristic transition frequency $\omega_t$, defined by a 50% increase of $G'(\omega)$.
Varying the molecular composition of the filaments by stabilization agents had no visible effect on the static elasticity of short MTs (5 µm length). Interestingly, this was different for long MTs (15 µm length). Considering the length dependence of the plateau, which is approximately quadratic and results in a 9-fold higher plateau for 3-fold shorter MTs, we find a reasonable match with our measurements shown in Fig. 3. From the two MT lengths, we also find that our results for $l_p(0, L)$ agree well with those reported previously34,38.
Frequency-dependent persistence length and stiffness. The novelty of our observations is the increase of the persistence length, or correspondingly the elastic modulus $G'(\omega)$, of a single microtubule with the displacement frequency ω (Fig. 3). In the Methods section, we show that this is caused by the excitation of higher deformation modes, which means that filaments become stiffer on shorter timescales, such that filament buckling is suppressed. In other words, molecular relaxation processes resulting from internal stress along the MT cannot follow on too short timescales. The timescale of molecular relaxation is approximated by the transition frequency $\omega_t \approx 3 \cdot \omega_{n=1}$, which we indicated in all plots of $G'(\omega)$. Beyond this frequency, the second deformation mode (n = 2) renders the filament about 1-4 times stiffer; beyond $\omega = 20 \cdot \omega_{n=1}$, the third deformation mode (n = 3) stiffens the filament 4-10 times relative to ω = 0, as explained in the Supplementary Results (Fig. S4). Our measurements confirm the general, theoretically predicted trend of a smaller transition frequency for longer MTs. For short MTs we measured a stiffness increase according to $G'(\omega) \propto \omega^{3/4}$ at high frequencies (50 Hz < ω < 100 Hz), whereas for longer MTs the rise sets in at correspondingly lower frequencies.
Angular momentum filtering in a linear MT chain
The MT triangle -a uni-directional stable network. Displacement of the actor bead in either radial x-or tangential y-direction as illustrated in Fig. 5 results in a very direct and efficient transport of momentum in direction towards the one or the other sensor bead.
Remarkably, the measured elasticity behavior described by the modulus G′(ω) is the same as in the single-filament case. It consists of a static elasticity G′(0) and a strong rise of G′(ω) when higher deformation modes are excited beyond the transition frequency ω_t. This strong rise is clearly visible at f_t = 100 Hz for a tangential oscillation, but could not be resolved for a radial oscillation. This is probably due to a much faster rise (larger exponent) of G′ in the direction lateral to the filament axis, a phenomenon we observed for single filaments and the linear MT chain as well. However, based on our observations for single filaments, we could estimate the transition frequency for a radial oscillation of the triangle to be f_t ≈ 800 Hz.
In the static case, the triangle is about 25 times stiffer than a single filament. This can be explained by the fact that every radial or tangential displacement of the actor bead results in a compression of one MT and a stretching of another MT at the same time. Since MTs are hardly stretchable 32, this results in static elasticities of G′(0) ≈ 20–50 mPa. Comparing the estimates for the transition frequency ω_t obtained from G′(0) and equation (3), these extrapolated values come close to the frequency where G′(ω) ≈ 1.5·G′(0). Again, we interpret the increased transition frequency as a result of intermolecular relaxations within or between tubulin heterodimers, which cannot follow on timescales below 2π/ω_t < 10 ms. A stiffening beyond a transition frequency of f_t ≈ 200 Hz could also be observed in cross-linked actin networks 50.
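A back-of-the-envelope check of this scaling (Python); the assumption that ω_t grows linearly with the static elasticity is our reading of the factor-25 argument and equation (3), and the single-filament frequencies below are placeholders chosen only so that the scaling reproduces the numbers quoted earlier:

```python
# Static stiffening of the triangle relative to a single filament (measured: ~25x).
stiffening = 25.0

# Placeholder single-filament transition frequencies for the two oscillation
# directions (radial x, tangential y); assumed values, not measurements.
f_t_single = {"x": 32.4, "y": 3.0}  # Hz

for direction, f_t in f_t_single.items():
    # Linear scaling of the transition frequency with the static elasticity.
    print(f"{direction}: predicted triangle f_t ~ {stiffening * f_t:.0f} Hz")
```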
Whereas the optically trapped anchor points could rotate and act as hinges in the previous configurations, the anchor points of the triangle can hardly rotate and therefore rather resemble a movable support. This triangular situation is relevant for the radial MT arrays that form around the nuclei of many cells by a mechanism in which microtubule-nucleation factors are directionally transported by dynein motors 51. In addition, the forces conveyed to the nucleus by this network would act, via links of the cytoskeleton to the nuclear lamina, on the structure and dynamics of the chromatin 52, providing a mechanism by which mechanical signals can modulate gene activity at the network's center.
Summary and conclusions
Motivated by the capability of individual microtubules and inter-connected microtubule networks to transduce a mechanical stimulus over a long distance within short times, we clearly identified substantial differences in response for different network topologies and at different stimulation frequencies ω 35. This has a couple of interesting implications for biology: the rather low stiffness at frequencies below the characteristic transition frequency, ω < ω_t, of single filaments or the linear network is expected to dampen the transmission of mechanical signals, while the rise at ω > ω_t would allow for an enhanced transmission of signals that typically show a reduced amplitude in noise-driven systems such as living cells.
Interestingly, this transition frequency is in a physiologically relevant range (1–10 Hz). For instance, the mammalian heartbeat ranges from 1 Hz in humans up to 18 Hz in mice 53, and muscles undergo an innate oscillation of around 20 Hz 54.
A second aspect of the strong influence of network topology is the comparatively high stiffness of triangular networks at low frequencies. This provides a stiff, load-bearing scaffold, which could be used to reinforce the cell against external pressure in densely packed tissues, or enable the contraction of large-scale MT networks 55. The specific mechanical properties of triangular networks are relevant for nuclear positioning, since in many organisms the nucleus is tethered and positioned by radial arrays that are stabilized by cross-connections and integrated into cell polarity 56. A third implication of our findings is linked with the "mechanic transistor" function of microtubule networks, where small mechanical forces can control a large amount of momentum transport.
Microtubule crosslinkers have recently been reported to be able to generate entropic forces in the pN range 57, which could lead to passive changes of network elasticity over time by pre-stretching individual filaments of a network. This would provide a mechanism by which cells can control the directionality of mechanical signaling, which is relevant for the mechanical integration of cells into organs, or of organs into organisms 2. These implications show that our bottom-up approach of analyzing the transmission of mechanical forces in networks of increasing complexity is relevant to understanding how mechanical signals can shape biology.
Theoretical description of viscoelastic behavior
This section introduces the relevant forces acting on a single filament and its resulting deformations as well as the relative bead displacements during an oscillation longitudinal to the MT. Through a representation of all forces in frequency space, the elastic and viscous components of the filament can be extracted using the frequency dependent complex shear modulus G(ω).
To separate the viscoelastic contributions of the filament and the trapped beads, we analyzed the data by means of active two-particle micro-rheology in frequency space. Here, F_opt = κ_opt·x_B is the elastic optical force, and equation (4) can then be given explicitly. Hence, equation (5) represents a set of m coupled differential equations, where m is the number of beads. These are solved pairwise using relative and collective coordinates. Since the contribution of the filament acts in opposite directions for each bead (it points away from the MT ends, ± in equation (5)), this effect cancels out in the collective coordinate. The buckling of the filament contour u(x_B1, x_B2, x, t) is a function of the compression given by the bead positions x_B1 and x_B2, and the filament is assumed to be deformed in the lateral direction y only, with small angles to the x-axis. The deformation amplitude can be written as a superposition of sinusoidal modes with time-dependent amplitudes u_qn(t) and wavenumbers q_n = n·π/L (n ≥ 1) 59, 60, 61, i.e., u(x, t) = Σ_n u_qn(t) sin(q_n x). The filament is axially compressed by δ_L, and equation (7) then reads in frequency space such that the spectral forces acting on the beads with relative position x_R(ω) are known and can be subtracted. The resulting response equation describes how the driving force deforms the MT at different temporal and spatial frequencies.
Theoretical estimate for MT stiffening on short timescales
The question is how well our observations can be explained on the basis of an equation of forces, as introduced in equation (4). Fitting G′(ω) with a constant plateau and a power-law rise with free exponent (equation (1)), the transition frequency ω_t ≈ 3ω_1 could be extracted from the experimental data. ω_t was interpreted as the frequency at which molecular relaxations cannot follow the external filament deformation. The frequency-independent stiffness at low frequencies and the sudden increase of G′(ω) on a double-logarithmic scale could be well observed in single filaments as well as in the linear and triangular MT arrangements. From these observations, we conclude that the description of forces chosen in equations (4) and (5) to quantify our mechanistic model is reasonable. However, the stronger stiffening at high frequencies with p > 5/4 needs a more thorough theoretical investigation. In addition, the theoretical approach has to be extended in the future to also integrate the porous molecular structure, especially to explain the dependence of the transition frequency on the chemical stabilization of the microtubule (see Supplementary Results).
Stiffness estimate for a linear MT chain
The two-step elastic modulus G′(1,3), resulting in a fivefold stiffening in the longitudinal direction and a fivefold softening in the lateral direction compared to the one-step modulus G′(1,2), can be modelled as a serial connection of two springs (two filaments, 2fl) with MT length L/2 or wave number 2q. Reciprocal addition of two single-filament elasticities results in a two-filament sum elasticity G′_2fl(0, 2q), which is two times softer than that of a single filament. Alternatively, the two-step modulus G′(1,3) can be identified with a single filament of length L, or wave number q. However, this results in a fivefold decrease of the elasticity relative to that of a single filament with length L/2; the factor 1/3 arises from the length dependence of l_p(q).
Hence, an additional coupling term G′_cpl is required to explain the elastic behavior of the linear construct: G′_cpl(0, q) must be positive for longitudinal momentum transport and negative for lateral momentum transport.
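A minimal numerical illustration of this bookkeeping (Python); the absolute modulus is an arbitrary unit and the factors follow the serial-spring argument above, so this is a sketch of the accounting rather than a quantitative model:

```python
# Single filament of length L/2 (wave number 2q): reference elasticity.
g_half = 1.0                                     # arbitrary units

# Serial connection of two such filaments: compliances add reciprocally,
# so the two-filament construct is two times softer than a single one.
g_series = 1.0 / (1.0 / g_half + 1.0 / g_half)   # = g_half / 2

# Measured two-step response (1 -> 3): fivefold stiffer than the one-step
# response in the longitudinal direction.
g_measured = 5.0 * g_half

# Coupling term required on top of the serial-spring estimate.
g_cpl = g_measured - g_series
print(f"required coupling term: {g_cpl:+.2f} (positive -> longitudinal transport)")
```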
Experimental setup with optically trapped beads as actor and sensor
A single biotinylated microtubule was attached laterally to two Neutravidin coated beads displaced by x B1 and x B2 relative to the trap centers. During both half periods of an oscillation, the distance between the beads was first increased and then decreased resulting in tensile and compressive forces acting on the microtubule, respectively. Since microtubules are practically inextensible, they are bent locally at the point of attachment to beads (Fig. 1a) during the first half period 32 and buckled during the second half period due to their high compliance to compression forces 63 . The buckling amplitude along the filament is denoted by u(x, t) as illustrated in Fig. 1c.
In the experiments, we used a lateral stiffness of κ_opt ≈ 25 pN/µm per trap. The actor trap was typically oscillated at frequencies between 0.1 and 100 Hz. Stronger driving usually resulted in filament breaking close to one bead. In such cases, the measurements were excluded from further analysis. Also, we did not observe significant differences for repetitive measurements on the same filament, indicating that microtubules were not structurally damaged during oscillation. In some cases, one of the filaments detached from a bead.
These experiments have also been excluded from analysis. Elastic effects of the biotin linker can be neglected, since the effective length of this linker is in the Ångstrom range 64 and its spring constant 65, 66 is much larger than that of microtubules, both for buckling and stretching 32 .
Suitability of experimental approach
The use of optically trapped beads as anchor points for simple microtubule networks turned out to be a very suitable approach. Potential phototoxic effects such as bleaching and filament degradation did not noticeably affect the measurements, as indicated by the reproducibility of repeated oscillations on the same filament.
SI Methods and Material
The elastic relaxation forces along the filament ∫dF_κ,MT are reduced by the filament friction ∫dF_γ,MT and act in the lateral y-direction. Due to the holonomic constraint of constant filament length L and its connection to the optically trapped beads, the resulting elastic forces of the microtubule F_κ,MT push the beads outwards in the x-direction and are counteracted by the optical forces F_opt. Whereas the friction force F_γ,B on the bead in the sensor trap (blue) is negligibly small, the viscous force on the oscillating actor bead (red) counteracts the driving force F_drive. The tension-free equation of motion for relaxation (F_drive in positive x-direction) is then given by the balance of these forces for the left and the right bead, respectively.
Micro-rheology analysis
The measured, frequency-dependent displacements x_Bi, y_Bi of bead i in response to an applied actuation force define the response function A(ω). In active micro-rheology as used here, the driving force F = κ·x_L(t) is generated by a sinusoidal oscillation x_L(t) = A_a sin(ω_a t) of one optical trap with stiffness κ, amplitude A_a and driving frequency ω_a. To obtain the complete spectrum A(ω), the experiment has to be repeated several times for different actuation frequencies ω_a and evaluated according to Eq. (S1) for each frequency ω_a. As explained in the main paper, the measured bead displacements x_B, y_B are a superposition of the elastic trapping force, the viscous drag of the beads and the wanted viscoelastic properties of the material under investigation, i.e., of the microtubule filaments in our case. Hence, the response function A is a superposition of these contributions as well. As explained in (1, 2), this can be separated using Eq. (S2) to obtain the pure viscoelastic response function G_MT of the filament. Here, κ(i) and κ(j) are the trap stiffnesses of the corresponding traps, which have to be determined independently by calibration (3, 4). Different pre-factors 4πL and 8πL for the different directions take care of the hydrodynamic coupling. To ensure correct results, we tested the software implementation and measurement procedure for simple beads in water, where the viscoelastic response is known theoretically and has been measured experimentally (1). Since the motivation of the paper is to study the transport of mechanical stimuli, only the elastic components G′(i,j) (real part of G) are shown and discussed in the main paper. Viscous components (imaginary part of G) are shown in the SI Results (see below).
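The per-frequency evaluation described above can be sketched with a generic lock-in demodulation (Python). This is not the paper's Eq. (S1)/(S2), and the signal parameters are invented for illustration; it merely shows how amplitude and phase at one actuation frequency are extracted from a bead trace:

```python
import numpy as np

def demodulate(t, x_b, f_a):
    """Amplitude and phase of the bead response at the actuation frequency f_a."""
    ref_sin = np.sin(2 * np.pi * f_a * t)
    ref_cos = np.cos(2 * np.pi * f_a * t)
    # Fourier components at f_a (lock-in style), normalized to the window length.
    a = 2 * np.trapz(x_b * ref_sin, t) / (t[-1] - t[0])
    b = 2 * np.trapz(x_b * ref_cos, t) / (t[-1] - t[0])
    return np.hypot(a, b), np.arctan2(b, a)

# Synthetic bead trace: responds at the drive frequency with a lag, plus noise.
f_a = 5.0                                    # Hz, one actuation frequency
t = np.linspace(0.0, 2.0, 20000)             # s, ten full drive periods
x_b = 80e-9 * np.sin(2 * np.pi * f_a * t - 0.3) + 5e-9 * np.random.randn(t.size)

amp, phase = demodulate(t, x_b, f_a)
print(f"f_a = {f_a} Hz: amplitude = {amp * 1e9:.1f} nm, phase = {phase:.2f} rad")
```

Repeating this for each ω_a yields the full spectrum, from which the trap and drag contributions are then subtracted as described above.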
The molecular architecture of differently polymerized and stabilized filaments
We choose filaments polymerized in the presence of a non- or slowly hydrolysable GTP analog (GMPCPP), in addition to filaments assembled with GTP, since both filament types have significantly different mechanical properties (5), ultimately governed by the different molecular configurations illustrated in Fig. S2. After polymerization, GTP molecules in the microtubule lattice hydrolyze stochastically to GDP. While GTP and GTP-analog tubulins adopt a straight conformation, the hydrolysis at the β-tubulin leads to a kink of the GDP tubulin dimer, resulting in an intrinsic strain in the microtubule lattice (6, 7). This conformational change is slowed down by Taxol (8), which binds on the inside of the hollow tube (9, 10). This molecular picture has been used to theoretically recapitulate the tip structure and the rates of assembly/disassembly of microtubules (11) and the occurrence of long-lived arcs and rings in kinesin-driven gliding assays (12), and to transform MTs into inverted tubules facing their inside out by a specifically induced conformational change using spermine, a polyamine present in eukaryotic cells (13). Further, microtubules polymerized in the presence of slowly or non-hydrolyzable GTP analogs such as GMPCPP or γ-S-GTP have additional lateral inter-protofilament contacts between β-tubulins compared to GTP/GDP microtubules (5, 14). Assuming that the connection between individual αβ-tubulin dimers can be approximated by damped harmonic springs (15, 16), the damping of the intermolecular connections should affect the temporal response upon exertion of mechanical stimuli and thereby the transition frequency ω_t.
Lateral forces are negligible
In addition to the data shown in Fig. 2 of the main paper, we here compare the bead displacements along the x and y directions during a single-filament rheology experiment at two oscillation frequencies, f = 0.1 Hz and f = 100 Hz. As Fig. S3 shows, the total contributions in the lateral y-direction are negligibly small. Here, the lateral elastic MT buckling force F_κMT,y(x, x_Bj) is increased (reduced) by the MT drag force F_γMT,y(x, x_Bj) for deformation (relaxation). Both MT forces are equilibrated by the strong optical forces F_opt,y(x_Bj) and the weak viscous drag forces of the beads F_γ,y(x_Bj) in the lateral direction. The sum of these forces is zero for all oscillation frequencies and phasings, i.e., F_opt,y(x_Bj) + F_γ,y(x_Bj) + F_κMT,y(x, x_Bj) + F_γMT,y(x, x_Bj) ≈ 0. This situation is illustrated in Fig. S3.
Frequency dependent bead displacements
The displacements x_Bi of the beads are governed by the elastic optical trapping force, the viscous drag force of the beads, as well as the viscoelastic force from the microtubule filament according to Eq. (2) of the main article. In Fig. S4, we show the frequency dependence of the maximum actor and sensor bead displacement |x_Bi − x_Li| during filament buckling and filament stretching. While an increase of the maximum amplitude of the bead displacements by approximately one order of magnitude can be observed during buckling, the bead displacements stay approximately constant and proportional to the actor amplitude A_a during filament stretching. Already here, one can anticipate the connection between the constant low-frequency plateau of G′, its power-law rise above f_t ≈ 2 Hz, and filament buckling. The viscoelastic contribution of the trapped beads alone, moving in the purely viscous buffer medium, is much smaller than the effect observed here and is dominated by the corner frequency ω_c = κ/(6πR_Bη) ≈ 2500 Hz of the position power spectral density |x_B(ω)|² of the bead motion, which is much larger than the transition frequencies estimated for our MT constructs.
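The quoted corner frequency follows directly from the trap and bead parameters used in this study (a quick Python check; the water viscosity is a standard assumed value):

```python
import math

kappa = 25e-6        # trap stiffness, N/m (25 pN/um, as used in the experiments)
R_B = 1062e-9 / 2    # bead radius, m (bead diameter d = 1062 nm)
eta = 1.0e-3         # viscosity of water, Pa*s (assumed room-temperature value)

f_c = kappa / (6 * math.pi * R_B * eta)
print(f"corner frequency ~ {f_c:.0f} Hz")   # ~2500 Hz, as quoted above
```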
Estimation of the total viscous force
To quantify the absolute role of the different forces introduced in Eqs. (1) and (2) of the main paper, we determine these contributions in the following. The results shown here are the basis of Fig. 2D of the main paper.
The most obvious force is the viscous drag F_γ,B^tran of bead translation. The actor bead approximately follows the sinusoidal movement of its trapping focus; its drag force is much larger than that of the sensor bead, which is neglected for this reason. The movement of the actor trap is x_L1(t) = A_a sin(2πf_a t), resulting in the velocity v_L1(t) = ∂x_L1/∂t = 2πf_a A_a cos(2πf_a t). Only the maximal force components are considered in the following. Hence, the translational viscous drag force is given by Eq. (S3) and shown by the yellow line in Fig. S5B.
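A numeric sketch of this estimate (Python), assuming Eq. (S3) is the Stokes drag on a translating sphere; the amplitude and viscosity values are assumptions chosen to match the experimental range:

```python
import math

eta = 1.0e-3           # Pa*s, water (assumed)
R_B = 1062e-9 / 2      # m, bead radius (d = 1062 nm)
A_a = 400e-9           # m, one of the actor amplitudes used in the experiments

for f_a in (0.1, 1.0, 10.0, 100.0):            # Hz, typical drive frequencies
    v_max = 2 * math.pi * f_a * A_a            # maximal trap velocity
    F_max = 6 * math.pi * eta * R_B * v_max    # Stokes drag on the actor bead
    print(f"f = {f_a:6.1f} Hz -> F_drag ~ {F_max * 1e12:.3f} pN")
```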
The buckling filament causes both beads to rotate, resulting in a rotational drag force F_γ,B^rot, which we estimate analogously. For comparison, we also plotted in Fig. S5B the experimentally obtained frequency dependence of the total force of a bead alone (yellow markers) and of a bead/MT construct (red markers), together with the total force of all contributions estimated above (red line). For the experimental situation with the MT filament, the total force increases continuously but still intersects with the sum of all estimates of the viscous contributions made above at approximately f = 100 Hz. This indicates a strong additional contribution from the MT, very likely of elastic nature, as we obtained by the micro-rheology analysis for G′.
Contributions of deformation modes to G'(ω)
As stated in the main paper, we estimate that N = 3 deformation modes are excited at oscillation frequencies up to f_a = 100 Hz. The contributions of each additional mode are illustrated in Fig. S6, where we plotted the theoretical slope of G′(ω) to indicate the influence of a single mode and the frequency at which the next mode kicks in.
The first mode, causing a constant plateau, is dominant for low frequencies up to ω ≈ ω 1 .
Viscous components of single filaments
The viscous modulus G″(ω) of a simple bead in an ideally viscous solution such as water is linear in the frequency ω. A comparison to the theoretical prediction shown in Fig. S7B reveals that higher deformation modes n ≥ 2 only slightly change this linear relationship, and only for relatively high frequencies ω > 10ω_1, which is roughly the maximum frequency we resolve in our experiments (ω_1 ≈ 4 Hz typically; see main paper).
Viscous components of a linear connection of MTs
Similarly to the viscous components of single MT filaments, the viscous components G″(ω) of linearly connected MTs increase approximately linearly with the frequency ω.
Pre-stress in triangular networks
Free floating filaments are subject to Brownian forces and sometimes bend heavily. This can cause pre-stress during the construction of a network, i.e., the subsequent attachment of a filament to optically trapped beads. This effect becomes more prominent as the number of filaments in a network increases. Fig. S9 shows the elastic modulus G′ of three different equilateral triangles with a side length of 15 µm. For both oscillation directions, the plateau of G′ is larger for the connection of the first filament 1→2 and smaller for the connection of the second filament 1→3 for the first two triangles, compared to the third triangle, where the elastic moduli for both connections are approximately equal. This clearly indicates a pre-stress of the first MT compared to the second filament.
Viscous components of triangular networks
Again, we observe a linear relation between the viscous components G″(ω) of filaments in triangular networks and the frequency ω, as shown in Fig. S10, together with power-law fits with free exponent p ≈ 1. For the tangential oscillation along y, the viscous component G″(1,2) of the first filament deviates strongly from the expected linear response for the first two triangle constructs, indicating that the connection to either of the beads (1) or (2) may not have been perfect.
Comparison of transition frequencies
As described in the main paper, we found that the transition frequency ω_t, separating the constant plateau value of G′ at low frequencies from the high-frequency rise approximately proportional to ω^1.25, depends on filament length, stabilization, polymerization and especially on the geometry of the network. This is summarized in Fig. S11A. The transition frequency is highest for the triangular network, which is also the stiffest. The difference between filament stabilizations is clearly visible for long filaments. There is also a clear difference for different oscillation directions in all geometries: the transition frequency is much smaller for an oscillation lateral to the filament axis, indicating much faster stiffening in this direction.
In order to analyze how well the experimental transition frequencies match the theoretical predictions according to Eq. 3 of the main paper, we plotted the transition frequency as a function of the MT contour length, as shown in Fig. S11B. Here, we included the length dependence of the MT persistence length according to Pampaloni (18). For the persistence length l_p^∞ of MTs much longer than the critical length l_c = 21 µm, we used the values for L = 15 µm long MTs obtained in this study to reflect the different stabilizations.
Geometric effects of beads
MT filaments are attached laterally to the anchor beads, which have a diameter d = 1062 nm. This results in a torque on the beads during filament buckling, because the optical trapping force acts on their geometric centers. This causes the point of attachment of the filament to a bead to rotate. Hence, the precisely measured distance Δx_L + x_B1 − x_B2 between both beads does not coincide with the actual projected length p_MT of the buckled filament, as illustrated in Fig. S12A. For convenience, we assume the symmetric case with x_B1 = x_B2 = x_B and Δ_1 = Δ_2 = Δ_B = R_B·sin(ϕ) ≈ R_B·ϕ in the following. Neither the actual compression δ_L given by Eq. (S7) nor the rotation angle ϕ² = 4δ_L/L_MT can be measured directly.
However, both unknowns depend on each other, leading to a quadratic relation for ϕ. This can be substituted into Eq. (S7) to calculate the actual compression δ_L and to plot force–compression curves, i.e., the buckling force F = κ_1·x_B1 + κ_2·x_B2 versus the compression δ_L, as we show for two filaments of different length in Fig. S12C+D. The data shown here were obtained in a quasi-equilibrium, where we moved trap 1 in discrete steps of Δx_L1 = 50 nm every Δt = 100 ms.
In the ideal case, i.e., a perfectly axial application of force on the filament, the MT should not buckle, i.e., δ_L(F < F_crit) = 0, until a finite critical buckling force F_crit = π²EI/L²_MT is reached, above which the filament behaves like a spring with spring constant κ_MT, i.e., δ_L(F > F_crit) = (F − F_crit)/κ_MT (17). Here, we observe a nearly exponential dependence of the force on the compression for small δ_L < 400 nm and a linear dependence for δ_L > 400 nm. This is due to the imperfect, lateral application of the force on the filament. We indicated this behavior in Fig. S12C+D. However, we did not consider this geometric effect in our rheology experiments. We expect it to have only a minor effect on the measured viscoelastic properties G_MT of the filaments, but this has to be tested and included into the theory in the future.
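For orientation, F_crit can be evaluated by expressing the flexural rigidity through the persistence length, EI = l_p·k_B·T (Python; the persistence-length value is an assumed order of magnitude for stabilized MTs, not a fitted result of this work):

```python
import math

k_B_T = 4.1e-21          # J, thermal energy at room temperature
l_p = 2e-3               # m, assumed MT persistence length (~mm range)
EI = l_p * k_B_T         # flexural rigidity, N*m^2

for L in (5e-6, 15e-6):  # m, the two filament lengths used in this study
    F_crit = math.pi ** 2 * EI / L ** 2
    print(f"L = {L * 1e6:4.0f} um -> F_crit ~ {F_crit * 1e12:.2f} pN")
```

With these assumed numbers, F_crit falls in the low-pN range, consistent with the forces accessible to the optical traps.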
Microtubule buckling amplitude
For the integration step in equation (5), the arc length of the buckled filament contour has to be evaluated, which involves the square root √(1 + (∂u/∂x)²). Since this square root cannot be integrated analytically, we investigate the buckling amplitude u_qn as a function of the arc length L′_n for single bending modes n, where the ground mode n = 1 allows us to estimate the maximum possible deflection u_q1 > u(x).
Substituting ϕ = q_n x brings the arc-length integral into the form of an elliptic integral. However, the sum of all deformations in a filament is always smaller than the ground-mode buckling, i.e., u(x) < u_q1.
Linear system theory
Our theoretical description as well as our analysis assume a linear relationship between the microtubule buckling amplitude u_qn and the driving force F_D, such that u_qn scales proportionally with F_D (Eq. S13). In order to test whether the buckling responses of single MTs are linear with force, we analyzed the force dependency F(δ_L) upon a stepwise MT compression by δ_L. Fig. S14 shows that F(u_qn) indeed increases roughly linearly for not too large buckling amplitudes u_qn(δ_L), in accordance with Eq. (S13).
Linearity: By analyzing the normalized χ² value as a function of the number of data points included in a linear fit to the data, we find an approximately linear response up to u_qn ≤ 300 nm for short microtubules (L = 5 µm) and u_qn ≤ 1.4 µm for long filaments (L = 15 µm). This is equivalent to an oscillation amplitude of the laser trap of x_L < 500 nm for short and x_L < 1200 nm for long microtubules, respectively. In our rheology experiments, we usually analyze the microtubule response for three different oscillation amplitudes, A_a = 200 nm, A_a = 400 nm, and A_a = 600 nm. Hence, linear response is well fulfilled for long microtubules and, at least for the two smaller oscillation amplitudes, for short filaments. In addition, we always compare the results for different oscillation amplitudes to each other and never observe a significant difference.
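A sketch of this linearity criterion (Python); the synthetic force–amplitude data with an artificial saturation scale merely stand in for the measurements of Fig. S14, and the slope, noise and saturation values are invented:

```python
import numpy as np

# Synthetic force response: linear at small amplitudes, saturating beyond ~300 nm.
u = np.linspace(10e-9, 600e-9, 60)                # buckling amplitudes, m
F = 1e-5 * 300e-9 * np.tanh(u / 300e-9)           # force, N (assumed slope 10 pN/um)
F += 1e-13 * np.random.randn(u.size)              # measurement noise, N

# Normalized chi^2 of a through-origin linear fit versus number of points used:
# the fit degrades once the saturating points are included.
for n in (10, 20, 30, 40, 50, 60):
    slope = np.sum(F[:n] * u[:n]) / np.sum(u[:n] ** 2)   # least-squares slope
    chi2 = np.mean((F[:n] - slope * u[:n]) ** 2) / np.var(F[:n])
    print(f"n = {n:2d} (u_max = {u[n - 1] * 1e9:4.0f} nm): chi2_norm = {chi2:.4f}")
```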
Local forces acting along the filament
The contour of a filament deformed in the mode q_n can be described by a parametric curve along the filament. We would like to point out the strong dependence of the local bending force on the third power of the mode number n, meaning that the highest present order always dominates the buckling force. | 2018-04-03T02:10:25.932Z | 2017-05-01T00:00:00.000 | {
"year": 2017,
"sha1": "17fe7a7349a59be716c80097b02ca2304e864c54",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/s41598-017-04415-z",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "17fe7a7349a59be716c80097b02ca2304e864c54",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Chemistry"
]
} |
195833217 | pes2o/s2orc | v3-fos-license | Impact of the Ga flux incidence angle on the growth kinetics of self-assisted GaAs nanowires on Si(111)
In this work we show that the incidence angle of group-III element fluxes plays a significant role in the diffusion-controlled growth of III–V nanowires (NWs) by molecular beam epitaxy (MBE). We present a thorough experimental study on the self-assisted growth of GaAs NWs by using a MBE reactor equipped with two Ga cells located at different incidence angles with respect to the surface normal of the substrate, so as to ascertain the impact of such a parameter on the NW growth kinetics. The as-obtained results show a dramatic influence of the Ga flux incidence angle on the NW length and diameter, as well as on the shape and size of the Ga droplets acting as catalysts. In order to interpret the results we developed a semi-empirical analytical model inspired by those already developed for MBE-grown Au-catalyzed GaAs NWs. Numerical simulations performed with the model allow us to reproduce thoroughly the experimental results (in terms of NW length and diameter and of droplet size and wetting angle), putting in evidence that under formally the same experimental conditions the incidence angle of the Ga flux is a key parameter which can drastically affect the growth kinetics of the NWs grown by MBE.
Introduction
GaAs nanowires (NWs) are one of the most promising materials for the integration of III-V semiconductors on Si, since they can be grown by molecular beam epitaxy (MBE) on Si substrates via a self-assisted vapor-liquid-solid (VLS) mechanism 1-8 preventing the use of Au catalysts which would jeopardize the electronic and optoelectronic properties of these semiconductors, forming deep-level states in both of them. [9][10][11][12][13][14] When it comes to MBE, both Au-catalyzed and self-assisted growths of NWs are diffusion-controlled processes. Many theoretical and experimental studies were carried out to understand the growth mechanisms and to identify the parameters influencing the NW structure and the growth kinetics. 2,3,7,[15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31] First studies have shown that the catalyst droplet volume or shape 15,16,18,27,31 and, more recently, the droplet wetting angle 20,22,24 control the NW crystal structure through the location of the nucleation site. Moreover, the volume of the catalyst droplet controls the kinetics of the axial growth through the related capture surface, 25,26,28 and in particular, through the capture area for As in the case of self-assisted GaAs NWs. 18,22 Concerning growth parameters, little experimental work has been devoted to the influence of the Ga flux incidence angle on the NW properties. The morphology and chronology of formation of GaN NWs and GaN-AlN core-shell NWs, including the effect of the III and V source positions, were experimentally studied using the AlN marker method in ref. 32 and 33. Regarding the NW growth kinetics, the growth models developed so far take into account the Ga flux incidence angle 17,18,22,23,25,26,28,30 but do not demonstrate its influence. As an example, the model of Glas et al. 17 for self-assisted GaAs NWs was based on the assumption that the Ga flux adopted is always high enough to supply the Ga droplet, therefore neglecting the influence of the incidence angle of the Ga flux on the amount of Ga atoms collected by the droplets, and consequently on their volume and shape. Thus, despite all these important studies, to the best of our knowledge, no experimental study has so far been undertaken to ascertain how different Ga flux incidence angles can affect the NW growth kinetics under formally the same growth conditions.
Considering that for self-assisted GaAs NWs (i) the Ga droplet volume is determined by the balance between the droplet supply in Ga atoms and its depletion caused by the NW growth, and (ii) the droplet supply in Ga atoms occurs through three different ways, i.e. diffusion of Ga adatoms on the substrate, diffusion of Ga adatoms on the NW facets and direct impingement of Ga atoms on the droplet, it is thus expected that the incidence angle of the Ga flux has an influence on the amount of Ga atoms which can be collected by the droplet.
Based on these considerations, we decided to demonstrate experimentally the influence of the Ga flux incidence angle, further denoted as α, with respect to the surface normal of the substrate (i.e. with respect to the growth axis of vertically grown NWs on the substrate). To this end we used a MBE reactor equipped with two Ga cells at α ≈ 27.9° and α ≈ 9.3°, denoted as Ga(5) and Ga(7), respectively. We studied the axial and radial growth rates of GaAs NWs with a series of GaAs NWs grown for different growth times using either the Ga(5) or the Ga(7) cell. The experimental results have been explained by using a semi-empirical model so as to determine the physical factors which give rise to the significant differences observed with the two different Ga sources.
Experimental results
We grew a series of Ga(5)As and Ga(7)As NWs for different growth times ranging from 5 to 80 min, so as to obtain a broad description of the axial growth rate depending on the Ga source used. The growth conditions adopted (cf. Experimental section) are the same as those employed in ref. 34, which have proved to provide GaAs NWs with zinc-blende (ZB) structure when using the Ga(5) cell.
The impact of the incidence angle α on the NW growth kinetics is highlighted in Fig. 1(a) and (b), reporting respectively the evolution of the NW length and of the NW diameter (measured at the NW top just below the Ga droplet) as a function of the growth time. SEM images of the as-obtained NWs are shown in Fig. 1 in the ESI.† Firstly, Fig. 1 shows that, as expected, despite the equal value of the Ga(5) and Ga(7) fluxes in terms of planar growth rate (0.5 ML s−1), the angle α exerts a significant influence on the NW growth kinetics. It can be observed that for shorter growth times the lengths of NWs obtained with the Ga(5) and Ga(7) cells are comparable, whereas for longer growth times the Ga(7)As NWs are significantly shorter than their Ga(5)As counterparts. From Fig. 1(a), it can also be noticed that while the experimental points for Ga(5)As NWs can be fitted with a single linear regression corresponding to a NW axial growth rate of 1.9 nm s−1, those for Ga(7)As NWs lie on the same slope for growth times up to ≈17 min, but are fitted with a different one for longer growth times, corresponding to a NW axial growth rate of only ≈0.8 nm s−1. Such a result suggests that in the latter case the growth process undergoes two different growth regimes, named R1 and R2 in Fig. 1(a), with a transition from R1 to R2 at a growth time of about 17 min, corresponding to a NW length of about 1.8 µm (cf. vertical and horizontal dashed blue lines in Fig. 1(a)). As clearly highlighted in Fig. 1(b), the angle α affects not only the NW length evolution with the growth time but also the NW diameter evolution. In particular, while for Ga(5)As NWs the diameter increases linearly with the growth time (≈1 nm min−1), it is roughly constant (slope ≈ 0.2 nm min−1) in the case of Ga(7)As NWs.
Secondly, a difference in the droplet shape can be observed as the growth time increases (Fig. 2). In fact, while for shorter growth times the Ga droplets present equivalent features and a wetting angle β in the 138–142° range for both Ga(5)As and Ga(7)As NWs (Fig. 2(a) and (b)), for longer growth times the droplets exposed to the Ga(7) flux present a smaller wetting angle in the 120–130° range (cf. Fig. 2(d) and (f)) than their Ga(5) counterparts (still in the 138–142° range, as shown in Fig. 2(c) and (e)). Note that the wetting angle β is calculated from the relation R_NW = R_d sin β, with R_NW and R_d being the NW and droplet radii, respectively. It can be stated that the droplets on Ga(7)As NWs, contrary to their Ga(5)As counterparts, tend to decrease in size as the growth time increases. It should also be noticed that the difference in wetting angles between the Ga(5) and Ga(7) droplets can affect the crystal structure of the NWs, the former case leading to the ZB structure and the latter one to the wurtzite (Wz) one (cf. Fig. 2 in the ESI†).
Fig. 1 (a) Length of Ga(5)As NWs (black) and Ga(7)As NWs (red) as a function of the growth time. The vertical dashed blue line marks the separation between the two growth regimes observed for Ga(7)As NWs, while the horizontal one shows the corresponding NW length. (b) Diameter of Ga(5)As NWs (black) and Ga(7)As NWs (red) as a function of the growth time. The NW length and diameter are measured on about 100 NWs.
In order to obtain additional insights into the growth process at shorter growth times, a second series of Ga(5)As and Ga(7)As NW samples was also realized for growth times in the 20 s to 3 min range (cf. Fig. 3 in the ESI†). The results were compared with the first points obtained for longer growth times (Fig. 3). As far as the length is concerned (Fig. 3(a)), it can be noticed that the linear trend is confirmed for both the Ga(5) and Ga(7) cases also at very short growth times, with the axial growth rate still being equal to 1.9 nm s−1.
On the contrary, Fig. 3(b) shows that the trend for the evolution of the diameter is quite different. A rapid increase (≈5 nm min−1) of the diameter is observed for both Ga(5)As and Ga(7)As NWs at the shortest growth times (20 s to 5 min), whereas at the longest ones the Ga(5)As NWs show a linear radial growth rate (≈1 nm min−1), while their Ga(7)As counterparts present an almost constant diameter. In both cases, the NW diameter increase during the axial growth leads to NWs with an inverse tapered geometry. The different behaviors observed between short and long growth times are thus confirmed by the measurement of the tapering coefficient T% of the NWs, as defined by Colombo et al., 29 which is equal to 4–6% for the shorter growth times and to 0.5–1% for the longer ones, for both Ga(5)As and Ga(7)As NWs. This demonstrates that the radial growth compensating for the tapering effect is higher for the longer growth times (for which the diameter increase is low) than for the short ones (for which such an increase in diameter is higher). It should also be noticed that the NW diameter at nucleation, occurring after about 12 s of growth, is ≈15 nm and corresponds to the average diameter of the Ga droplets as observed before the NW nucleation (cf. Fig. 4 in the ESI†).
Quantitative estimates
Due to the nature of the substrate surface (epi-ready SiO2-terminated Si substrate) and to the low density of NWs in our experiments, in the following we shall (i) neglect the re-emission and shadowing effects for Ga atoms and (ii) consider, among the various sources of Ga and As atoms that supply the droplet, only the simplest: direct impingement of both Ga and As atoms, and diffusion of Ga adatoms on the substrate and on the NW facets. Since estimates based on experimental data show that the direct impingement of As atoms is not sufficient to support the growth process, following an assumption already proposed in ref. 35, we shall also include a fixed amount of re-emitted As atoms.
While the microscopic features of the NW growth by the layer-by-layer mechanism depend strongly on the crystal structure of the materials involved, and are accompanied by oscillations in both the droplet concentrations and the truncated facet under the droplet, 20 we shall adopt here an effective (macroscopic) point of view, limiting our study to the evolution of the NW length and diameter. Meanwhile, we still account for both the wetting angle evolution and direct or inverse tapering, but do not relate them to Wz and/or ZB formation. The proposed model can be further refined to a small-scale approach so as to include the layer-by-layer mechanism and crystallographic features, but for simplicity we present a minimal version.
By extending previous semi-analytical models 3,25,28,30 proposed for Au-catalyzed and self-assisted III-V NWs, we report a description of the NW growth kinetics using generic Ga and As sources located at angles α_Ga and α_As, respectively, with respect to the substrate normal. The main original feature of our model is to assume that, in agreement with stability requirements, 36 the wetting angle β of the Ga droplet can only take values in an interval (β_min, β_max) = (55°, 140°), and to associate with these limit values mechanisms that allow the NW diameter below the droplet to increase (or to decrease), resulting in inverse (or direct) tapering.
In order to identify the model parameters we shall use only the experimental data obtained for the NWs grown by using the Ga(5) source. Then, using these parameters, we simulate the NW growth using the Ga(7) source so as to compare the predicted values for both the axial growth and the changes of the NW diameter (under the droplet) with the experimental data reported above.
3.1 The capture surfaces for the Ga atoms
(a) The amount of Ga atoms, further denoted as q_Ga^sub, able to reach the droplet by surface diffusion on the SiO2-terminated Si substrate (which must be followed by diffusion along the NW facets) exists only as long as the NW length, further denoted as L(t), is such that L(t) < l_facet, where l_facet corresponds to an average diffusion length on the NW facets. Per time unit we have q_Ga^sub(t) = F_Ga S_sub(t), where F_Ga is the Ga flux on the SiO2-terminated Si substrate, S_sub(t) is the substrate capture area, l_SiO2 is the average diffusion length on the SiO2-terminated Si substrate and r(t) is the NW radius.
(b) The amount of Ga atoms able to reach the droplet by diffusion along the NW facets can be written as q_Ga^NW(t) = F_Ga tan(α_Ga) S_facet(t), where S_facet(t) = 2r(t)·min(l_facet, L(t)) is the NW facet capture area projected on the plane normal to the Ga flux direction. Here, the factor F_Ga tan α_Ga is the value of the flux on a vertical surface when the nominal flux (i.e. the flux on the plane normal to the direction of the source) is F_Ga/cos α_Ga. Finally, the min function accounts for the NW length L(t) only for NWs shorter than l_facet. (c) The amount of Ga atoms supplying the droplet by direct impingement is q_Ga^droplet(t) = (F_Ga/cos α_Ga) S(α_Ga, β(t), r(t)). Here the factor 1/cos α_Ga accounts for the position of the source, and the factor S(α_Ga, β(t), r(t)) is the exact value of the droplet area projected in the direction normal to the flux when the wetting angle of the droplet is β(t) and the droplet is located on top of a NW with radius r(t), as reported by Glas. 37
3.2 The amount of As atoms supplying the droplet
By using the experimental data for the Ga(5)As NWs and a piecewise linear interpolation for the NW radius and length, we can estimate the amount of As atoms needed to grow the Ga(5)As NWs at t = 80 min from the NW volume as N_As ≈ 4V_NW/a_GaAs³, where a_GaAs is the lattice parameter of ZB GaAs. We point out here that the values of the NW diameter reported in Fig. 1 do not include the vapor-solid NW radial growth contribution. Moreover, as the experimental results show that the Ga(5)As NW diameter is constantly increasing, we can deduce that, except at the very early stages of the growth process, the droplet wetting angle equals its maximum value β_max, experimentally measured in the 138–142° range (see Fig. 2). With respect to previous models in ref. 3, 25, 28 and 30, the existence of a maximum (minimum) value for the droplet wetting angle is a feature of our model that allows including (as described below) a mechanism of increase (decrease) of the NW radius under the droplet.
Fig. 3 Graphics of (a) the NW length and (b) diameter as a function of the growth time including the data for short growths (20 s to 3 min). Black and red points correspond to Ga(5)As NWs and Ga(7)As NWs, respectively. The black line in (a) is a guide for the eyes for both black and red points.
If the incorporation of As atoms supplying the droplet resulted only from direct impingement, then knowledge at time t of the NW radius r(t), the droplet wetting angle β(t), the As source incidence angle α_As and the nominal As4 flux F_As would allow a straightforward computation of the amount of As atoms, N_As, supplying the droplet. In our case, an estimation by excess is N_As ≤ F_As ∫₀^T S(α_As, β_max, r(t)) dt, where T is the growth duration and S(α_As, β_max, r(t)) is the projected droplet area on the plane normal to the As4 flux direction, as reported by Glas. 37 With our numerical data for the Ga(5) source, and in agreement with previously reported results in ref. 17 and 35, we have found that the amount of As atoms supplying the droplet from direct impingement is insufficient for the Ga(5)As NW growth. More exactly, direct impingement provides only ≈89% of the amount of As atoms needed for the NW growth. Thus, we shall follow a previously proposed mechanism 17 and also include an additional As retro-diffusion flux factor ε, so that q_As^droplet(t) = (1 + ε)F_As S(α_As, β(t), r(t)), where, from numerical estimates, we take§ ε = 0.127. Obviously, the above description of the Ga and As sources supplying the droplet holds in an isothermal process at a low density of NWs (in which case the shadowing effects can be neglected).
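For concreteness, the supply terms (a)–(c) and the As term can be collected as simple rate functions (Python). The projected droplet area S(α, β, r) of Glas (ref. 37) is replaced here by a crude spherical-cap placeholder, and the annular substrate capture area is our assumption for S_sub, so this sketch illustrates the structure of the model rather than its exact expressions:

```python
import math

def s_droplet(alpha, beta, r):
    # Placeholder for the projected droplet area of Glas (ref. 37): the droplet
    # radius follows from R_NW = R_d sin(beta); its cross-section serves as an
    # upper bound for the projected area.
    R_d = r / math.sin(beta)
    return math.pi * R_d ** 2

def q_ga_sub(F_ga, r, L, l_sio2, l_facet):
    # (a) Substrate diffusion, active only while L < l_facet; the capture area
    # is assumed here to be an annulus of width l_SiO2 around the NW base.
    if L >= l_facet:
        return 0.0
    s_sub = math.pi * ((r + l_sio2) ** 2 - r ** 2)
    return F_ga * s_sub

def q_ga_facet(F_ga, alpha_ga, r, L, l_facet):
    # (b) Diffusion along the NW facets: projected sidewall capture area.
    return F_ga * math.tan(alpha_ga) * 2 * r * min(l_facet, L)

def q_ga_direct(F_ga, alpha_ga, beta, r):
    # (c) Direct impingement of Ga on the droplet.
    return (F_ga / math.cos(alpha_ga)) * s_droplet(alpha_ga, beta, r)

def q_as_direct(F_as, alpha_as, beta, r, eps=0.13):
    # As supply: direct impingement plus the retro-diffusion factor eps.
    return (1 + eps) * F_as * s_droplet(alpha_as, beta, r)
```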
Growth mechanism description
We shall further assume that there is a critical concentration threshold, 17 further denoted as c*, such that solidification occurs only if the droplet concentration c(t) ≥ c* (oversaturation). The growth process can be described as follows (see the ESI†): 1. At fixed t, let L(t), r(t), β(t) and c(t) be the NW length, NW radius, droplet wetting angle and As concentration in the droplet, respectively. The size and concentration of the droplet provide the amount of Ga and As atoms in the droplet, further denoted as Q_Ga(t) and Q_As(t). Then, during a small time interval (t, t + Δt), we can update Q_Ga(t) and Q_As(t) so as to account for the amount of atoms supplying the droplet as described previously: Q̃_Ga(t) = Q_Ga(t) + [q_Ga^sub(t) + q_Ga^NW(t) + q_Ga^droplet(t)]Δt and Q̃_As(t) = Q_As(t) + q_As^droplet(t)Δt.
2. The knowledge of Q̃_Ga(t) and Q̃_As(t) provides an estimate for the concentration as c̃(t) = Q̃_As(t)/(Q̃_Ga(t) + Q̃_As(t)), so that, depending on the value of c̃(t), several scenarios may occur. 2.1 The generic case occurs when the updated concentration is such that c̃ > c*. In this case there is a unique amount Q(t) = Q̃_As(t)·(1 − c*/c̃)/(1 − 2c*) of equal quantities of Ga and As atoms that can form a crystalline solid phase, such that for the remaining quantities Q_Ga(t + Δt) = Q̃_Ga(t) − Q(t) and Q_As(t + Δt) = Q̃_As(t) − Q(t) we obtain c(t + Δt) = Q_As(t + Δt)/(Q_As(t + Δt) + Q_Ga(t + Δt)) = c*. (7) Thus, both the NW length and diameter increase by amounts that depend on both the solid material and the remaining liquid quantities: if Q_Ga(t + Δt) and Q_As(t + Δt) can form a droplet with β(t + Δt) < β_max, the NW grows only in the axial direction. If instead Q_Ga(t + Δt) and Q_As(t + Δt) cannot form a droplet with radius r(t) and wetting angle β(t + Δt) ≤ β_max, then both an increase of the NW radius (under the droplet) and axial growth take place. In this case, the solid phase will modify both the NW radius and the NW length so as to fit the remaining liquid quantities Q_Ga(t + Δt) and Q_As(t + Δt) in a droplet with a wetting angle β(t + Δt) = β_max.
2.2 On the opposite, if c̃ ≤ c*, which may be the case if for instance q_Ga^sub(t) + q_Ga^NW(t) + q_Ga^droplet(t) > q_As^droplet(t), solidification will not occur but the droplet will change its volume. In this situation, the generic case occurs when the droplet increases its volume at a fixed NW radius. But it may happen that β(t) = β_max, so that the wetting angle cannot be increased further. In this case, a certain amount of Ga atoms cannot be incorporated into the droplet, because the pinning of the droplet on the NW top is unstable. The instability of the droplet pinned at the NW top can lead to various scenarios, among which we cite kinking induced by wetting of the NW top and NW facets and/or droplet topology changes by separation. This situation is very similar to the one encountered when a droplet is supplied by Ga atoms only. In that case, since the wetting angle is bounded by β_max, incorporation of Ga atoms into the droplet stops at this value of the wetting angle. Decreasing the amount of Ga atoms that can be incorporated in the time interval (t, t + Δt) increases the concentration c̃(t). At the upper limit, when only As atoms supply the droplet, the droplet concentration c̃(t) increases, so that the NW length increases and the droplet decreases its volume. Similarly, at the lower limit, when due to solidification the liquid volume cannot fill a droplet with radius r(t) and wetting angle β(t + Δt) > β_min, the solid phase will decrease the NW radius so as to obtain the unique r(t + Δt) able to sustain the remaining volume at a wetting angle β(t + Δt) = β_min.
§ The 12.7% missing As atoms are computed with respect to the total amount of As needed; the retro-diffusion coefficient represents the percentage of the same quantity with respect to the total amount of As from direct impingement.
As proposed above, the model has three parameters: the two diffusion lengths l_facet and l_SiO2, and the retro-diffusion factor ε.
Previous models consider c* ≈ 0.01, 17 l_facet ≈ 1–5 µm (ref. 26 and 38) and l_SiO2 ≈ 50–90 nm. 18,39 We have implemented the above described model with the initial conditions r(0) = 7.5 nm, c(0) = c* = 0.01, β(0) = π/2 and L(0) = 0, and computed the evolution of the NW length, the NW diameter, the droplet size and the wetting angle, as well as the amount of Ga and As atoms incorporated into the droplet during the process for the Ga(5) source. The best results, presented in Fig. 5 (left), were obtained using the following parameters: l_SiO2 ≈ 70 nm, l_facet ≈ 1.8 µm and ε = 0.13, in good agreement with the previously cited references. 18,26,38,39 The parameters obtained by the best fit using the Ga(5)As NW experimental data were subsequently used to predict the length and diameter evolutions of the Ga(7)As NWs. The results are reported in Fig. 5 (right).
Fig. 5 Numerical results (blue lines) obtained with the semi-empirical model for the Ga(5) and Ga(7) sources. On the first line: the amount of Ga atoms supplying the droplet (blue) and the amount of Ga atoms from the liquid droplet used for the NW growth (red) as a function of the growth time. On the next lines: the droplet radius, the wetting angle, the Ga/As ratio supplying the droplet, the NW length (computed, blue; experimental, red) and the NW radius (computed, blue; experimental, red). Numerical parameters are identified by fitting only the Ga(5) experimental data (left column). Using the same model parameters, but for the Ga(7) source, we have obtained the numerical results (blue lines) in the right column, plotted together with the experimental data (red points and error bars).
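A minimal sketch of the time-stepping scheme of steps 1 and 2 (Python). The droplet is treated as a spherical cap at fixed contact radius, only the axial-growth branch is implemented, and the liquid atomic volume is an assumed number, so this is a structural illustration of the update rule rather than the full model:

```python
import math

OMEGA_GAAS = (0.565e-9) ** 3 / 4   # volume per GaAs pair in ZB GaAs (a^3/4), m^3
OMEGA_L = 2.0e-29                  # assumed atomic volume in the liquid droplet, m^3
BETA_MIN, BETA_MAX = math.radians(55), math.radians(140)

def cap_volume(r, beta):
    # Spherical-cap volume for a droplet of contact radius r and wetting angle beta.
    return (math.pi * r ** 3 / 3) * (1 - math.cos(beta)) ** 2 \
        * (2 + math.cos(beta)) / math.sin(beta) ** 3

def solve_beta(V, r):
    # Bisection inversion of cap_volume for beta within the stability interval.
    lo, hi = BETA_MIN, BETA_MAX
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cap_volume(r, mid) < V else (lo, mid)
    return 0.5 * (lo + hi)

def step(Q_ga, Q_as, L, r, dt, q_ga, q_as, c_star=0.01):
    """One update of steps 1-2: supply, optional solidification, droplet reshaping."""
    Q_ga, Q_as = Q_ga + q_ga * dt, Q_as + q_as * dt
    c = Q_as / (Q_ga + Q_as)
    if c > c_star:
        Q = Q_as * (1 - c_star / c) / (1 - 2 * c_star)  # solidified GaAs pairs
        Q_ga, Q_as = Q_ga - Q, Q_as - Q
        L += Q * OMEGA_GAAS / (math.pi * r ** 2)        # purely axial growth here
    # Wetting angle from the remaining liquid volume at fixed contact radius;
    # the full model instead changes r once beta hits beta_min or beta_max.
    beta = solve_beta((Q_ga + Q_as) * OMEGA_L, r)
    return Q_ga, Q_as, L, beta
```

Iterating `step` with the supply rates sketched in Section 3 reproduces the qualitative behavior discussed below (droplet inflation at early times, then steady axial growth); the radius-changing branches at β_min and β_max would have to be added for quantitative comparisons.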
We are now able to explain the main differences induced by the source position: at very short times (<1 min), starting with identical NW geometry and droplet size as well as identical Ga fluxes on planar surfaces, since the amount of As atoms captured by the droplet is very small, both the Ga(5) and Ga(7) droplets increase their volumes. Meanwhile, even in this regime, the As amount is sufficient to supply the axial growth of the NW. But since the amount of Ga atoms exceeds the amount of As atoms, the critical wetting angle is rapidly attained (at t ≈ 20 s, as shown in Fig. 3 in the ESI†) for both sources as the droplet radius R(t) increases. As the NW length increases, the amount of Ga atoms supplying the droplet by diffusion over the NW facets from the Ga(5) source is about 3 times larger than that from the Ga(7) source. This quantity becomes dominant for the droplet supplied by the Ga(5) source, while for that supplied by the Ga(7) source it has the same order of magnitude as the amount coming from diffusion on the substrate.
As shown in Fig. 5, due to the high F_As/F_Ga ratio, the amount of As atoms supplying the droplet has the same order of magnitude as the amount of Ga atoms for both sources all along the growth process. This means that all As atoms supplying the droplet are transferred to the solid phase at each time step. But the remaining liquid phase contains fewer Ga atoms with the Ga(7) source than with the Ga(5) source; as a consequence, the increase of the NW diameter (under the droplet) of the Ga(7)As NWs is slower than that of the Ga(5)As NWs. In turn, this implies that gradually the size of the droplet of the Ga(5)As NWs increases faster than that of the Ga(7)As NWs. The larger the droplet radius, the larger the amount of As atoms supplying the droplet, and this explains the faster axial growth of the Ga(5)As NWs with respect to the Ga(7)As NWs for t > 17 min.
At t ≈ 17 min, corresponding to L = 1.8 µm, when the length of the NWs overcomes the diffusion length on the NW facets, a large amount of the Ga supplying the droplet is gradually lost; but since the Ga flux on the NW facets with the Ga(5) source is higher than the amount of Ga atoms consumed by the NW growth, this is not a significant event for the Ga(5)As NWs. For the Ga(7)As NWs, the As/Ga ratio suddenly becomes greater than 1 and, as a consequence, additional Ga atoms from the droplet are used for solidification at each time step. As shown in Fig. 5 (right), for the Ga(7)As NWs the NW diameter stops increasing, the wetting angle decreases and the axial growth rate decreases accordingly.
The sudden loss of the Ga atoms supplying the droplet from substrate diffusion at t ≈ 17 min is actually a smoother transition between a regime dominated by the Ga atoms supplying the droplet by diffusion on the substrate and a regime dominated by the Ga atoms supplying the droplet by diffusion on the NW facets. Including this transition in the model would affect the local (in time) length and radius values but would have no significant impact on the qualitative results.
These considerations highlight the importance of the Ga adatom diffusion on the substrate, without which a large part of the Ga collected by the droplet would be missing and the experimental data could not be explained. Such a result is consistent with models previously developed by others 3,18,25,26,28,30 but should be considered as specific to the diffusion of Ga adatoms on SiO2-terminated Si substrates with a thin SiO2 surface layer (1–2 nm thick), where the Ga adatom diffusion length is longer, whereas Ga adatoms can behave differently on the thicker SiO2 masks (typically 10–20 nm thick) used for substrate patterning, 18 as shown elsewhere. 23,34 It is interesting to notice that, in agreement with the results in ref. 7 and 30, both classes of NWs evolve toward a stationary growth regime in which the amounts of As and Ga atoms are identical and the growth mode is only axial. This asymptotic behavior is determined by two main factors: the fact that the V/III flux ratio is greater than 1 and the existence of a diffusion length for Ga adatoms along the NW facets. This is easily understood in a simplified framework when the NW radius is assumed constant, but can be extended straightforwardly to variable NW radius growth models. Indeed, if the growth process is in a Ga-excess range, the droplet radius increases but, since the V/III flux ratio is greater than 1, the system evolves toward a regime where the droplet is supplied with equal amounts of Ga and As atoms. On the opposite, in the As-excess range, the droplet decreases its volume and the direct flux amount on the droplet decreases for both species. However, since the droplet has an additional source of Ga atoms from NW facet diffusion, the system will evolve again toward a regime where the droplet is supplied with equal amounts of Ga and As atoms. These two arguments hold also when the NW radius evolves during the growth. The main reason for this is that the amount of atoms supplying the droplet from the direct flux scales (up to a bounded factor) like r², while that of atoms attaining the droplet through diffusion on the NW facets scales like r.
Conclusions
In conclusion, we experimentally demonstrated the influence of the incidence angle of the Ga flux on the growth kinetics of self-assisted GaAs NWs grown on SiO2-terminated Si substrates. The experimental results demonstrate that this growth parameter significantly affects the NW length and diameter evolution. Subsequently, we developed a model and performed numerical simulations so as to fully explain the experimental results.
We developed a semi-empirical model and numerical simulations which highlight that the impact of the incidence angle of the Ga flux on the NW growth kinetics can be explained only by accounting for the contribution of Ga adatoms diffusing from the substrate surface to the Ga droplet. Such a result should be considered as specific to the diffusion of Ga adatoms on the epi-ready SiO2-terminated Si substrate, whereas Ga adatoms behave differently on patterned Si substrates with a thick SiO2 mask.
The second, equally important, factor is the diffusion length of the Ga adatoms on the NW facets. The role of such a contribution to the supply of the Ga droplet becomes important when the NW length exceeds this value, so that the droplet can no longer be supplied by the adatoms diffusing from the substrate. It then becomes the main contribution to the droplet supply and, as expected, depends on the Ga flux incidence angle. As a consequence, the difference in length and diameter between GaAs NWs grown with different Ga flux incidence angles can be explained assuming that variations in the Ga supply cause a different response of the Ga droplet in the two cases once the NW length exceeds the diffusion length of Ga adatoms on the NW facets. This will modify the volume and shape of the Ga droplet, thus affecting the capture surface for As atoms and consequently, when the wetting angle of the Ga droplet becomes equal to its maximum value of typically 140°, it will modify both the NW axial growth rate and the NW diameter.
Ultimately, the results reported here show that the incidence angle of the Ga flux is an essential parameter for obtaining good control over self-assisted GaAs NWs grown by VLS-MBE. Such a result is quite significant, since it opens up the possibility, given Ga cells with appropriately different incidence angles, of achieving fine control over the NW geometry, and probably also over the NW crystal structure, by quickly modifying the amount of the incident Ga flux and therefore the amount of Ga supplying the droplet.
Experimental section
The samples in this study were grown in a Riber 32 MBE reactor equipped with two Ga cells with different flux incidence angles, equal to 27.9° (denoted the Ga(5) cell) and 9.3° (denoted the Ga(7) cell) respectively, and an As4 valved cracker cell with a flux incidence angle of 41°. All substrates employed for the growths consisted of 1 × 1 cm² chips of boron-doped Si(111) (0.02-0.06 Ω cm) with an epi-ready surface oxide layer (≈1-2 nm thick). The substrates were cleaned by sonication in acetone and ethanol for 10 min and degassed at 200 °C in ultra-high vacuum before introduction into the MBE reactor. In all cases 1 ML of Ga was predeposited at 520 °C, always with the Ga(5) cell, so as to form Ga droplets, which subsequently became pinned in the surface oxide layer when the substrate temperature was increased. 40,41 The substrate temperature was then raised to 610 °C in 10 min and stabilized for 2 min. The substrate was then exposed to Ga and As4 fluxes. The Ga flux originated from either the Ga(5) or the Ga(7) cell, but in all cases it corresponded to a planar growth rate of 0.5 ML s⁻¹, defined in terms of the equivalent growth rate of a 2D GaAs layer grown on a GaAs substrate, as measured by reflection high-energy electron diffraction (RHEED) oscillations. Similarly, the As4 flux corresponded to an equivalent 2D GaAs layer growth rate 5 of 1.15 ML s⁻¹, providing an As/Ga flux ratio F_As/F_Ga = 2.3 for GaAs growth on the substrate. The NW growths were finally stopped by closing the shutters of the Ga and As4 cells simultaneously and rapidly decreasing the sample temperature, so as to preserve the Ga droplet on the NW top.
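As a rough feel for why the incidence angle matters, the following sketch computes the quoted V/III ratio and compares the geometric sidewall-collection factor for the two Ga cells, assuming the common tan(α) scaling for the flux intercepted by a cylindrical NW sidewall (a textbook estimate, not the model actually used in this work):

import math

# Fluxes quoted as equivalent 2D growth rates (ML/s) on the substrate plane.
F_Ga, F_As = 0.5, 1.15
print(f"V/III flux ratio F_As/F_Ga = {F_As / F_Ga:.2f}")   # -> 2.30

def sidewall_collection(alpha_deg):
    # Geometric estimate: atoms intercepted per unit time by the sidewall
    # of a cylinder (radius r, length L) scale as 2*r*L*F*tan(alpha),
    # where alpha is measured from the substrate normal.
    return math.tan(math.radians(alpha_deg))

for cell, alpha in (("Ga(5)", 27.9), ("Ga(7)", 9.3)):
    print(f"{cell}: alpha = {alpha:5.1f} deg, relative sidewall collection "
          f"~ tan(alpha) = {sidewall_collection(alpha):.3f}")

Under this assumption the Ga(5) cell feeds the sidewalls roughly three times more strongly than the Ga(7) cell, consistent with the qualitative role attributed to the incidence angle above.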
Conflicts of interest
There are no conflicts to declare.
"year": 2019,
"sha1": "f1b3b7bb521cc1e5f6b95573b9d841c7e6f08e08",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2019/na/c9na00443b",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "010ab0d07517c65f2758512055679bf40b4a73b0",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Physics",
"Materials Science"
]
} |
Quantum Field Theory of Correlated Bose-Einstein Condensates: I. Basic Formalism
Quantum field theory of equilibrium and nonequilibrium Bose-Einstein condensates is formulated so as to satisfy three basic requirements: the Hugenholtz-Pines relation; conservation laws; identities among vertices originating from Goldstone's theorem I. The key inputs are irreducible four-point vertices, in terms of which we derive a closed system of equations for Green's functions, three- and four-point vertices, and two-particle Green's functions. It enables us to study correlated Bose-Einstein condensates with a gapless branch of single-particle excitations without encountering any infrared divergence. The single- and two-particle Green's functions are found to share poles, i.e., the structure of the two-particle Green's functions predicted by Gavoret and Nozières for a homogeneous condensate at $T=0$ is also shown to persist at finite temperatures, in the presence of inhomogeneity, and also in nonequilibrium situations.
I. INTRODUCTION
The aim of this paper is to derive a closed system of self-consistency equations for the single- and two-particle Green's functions of correlated Bose-Einstein condensates, which is formally exact and can also be used in practical calculations to describe both equilibrium and nonequilibrium condensates. This will be performed in such a way that it automatically meets three exact requirements: (i) the Hugenholtz-Pines relation predicting a branch of gapless single-particle excitations [1]; (ii) conservation laws [2][3][4]; (iii) identities among vertices [5,6] originating from Goldstone's theorem I, i.e., the first proof [7,8].
We have already made similar attempts in terms of (i) and (ii) [9][10][11]. However, the resulting self-consistent perturbation expansion has encountered an infrared divergence, just as in the case of the simple perturbation expansion [12] starting from either the ideal gas or the Bogoliubov theory [13], which has prevented us from performing practical calculations on correlated Bose-Einstein condensates. A key additional observation here, which originates from our previous renormalization-group study [6,14], is that the infrared divergence can only be removed by extending the self-consistency procedure beyond the self-energies up to the four-point vertices, so as to satisfy the hierarchical identities among two-, three-, and four-point vertices [5,6] dictated by Goldstone's theorem I [7,8].
The background of the present study is briefly sketched as follows. Bogoliubov [13] pioneered a microscopic description of interacting Bose-Einstein condensates, predicting that the quadratic energy-momentum relation of free particles should change, upon switching on the interaction, into a linear sound-wave-like dispersion whose speed is proportional to the square root of the bare interaction U_0. Beliaev [15] formulated a field-theoretic perturbation expansion in terms of Green's functions. Hugenholtz and Pines [1] proved that single-particle excitations should have a gapless branch. Gavoret and Nozières [12] performed a structural analysis of the perturbation expansion for the single- and two-particle Green's functions to show that they have a common branch of poles. Nepomnyashchiȋ and Nepomnyashchiȋ [16,17] used the identity between the two- and three-point vertices derived by Gavoret and Nozières [12] to conclude that the anomalous self-energy should vanish in the low energy-momentum limit, contrary to the Bogoliubov theory where it is finite and proportional to the bare interaction U_0. These basic studies consider only homogeneous Bose-Einstein condensates in equilibrium at T = 0. The field-theoretic approach has also encountered difficulties in practical applications, such as the infrared divergence mentioned above or the conserving-gapless dilemma [18,19].
The present formulation covers both equilibrium condensates at finite temperatures and nonequilibrium ones. It proceeds by combining Schwinger's functional derivative method based on the generating functional [2,[20][21][22]], the Legendre transformation to the effective action [8,[21][22][23][24]], the Luttinger-Ward functional [25], and the conserving-gapless condition [9,11]. A similar approach was adopted previously to analyze properties of two-particle Green's functions [10], which however reached the erroneous conclusion that the single- and two-particle Green's functions do not have common poles. It is reexamined here by (i) incorporating the identities among the vertices and (ii) correcting the form of the perturbation. The resulting revised conclusion is that the single- and two-particle Green's functions do share poles, not only at T = 0 in a homogeneous condensate, as predicted by Gavoret and Nozières [12] and restated recently by Watabe [26], but also at finite temperatures, in the presence of inhomogeneity, and in nonequilibrium situations. As a bonus, we will be able to clarify the connections among the vertices which were not given in the Gavoret-Nozières study [12].
This paper is organized as follows. Section II studies properties of the condensate wave function Ψ and Green's functions G in equilibrium in terms of the effective action. Section III derives expressions for the three-point and four-point (i.e., two-particle) Green's functions based on the functional derivative method. Section IV obtains the self-energies in terms of Ψ, G, and vertices. Section V summarizes the key equations derived and supplements them with equations for the irreducible four-point vertices to construct a closed system of equations. Section VI performs a nonequilibrium extension. Section VII presents concluding remarks.
A. System and Partition Function
We consider a system of identical bosons with mass m and spin 0 described by the dimensionless action [8,21,22]. Here ψ is the complex bosonic field and ψ* its conjugate, x ≡ (r, τ) specifies a space-"time" point with 0 ≤ τ ≤ β ≡ (k_B T)^{-1} (k_B: Boltzmann constant, T: temperature), p̂ ≡ −iℏ∇ is the momentum operator, µ is the chemical potential, and U is the interaction potential. We regard ψ(x) and ψ*(x) as elements of a column vector, and will often express ψ_j(x) ≡ ψ(ξ) with ξ ≡ (j, x) and j = 1, 2. Next, we introduce the grand partition function Z_JI ≡ Z[J, I] with extra source functions [20] J(ξ) and I(ξ, ξ′), where T_τ is the "time"-ordering operator [27] and the subscript JI emphasizes that J and I are finite. The introduction of the two-point external source function I(ξ, ξ′), besides the J(ξ) of the standard formalism [8,21,22], is one of the key ingredients here. Indeed, it enables us to express the effective action in terms of the renormalized Green's function G(ξ, ξ′) instead of the bare propagator G_0(ξ, ξ′), as seen below.
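The display equations defining the action and the source-extended partition function did not survive extraction. A standard form consistent with the surrounding definitions (a reconstruction under the usual conventions, not necessarily the paper's exact normalization or equation numbering) reads:

\[
S[\psi^*,\psi]=\int dx\,\psi^*(x)\Big(\hbar\,\partial_\tau+\frac{\hat p^{\,2}}{2m}-\mu\Big)\psi(x)
+\frac{1}{2}\int dx\,dx'\,|\psi(x)|^2\,U(x,x')\,|\psi(x')|^2 ,
\]
\[
Z[J,I]=\int\mathcal{D}[\psi^*,\psi]\;e^{-S}
\exp\!\Big[\int d\xi\,J(\xi)\,\psi(\xi)
+\frac{1}{2}\int d\xi\,d\xi'\,\psi(\xi)\,I(\xi,\xi')\,\psi(\xi')\Big] ,
\]

with U(x, x′) ≡ U(r − r′)δ(τ − τ′) and ψ(ξ) running over both vector components, so that the J- and I-terms act as one- and two-point sources, respectively.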
B. Effective Action
Let us perform a Legendre transformation from −ln Z_JI into the effective action [8,[21][22][23][24]] Γ_JI ≡ −ln Z_JI − ∫dξ Ψ_JI(ξ)J(ξ) − ⋯, which is a functional of (Ψ_JI, G_JI). Its first derivatives with respect to Ψ_JI and G_JI can be calculated by considering their explicit dependences only; the implicit dependences through (J, I) cancel out because of Eq. (6). Thus, we obtain the corresponding derivative formulas, where we have incorporated the symmetry G_JI(ξ′, ξ) = G_JI(ξ, ξ′) in the second differentiation. Next, we introduce the functionals which determine (Ψ, G) in equilibrium. Indeed, Γ is connected with the grand potential Ω in equilibrium by Γ = βΩ.
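In the standard two-particle-irreducible setting that this paragraph describes, the equilibrium (Ψ, G) at vanishing sources follow from the stationarity of the effective action; a minimal sketch of these conditions, assuming the usual conventions, is:

\[
\frac{\delta\Gamma[\Psi,G]}{\delta\Psi(\xi)}\bigg|_{J=I=0}=0,
\qquad
\frac{\delta\Gamma[\Psi,G]}{\delta G(\xi,\xi')}\bigg|_{J=I=0}=0 .
\]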
Here G in Γ depends on Ψ, unlike the case of Γ_JI, where G_JI is independent of Ψ_JI. On the other hand, it also follows from Eq. (10) that G in Γ can be regarded as independent of Ψ up to linear order. Put another way, the total derivative δ in Eq. (10) can be replaced by the partial derivative, which we will denote by ∂.
, where χ is a constant. Thus δΓ J /δχ = 0 holds, which can be transformed by using Eq.
Substituting Eq. (12) and setting the coefficients of each order n equal to zero, we obtain a hierarchy of identities. Note that differentiation of Eq. (17) with respect to Ψ(ξ_{n+1}) yields the (n+1)th identity by using Eq. (13). The case n = 1 is expressible, by substituting Eq. (14) and adopting the vector-matrix notation of Eqs. (4) and (15), in a form which extends the Hugenholtz-Pines relation [1] to inhomogeneous systems. Next, we set n = 2 in Eq. (17), which connects the anomalous self-energy Σ_jj(x, x′) with the three-point vertex.
The n = 1 identity (18) has been presented as the key result of Goldstone's theorem I [7,8,22,24]. Higher-order identities, however, have turned out to be equally important. Among them, the n = 2 identity was obtained by Gavoret and Nozières; see the second equality of their Eq. (5.4). Later, it was used by Nepomnyashchiȋ and Nepomnyashchiȋ to show that the anomalous self-energy vanishes in the low energy-momentum limit [16,17]. Castellani et al. [5] derived and considered the identities with n ≤ 3 in their renormalization-group study at T = 0. The n = 2 identity (19) will play a crucial role in the derivation of the two-particle Green's function below.
D. Luttinger-Ward Functional
Following Luttinger and Ward [25], we formally write Γ in terms of another, unknown functional Φ [9,11], where we have used Eq. (10b) to omit the implicit dependences through G in the differentiation; see also the comment in the paragraph below Eq. (10) concerning the use of ∂ instead of δ. The right-hand side of Eq. (21) should be identical to the left-hand side of Eq. (18). Thus, we obtain the corresponding relation, where we have used Eqs. (10a) and (15). These are the two basic relations concerning Φ.
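The Luttinger-Ward decomposition referred to here is not shown explicitly in the extracted text; a sketch of its generic form (signs and factors of 1/2 depend on the Nambu-space conventions, so this should be read as illustrative rather than as the paper's exact expression) is:

\[
\Gamma[\Psi,G]=S[\Psi]+\frac{1}{2}\,\mathrm{Tr}\big[\ln\hat G^{-1}+\hat G_0^{-1}\hat G-\hat 1\big]+\Phi[G],
\qquad
\hat\Sigma(\xi,\xi')\propto\frac{\partial\Phi}{\partial G(\xi',\xi)} ,
\]

so that stationarity with respect to G reproduces the Dyson-Beliaev equation with the self-energy generated by Φ.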
III. TWO-PARTICLE GREEN'S FUNCTIONS
We will derive expressions of two-particle Green's functions based on the Dyson-Beliaev equation (15) and Hugenholtz-Pines relation (18).
On the other hand, one can show using Eqs. (5) and (6) that Eq. (45) can be written alternatively in terms of the variations of Ψ_I and G_I with respect to I. Let us express δΨ(ξ₁) = ⟨ξ₁|δΨ⟩ in Eq. (46a), substitute Eq. (41a) with Eq. (24), and perform the differentiation. Noting that Γ^(3) and G are symmetric with respect to their arguments, we obtain Eq. (47a). We also write δG(ξ₁, ξ₂) = ⟨ξ₁, ξ₂|δG⟩ in Eq. (46b), substitute Eq. (41b), perform the differentiation, and use Eq. (47a). The procedure yields Eq. (47b), where Γ^(4), Γ^(3)T, and Γ^(3) are given in Eqs. (35). The functions G^(3) and G^(4) in Eq. (47) are both connected, as they should be, and Eq. (47b) tells us clearly that G^(4) shares poles with G, in agreement with the result of Gavoret and Nozières [12]. They should be symmetric with respect to any permutation of their arguments in the exact treatment; the apparent asymmetry in Eq. (47) originates from that of the irreducible vertex Γ^(4i) defined by Eq. (28a), with which we have constructed Γ^(4). It should also be noted that both G^(3) and G^(4) will acquire asymmetry in practical studies using some approximate Φ in Eq. (28a).
IV. SELF-ENERGIES AND CONDENSATE WAVE FUNCTION
In this section, we derive (i) expressions of the self-energies in the Dyson-Beliaev equation and (ii) the equation for the condensate wave function, i.e., the generalized Gross-Pitaevskiȋ equation, both in terms of (G, Γ^(3), Γ^(4)). Subsequently, we will see that conservation laws are satisfied by the Dyson-Beliaev and Gross-Pitaevskiȋ equations.
A. Expressions of self-energies
The Heisenberg equation of motion for the field operator ψ̂_j(x) ≡ e^{τĤ} ψ̂_j(r) e^{−τĤ} corresponding to Eq. (1) is given in Eq. (48) [2]. Taking its thermodynamic average yields Eq. (49), where G^(3) is defined by Eq. (44). One can also show based on Eq. (48) that G ≡ G^(2) obeys the equation of motion (50) [2]. Equation (50) should be identical to Eq. (15), which can be written as (Ĝ₀⁻¹ − Σ̂)Ĝ = 1̂ in terms of Ĝ₀⁻¹ in Eq. (16). Hence, we obtain Eq. (51) for the self-energy. The terms in the square brackets of Eq. (51) can be transformed by using Eq. (45) into Eq. (52). We use Eq. (52) in Eq. (51), substitute Eq. (47), perform the differentiation, and symmetrize the expression so that Σ(ξ₁, ξ′₁) = Σ(ξ′₁, ξ₁) is manifest. We thereby obtain the expression (53) for Σ(ξ, ξ′), in which integration over repeated arguments is implied and (ξ ↔ ξ′) denotes the terms obtained from the preceding three terms in the curly brackets by exchanging ξ and ξ′. The first two terms on the right-hand side are the Hartree and Fock terms, expressible diagrammatically as Fig. 1(a)-(d), whereas the third represents correlation effects, given diagrammatically by Fig. 1(e)-(h).
It should be noted that there is arbitrariness in expressing the correlation term of Eq. (53) in terms of Γ^(4) and Γ^(3), which are symmetric in the exact theory but may acquire asymmetry in approximate treatments. We have removed this arbitrariness here so that the two Green's functions entering and leaving x₁ of the bare interaction vertex U(x, x₁) in Eq. (53) are linked with the latter two arguments of Γ^(4), i.e., (ξ₃, ξ′₃). The advantage of this choice is that the density fluctuation mode is naturally incorporated in Γ^(4) even in approximate treatments.
Using Eq. (19) and following the argument of Nepomnyashchiȋ and Nepomnyashchiȋ [16,17], one can confirm that diagrams (g) and (h) in Fig. 1 make the anomalous self-energy vanish in the low energy-momentum limit for homogeneous systems. Thus, the Nepomnyashchiȋ identity is naturally satisfied in our formulation, removing the infrared divergence and thereby making practical calculations possible.
B. Equation for the condensate wave function
Let us express the three-point function G^(3) in Eq. (49a) in terms of (G^(3), G, Ψ) by using Eq. (45a), and subsequently substitute Eq. (47a). We then obtain Eq. (54), in which integrations over (x₁, ξ₂, ξ′₂, ξ₃, ξ′₃) are implied. Equation (54) generalizes the Gross-Pitaevskiȋ equation [28,29] so as to incorporate the quasiparticle contribution and correlation effects in η_j(x). It is equivalent to Eq. (18), i.e., the generalized Hugenholtz-Pines relation, in the exact theory; however, the two will differ in approximate treatments. We prefer Eq. (54) to Eq. (18) because the conservation laws are then satisfied, as seen below. Adopting Eq. (54), we should determine the chemical potential so as to reproduce a branch of gapless excitations in the single-particle channel.
C. Conservation Laws
We follow the argument of Kadanoff and Baym [2,3] to confirm that the number-, momentum-, and energy-conservation laws are satisfied in our formulation.
B. Equation for Γ (4i)
Equations (56)-(59) are formally exact, but they still include the irreducible four-point vertices Γ^(4i) as unknown functions. Hence, to perform practical microscopic studies, it is necessary to supplement them with equations that determine Γ^(4i). Incidentally, in the following paper [30] we explore the alternative possibility of constructing phenomenological parameters in terms of Γ^(4i) to describe low-energy properties, in the spirit of the Landau theory of Fermi liquids [31][32][33][34].
To derive the equations for Γ^(4i), we approximate the functional Φ in the conserving-gapless form satisfying Eq. (22) [9,11]. To be specific, our Φ is given in terms of an unknown effective two-body potential Ũ(x₁, x₂) = Ũ(x₂, x₁) by the expression of ref. [11], where ρ and ρ̃_jj′ are defined accordingly. The irreducible vertices Γ^(4i) are obtained from this functional by Eq. (28a); its basic finite elements are given explicitly, and the other finite elements can be found easily by using the symmetries of Γ^(4i). The corresponding Γ^(3i) is obtained by Eq. (28b), whose finite elements follow in the same manner; Eqs. (62a) and (62b) thus yield an identical expression, as they should. We determine the unknown function Ũ(x₁, x′₁) so as to satisfy Eq. (59). Noting that Σ₂₂(x, x′) = Σ*₁₁(x, x′) holds, we realize that the number of unknown variables, i.e., Ũ(x, x′), equals the number of constraints to be satisfied, i.e., Eq. (59). In the weak-coupling case, in particular, we can impose the condition that Ũ(x, x′) approaches the bare interaction potential U(x, x′) in the high energy-momentum limit.
VI. EXTENSION TO NONEQUILIBRIUM SYSTEMS
The formulation of Sects. II-IV can be extended to nonequilibrium systems by (i) performing the inverse Wick rotation τ = it/ℏ and (ii) changing the Matsubara contour τ ∈ [0, β] into the round-trip Keldysh contour C that extends over t ∈ (−∞, ∞) [35,36]. We sketch this extension by (a) modifying the definitions of the functions and (b) transforming every integral on C into one over t ∈ (−∞, ∞) in the second half.
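A minimal sketch of the contour replacement described above (standard Keldysh conventions, assumed rather than quoted from the paper):

\[
\tau=\frac{it}{\hbar},\qquad
\int_0^{\beta}d\tau\;\longrightarrow\;\int_C dt,\qquad
C:\;-\infty\to+\infty\to-\infty ,
\]

where every field acquires a branch index specifying whether it lies on the forward or backward part of C, and contour integrals are subsequently unfolded into ordinary integrals over t ∈ (−∞, ∞).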
"year": 2021,
"sha1": "0f209ddf9254359f731d6bbdfb6cfb0cf2dc6063",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2101.02389",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0f209ddf9254359f731d6bbdfb6cfb0cf2dc6063",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Independent evaluation of a FOXM1-based quantitative malignancy diagnostic system (qMIDS) on head and neck squamous cell carcinomas
The forkhead box M1 (FOXM1) transcription factor gene has been implicated in almost all human cancer types. It would be an ideal biomarker for cancer detection but, to date, its translation into a cancer diagnostic tool has yet to materialise. The quantitative Malignancy Index Diagnostic System (qMIDS) was the first FOXM1 oncogene-based diagnostic test developed for quantifying squamous cell carcinoma aggressiveness. The test was originally validated using head and neck squamous cell carcinomas (HNSCC) from European patients. The HNSCC gene expression signature across geographical and ethnic differences is unknown. This is the first study to evaluate the FOXM1-based qMIDS test using HNSCC specimens donated by ethnic Chinese patients. We tested 50 Chinese HNSCC patients and 18 healthy subjects, who donated 68 tissue samples in total. qMIDS scores from the Chinese cohort were compared with the European datasets (n = 228). The median ± SD scores for the Chinese cohort were 1.13 ± 0.66, 4.02 ± 1.66 and 5.83 ± 3.13 in healthy oral tissues, adjacent tumour margin and HNSCC core tissue, respectively. Diagnostic test efficiency between the Chinese and European datasets was almost identical. Consistent with the previous European data, qMIDS scores for HNSCC samples were not influenced by gender or age. The degree of HNSCC differentiation, clinical stage and lymphatic metastasis status were found to correlate with qMIDS scores. This study provided the first evidence that the pathophysiology of HNSCC is molecularly indistinguishable between Chinese and European specimens. The qMIDS test robustly quantifies a universal FOXM1-driven oncogenic program, at least in HNSCC, which transcends ethnicity, age, gender and geographic origins.
from age 35 to 85 by more than 65-fold, despite only a moderate (11-fold) increase in incidence rate within this age range [4], emphasising an urgent need to identify and treat patients as early as possible. When comparing urban to rural areas in China, the urban incidence rate was 40% higher, but no difference was found in mortality rates between the two areas [4], suggesting there may be a systemic problem in current diagnostic and/or treatment interventions that leads to no improvement in survival rates despite higher detection rates in the urban population. This is likely due to the inability to identify high-risk patients at early stages, when treatment is most effective. The 5-yr survival for early localised cancers can exceed 80% but falls to less than 20% in late-stage tumours, especially when regional lymph nodes are involved [2]. Such data are neither surprising nor exclusive to China. A worldwide consensus opinion appears to be that tumour heterogeneity hampers accurate diagnosis/prognostication, which results in treatment insufficiency and in turn leads to high rates of tumour recurrence and no improvement in survival rates over the last three decades [7][8][9]. Early treatment can significantly save long-term costs and improve survival by avoiding expensive, invasive head and neck surgery, which often leads to debilitating consequences that not only affect feeding, speech and vision, but may also destroy the face, disrupting one's personal identity. It is well documented that improved diagnostic and prognostic accuracy to inform the most appropriate intervention could significantly improve patient outcome, reduce mortality and alleviate healthcare costs [10].
In 2013, we developed a FOXM1-oncogene-associated multi-biomarker 'quantitative Malignancy Index Diagnostic System' (qMIDS) [11] for quantifying the aggressiveness of squamous cell carcinoma (SCC). The FOXM1 transcription factor has been shown to be amongst the top upregulated oncogenes across 39 cancer types and is a major predictor of poor cancer prognosis [12]. The qMIDS assay therefore represented the first FOXM1-based cancer diagnostic test, previously validated on patients living in the UK and Norway [11]. Given that cancer is often heterogeneous, one marker alone would not be reliable or accurate for diagnosis. Hence, qMIDS was designed around FOXM1 plus 13 FOXM1-associated genes (HOXA7, AURKA, NEK2, CCNB1, CEP55, CENPA, DNMT3B, DNMT1, HELLS, MAPK8, BMI1, ITGB1 and IVL) as a panel of 14 biomarkers (plus 2 reference genes) for the quantitative diagnosis of malignancy [11]. We had previously shown that the qMIDS test was able to segregate quantitatively between normal tissue and malignancy whilst being unaffected by a non-malignant inflammatory condition (lichen planus). The present study was carried out to independently compare and evaluate the use of the qMIDS assay for diagnosing HNSCC in non-European patients, for which we carried out a study in China involving ethnic Chinese participants. The qMIDS assay was also independently set up and performed in China to rule out bias and inherent technical factors.
The Chinese normal oral mucosa samples showed slightly lower qMIDS scores compared to the normal samples from Europe. Both the Chinese and European samples showed highly significant segregation of qMIDS scores between normal and tumour samples. Unlike the European cohort, the Chinese adjacent tumour margin samples showed significantly (2.4-fold) higher scores compared to the normal samples. Based on the previous European study [11], the optimum cut-off score was 4.0; this cut-off was therefore used in the current study to calculate and compare the diagnostic test efficiency of the qMIDS assay in the two cohorts (Figure 2A). The normal samples were grouped together with the tumour margin samples as the disease-free group for the diagnostic test efficiency calculation. Overall, the diagnostic efficiency data of the Chinese and European cohorts were highly comparable (Figure 2B).
Further analysis of clinicopathological features within the Chinese HNSCC samples (n = 44) found no differences by gender or age, in agreement with the European data. Statistically significant differences were found when HNSCC samples were segregated by differentiation status, tumour staging and lymphatic metastasis (Table 1). These findings were similar to the previous European data, whereby qMIDS scores were inversely correlated with the differentiation status of HNSCC and were not significantly affected by gender and age [11]. We had previously established that HPV status did not affect qMIDS scores in either HNSCC or vulva SCC samples (data not shown), hence it was not further investigated. As habits such as smoking and drinking are well-established risk factors for HNSCC, and due to the scarcity of patient records on risk factors, we were unable to analyse the correlation between habits and qMIDS scores.

Discussion

For many cancer types, especially HNSCC, tumour heterogeneity has been a key problem that has eluded clinicians, whereby histopathological findings could not provide a quantitative and objective correlation with tumour aggressiveness [8,16]. To resolve this issue, we previously developed a molecular method, the qMIDS assay [11], by exploiting the aberrant expression of the key oncogene FOXM1, shown to be amongst the top upregulated oncogenes across 39 cancer types and a major predictor of poor cancer prognosis [12]. We and others have previously confirmed that FOXM1 is one of the top oncogenes in HNSCC [11,[17][18][19][20][21][22][23][24]]. We have previously published a bioinformatics meta-analysis across over 40 different human cancer types available in the Oncomine and NCBI Gene Expression Omnibus (GEO) databases, showing that FOXM1 is one of the top oncogenes in HNSCC [18,21].
Due to the heterogeneity found in many cancer types including HNSCC, using a single gene as a biomarker is unlikely to be accurate for quantifying tumour aggressiveness. To improve diagnostic accuracy and specificity, the qMIDS assay was designed to quantify the mRNA levels of 14 FOXM1-associated genes (HOXA7, AURKA, NEK2, FOXM1B, CCNB1, CEP55, CENPA, DNMT3B, DNMT1, HELLS, MAPK8, BMI1, ITGB1 and IVL), which are involved in the regulation of cell proliferation [25], differentiation [17], ageing [26], genomic instability [16,18,24,27,28], epigenetics [18,20] and stem cell reprogramming [17,[29][30][31]], as a collective basis for measuring cancer aggressiveness via an algorithm that computes a malignancy index [11]. The qMIDS test was originally validated in the UK on 256 Caucasian (from the UK and Norway) and 36 South Asian (resident in the UK) patients. The assay was found to be a practical, sensitive, objective and quantitative method not only for detecting HNSCC, but also for vulva and skin squamous cell carcinomas [11]. We had also previously shown, in the Norwegian retrospective study with 19 years of HNSCC survival data, that the qMIDS score was significantly correlated with tumour aggressiveness [11], thereby providing a method for quantitative diagnosis and objective stratification of cancer aggressiveness.
Previous studies have reported that geographical, lifestyle and ethnic differences can impact genetic/molecular pathways in head and neck squamous cancers [32][33][34][35][36][37]. The majority of these studies investigated genetic DNA polymorphisms, but none of them, to our knowledge, compared gene expression levels in HNSCC.

Figure 1: Comparison of qMIDS scores between Chinese and European head and neck tissue samples. Data were plotted as dot-plots with box-and-whisker overlays (median and 25-75% percentiles). An optimum cut-off at 4.0 was found previously based on the European samples [11]. Student's t-tests were performed between sample groups and the corresponding P values are indicated within the figure.

Figure 2: qMIDS diagnostic test efficiency comparison between Chinese and European cohorts. (A) Cohort analysis for Chinese (n = 68) and European (n = 228, consisting of UK and Norwegian participants; data extracted from the previous publication [11]) samples. Calculations were based on a cut-off score of 4.0, and the statistical results are compared in panel (B).

We present the first study comparing different ethnic groups and gene expression levels in HNSCC using a FOXM1-based cancer diagnostic system [11]. Although the 14 genes used in the qMIDS assay are fundamental genes regulating squamous cell carcinoma, it was not clear whether environmental factors (food, cultural and geographical variations, etc.) coupled with differences in ethnicity might produce molecular differences in HNSCC that render the qMIDS test invalid. Given that the HNSCC patients tested previously were mainly ethnic Caucasians (~86%) and South Asians (~14%), with all patient samples obtained either in the UK or Norway, we aimed to further validate the qMIDS test on an entirely distinct ethnicity located on another continent, with the assay independently set up and run in a different laboratory using different instruments (but the same reagents). For this purpose, we recruited a total of 68 ethnic Chinese participants, of whom 50 were HNSCC patients and 18 were healthy individuals. All participants in this study were residents of Guizhou Province in China. The results obtained from the Chinese specimens were highly comparable to the previously published European (UK and Norway) cohort [11]. Using the previously determined optimum cut-off score of 4.0 [11], the overall diagnostic test efficiency was found to be almost identical between the Chinese and European datasets.
We have previously shown that the qMIDS assay had a detection rate of 90-94% and a false positive rate of 1.3-3.2% in European patients [11]. These data are consistent with the current study on Chinese patients. We had previously demonstrated that qMIDS was able to differentiate benign (low-risk) lesions such as oral lichen planus or fibro-epithelial polyps from premalignant (high-risk) oral dysplastic samples [11]; owing to the scarcity of Chinese patients with premalignant oral lesions (probably due to a lack of self-awareness of oral diseases and the generally lower socioeconomic status of these patients), we unfortunately did not obtain a sufficient number of these lesions for investigation. We are currently investigating the use of qMIDS as a tool for early oral premalignant cancer risk stratification.
In addition to HNSCC diagnosis, we previously demonstrated another clinical utility of qMIDS in tumour margin analysis, whereby a 2D molecular topology with a resolution down to 1 mm could be reconstructed using qMIDS on surgical samples. This was possible because each qMIDS test requires only a minute 1-2 mm tissue sample for analysis [11]. Although we did not carry out a similar tumour margin analysis, the present study found a notable 2.4-fold higher qMIDS score in the Chinese adjacent tumour margin tissues compared to the European ones. This could be due to confounding factors such as errors in the pathological classification of the tissue samples and/or differences in the width of the surgical margins used. Although the difference was statistically significant, the Chinese sample size was small (n = 6), and therefore caution should be exercised in interpreting the adjacent tumour margin group. Given the sensitivity of the qMIDS test, it is not surprising that some of these tumour margin samples contained malignant cells that escaped detection by pathologists. A further study involving a larger sample size with patient follow-up may potentially reveal a relationship between qMIDS-positive tumour margins and tumour recurrence.
Similar to histopathology, qMIDS also involves testing tissue biopsy samples, hence it remains invasive and prone to mis-sampling issues. However, as field change is a common phenomenon in HNSCC [38][39][40] and qMIDS detects molecular changes (mRNA expression) that precede phenotypic changes (protein and structural alterations), the sensitivity of detecting pathological genetic change in a given sample is arguably much higher than that of histopathology, which relies solely on visualising protein and structural changes. Furthermore, a dysplastic phenotype is often missed or misinterpreted when examining histopathological slides because molecular changes indicative of malignant conversion do not necessarily produce clinically or histopathologically detectable changes [38,39]. Hence, given that qMIDS detects molecular changes, it would be more resistant to sampling issues (considering oral field changes) compared to histopathology. Current clinicopathological features are unable to predict tumour aggressiveness [41][42][43]. As a result, current practice is that most patients with oral premalignant disorders (OPMD) are indiscriminately put on time-consuming, costly and stressful surveillance [42,43]. Such a "waiting game" creates unnecessary anxiety and stress for the majority (88%) of low-risk patients whilst delaying and under-treating the minority (12%) of high-risk patients [44]. A systematic review estimated the malignancy conversion rate for OPMD at 12% [44]. Given 135,100 HNSCC cases in China each year [4], and that 70% of HNSCC are preceded by OPMDs [45], the estimated total number of OPMDs would be over 788,000 cases/year. Most patients only return when tumours have grown to advanced stages, when they are difficult to treat or untreatable. Delayed treatment thereby directly causes poor long-term morbidity and survival [7,8,16,42,43]. The current lack of a 'case-finding' diagnostic test results in ineffective patient management and an unnecessary long-term financial burden on both patients and healthcare establishments. With a molecular test such as qMIDS, we have previously shown promising results in that qMIDS was able to detect malignant cells in otherwise clinicopathologically "normal-looking" biopsy tissue [11], and we are therefore currently investigating the clinical use of qMIDS for the identification of premalignant lesions.

(Footnote to Table 1: *mean qMIDS score; **SD = standard deviation; ***P values in bold are highly significant, P < 0.001.)
In summary, this study provided the first evidence that the pathophysiology of HNSCC is molecularly (at the mRNA level) very similar between Chinese and European specimens. Furthermore, it reiterates that the qMIDS assay robustly measures a universal oncogenic program driven by FOXM1, at least in HNSCC, which transcends ethnicity, age, gender and geographic origins. A high-throughput, cost-effective and robust test such as qMIDS may play an important role in the quantitative diagnosis of ambiguous biopsy specimens and/or in providing an objective diagnosis based on a digital molecular profile to avoid misdiagnosis. Given that the majority (88%) of oral lesions are benign [44], identifying the 12% of high-risk potentially malignant oral lesions is notoriously difficult [41][42][43]. A further study testing oral premalignant lesions with qMIDS, with long-term correlation through follow-up, would enable the qMIDS test to be used as an early cancer test.
Patient recruitment and study protocol
All 50 patients with HNSCC admitted from June 2014 to August 2015 were selected; 6 of these patients provided paired adjacent tumour margin and core HNSCC tumour specimens. In addition, 18 healthy individuals (undergoing either wisdom tooth extraction or facial restorative/reconstructive surgery) donated redundant normal oral mucosa tissues for this study. All patients and healthy individuals in this study were ethnically Chinese and natives of Guizhou Province in China. All clinical samples were collected according to protocols approved by the local ethics committee, and informed consent was obtained from all participants. The study was approved by the Institutional Review Board of the Human Ethics Committee of Guizhou Medical University. For each patient, histopathological reports of the tissue samples were obtained from collaborating clinicians. Fresh biopsy tissues were preserved in RNALater (#AM7022, Ambion, Applied Biosystems, Warrington, UK), stored short term at 4°C (within 1 day) before transportation, and subsequently stored at −80°C until use. All tissue samples were digested with nuclease-free proteinase K (Roche, UK) at 55-60°C before mRNA extraction (Dynabeads mRNA Direct kit, Invitrogen, UK) and cDNA synthesis (Transcriptor cDNA Synthesis kit, Roche, UK). All samples were tested blindly to ensure that the qMIDS assays were performed objectively.
The qMIDS assay
The qMIDS assay methodology was described previously [11]. Briefly, the qMIDS assay involves quantification of the mRNA levels of 14 target genes (HOXA7, AURKA, NEK2, FOXM1B, CCNB1, CEP55, CENPA, DNMT3B, DNMT1, HELLS, MAPK8, BMI1, ITGB1 and IVL) and 2 reference genes (YAP1 and POLR2A). We set up and ran the qMIDS assay at our laboratory in Guiyang, School of Stomatology, Guizhou Medical University. In order to obtain data comparable to the previous European data [11], we adhered strictly to the original qMIDS assay protocol for the reverse transcription and quantitative PCR (qPCR) procedures, as described previously [11]. qPCR reactions were set up in 96-well format (see supplementary Figure S1) and run on a Bio-Rad CFX Connect Real-Time System (Bio-Rad Life Science Research and Development Co., Ltd., Shanghai, China). Relative expression data for each target gene against the two reference genes were obtained using the Bio-Rad CFX Manager 3.0 software. Relative expression data were then exported into Microsoft Excel for calculation of the qMIDS score based on the original qMIDS algorithm [11]. Due to the tiny tissue size (1 mm³) used for each qMIDS assay and the direct extraction of mRNA (rather than total RNA), quantification of the mRNA yield was not accurate by either spectrophotometer (e.g., NanoDrop) or fluorescence dye (e.g., PicoGreen). Hence, data quality for each specimen was determined directly by qPCR based on the ability to measure both reference genes (YAP1 and POLR2A). Samples that failed one or both reference genes were omitted from the study.
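The published qMIDS algorithm itself is not reproduced in this text. As an illustration of the kind of computation involved, the following sketch derives relative expression by the standard 2^(−ΔCt) method against the mean Ct of the two reference genes (YAP1, POLR2A) and aggregates the 14 target genes into a single index; the weighting and scaling in the real algorithm may differ, and all Ct values below are hypothetical:

import numpy as np

targets = ["HOXA7", "AURKA", "NEK2", "FOXM1B", "CCNB1", "CEP55", "CENPA",
           "DNMT3B", "DNMT1", "HELLS", "MAPK8", "BMI1", "ITGB1", "IVL"]
refs = ["YAP1", "POLR2A"]

def relative_expression(ct):
    # Standard 2^(-dCt): normalise each target Ct against the mean Ct
    # of the two reference genes measured in the same specimen.
    ref_ct = np.mean([ct[r] for r in refs])
    return {g: 2.0 ** (ref_ct - ct[g]) for g in targets}

def malignancy_index(ct):
    # Illustrative aggregation only, NOT the published qMIDS algorithm:
    # mean log2 relative expression across the 14 target genes.
    rel = relative_expression(ct)
    return float(np.mean([np.log2(rel[g]) for g in targets]))

# Hypothetical Ct values for one specimen:
ct = {g: 28.0 for g in targets}
ct.update({"FOXM1B": 24.5, "AURKA": 25.0, "YAP1": 26.0, "POLR2A": 27.0})
print(f"index = {malignancy_index(ct):.2f}")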
Statistical analysis
For comparison, qMIDS scores from the European study (data extracted from [11]) and the current Chinese data were analysed in R (version 2.13.1; The R Foundation for Statistical Computing) and plotted using the Beeswarm Boxplot package [13]. Diagnostic test performance between the European and Chinese data was compared at a qMIDS cut-off of 4.0, which was previously found to give the lowest false-positive rate and the highest detection rate/sensitivity [11]. Diagnostic test efficiency comparisons were calculated using a Diagnostic Test Calculator freeware [14]. The qMIDS diagnostic assay efficiency tests were performed according to the protocol recommended by the STARD Initiative [15]. The qMIDS scores were also examined in relation to gender, age, differentiation status, tumour staging and lymphatic metastasis status, using the statistical package SPSS version 14.0. Kruskal-Wallis analysis was used to test differences in qMIDS scores among the three groups (normal mucosa, tumour margin and core HNSCC). The qMIDS scores of HNSCC samples were further examined for relationships with the above-mentioned clinical features using Student's t-test. P < 0.05 was considered statistically significant.
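For illustration, the sensitivity, specificity and overall test efficiency at the 4.0 cut-off can be computed as below; the scores and labels here are hypothetical, not the study data:

import numpy as np

def diagnostic_performance(scores, is_tumour, cutoff=4.0):
    # Classify each specimen as test-positive when its qMIDS score
    # exceeds the cutoff, then tabulate the 2x2 confusion counts.
    pred = np.asarray(scores) > cutoff
    truth = np.asarray(is_tumour, dtype=bool)
    tp = int(np.sum(pred & truth)); fn = int(np.sum(~pred & truth))
    fp = int(np.sum(pred & ~truth)); tn = int(np.sum(~pred & ~truth))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "efficiency": (tp + tn) / (tp + tn + fp + fn),  # overall accuracy
    }

# Hypothetical scores; healthy/margin specimens grouped as disease-free.
scores    = [1.1, 0.9, 2.0, 3.8, 4.5, 5.9, 6.3, 8.1]
is_tumour = [0,   0,   0,   0,   1,   1,   1,   1  ]
print(diagnostic_performance(scores, is_tumour))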
"year": 2016,
"sha1": "12e0374defc5399a62b04d1ce4d20b4dfa8be824",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=10512&path[]=33214",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6e4ba9e24e1d143b36a1f3c8eaea6ad32ffe730e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Developing Academic Motivation Scale for Learning Information Technology (AMSLIT): A Study of Validity and Reliability
This study aimed to develop an Academic Motivation Scale for Learning Information Technology for university students. For this purpose, 120 randomly selected university students studying in different classes and faculties at KSU were invited to the study during the 2016-2017 academic year. To define the scale indicators, students were asked to answer the question "What are your motivations for learning information technologies?". Four different academicians examined the answers in accordance with the self-determination theory and created the item pool. After expert examinations and pilot studies, the scale was designed as a 6-point Likert-type instrument with 18 items. To analyze the construct validity of the scale, 824 randomly selected first-year students at KSU were included in the sample of the research. Among those, 276 students were included in the exploratory factor analysis in the first step, 269 were involved in repeating the first step with a new sample, and 279 participated in the last step, in which the confirmatory factor analysis was carried out. Although the literature suggests three different types of motivation (extrinsic, intrinsic and amotivation), in this study the intrinsic and extrinsic motivation items were found to gather together in a single factor, named "Intrinsic-Occupational Motivation". According to the results, the final form of the scale included 15 items in two sub-dimensions, named "Intrinsic-Occupational Motivation" and "Amotivation". The analyses show that the results derived from the scale have high reliability.
Introduction
Information technologies refer to the technological elements in the management and use of "knowledge", which has gained much more importance in recent years (Turunc, 2006). In other words, "information technology" is a concept expressing the transmission and processing of acquired information as well as its storage and usage (Robson, 1990). Thanks to information technologies, almost every field has undergone a serious restructuring, which has directly affected many areas, including education (Barutcugil, 2002).
Although the use of computers in education in Turkey started in 1980 (Matargem, 1991), it was through the projects run by the Ministry of National Education since 1987 that an elective "computer" course took its place in the curriculum. The ministry, which has made serious project breakthroughs over the years, is now actively involved in computer training at all levels. "Computer" training at the university level, on the other hand, has continued to the present day under the leadership of the Data Processing Centers (BIM) established in 1980 (Ak, 2009). The course, which entered the university curriculum under the name "Computer", is now named "Basic Information Technologies" and is compulsory in the first year. The main objective of this course is "to know the advantages and disadvantages of the use of technology in everyday life while acquiring basic computer literacy skills. Within the framework of individual and independent learning, the course also aims to provide students with an effective learning environment where they can access remote information." In the light of this information, it might be claimed that the teaching of information technology is a critical issue that should be emphasized in the field of education (Seferoglu & Akbıyık, 2005). Another topic that needs to be addressed in education is student motivation. Motivation concerns the learning process rather than its input and output, and it is a driving factor that influences the learning behaviour of the student (Brophy, 1987). Deci & Ryan (1980) examined the sources of motivation for individuals in light of their psychological needs. They put forward the well-known "self-determination theory" in order to understand the motivation of individuals (Deci & Ryan, 1980). In this theory, a distinction is made between types of motivation. The main reason for such a distinction is that individuals have different reasons and objectives for taking action (Ryan & Deci, 2000). These reasons and objectives are referred to as intrinsic motivation, extrinsic motivation and amotivation.
Intrinsic motivation is the willingness of a person to carry out an activity without an external reward (Deci, 1975). Individuals with intrinsic motivation have their own desires and expectations, and they are able to see their work through to the end (Akpınar et al., 2013). In education, these characteristics are critical since they reinforce concepts such as creativity (Ryan & Deci, 2000).
Extrinsic motivation depends on external factors (punishment, reward, etc.) (Dede & Argün, 2004) and can easily be observed since it depends on an external source (Kazusu, 1999). Unlike intrinsic motivation, extrinsic motivation may be viewed as an inadequate form of motivation, and students may engage in activities as a result of negative motives (e.g., punishment) rather than because they are intrinsically motivated (Ryan & Deci, 2000). Another difference between these two types of motivation is the effect of individual control in intrinsic motivation, and of environmental factors in extrinsic motivation (Yazıcı, 2009). Additionally, it is necessary to see that intrinsic motivation is based on individual factors, while extrinsic motivation operates at the level of the organization. As distinguished in motivational theories, intrinsic motivation in particular arises when a person is not motivated by any prize other than the work itself, whereas extrinsic motivation appears in activities driven by external rewards (status, appreciation, promotion, etc.) (Deci, 1971).
The difference between intrinsic and extrinsic motivation is similar to the difference between work content and work context. Factors such as achievement and responsibility are the motivational factors most associated with the work itself and the performance of the actor, whereas factors such as payment and promotion are related to the context or environment of the work. Therefore, intrinsic motivators are internal rewards that one feels when performing a task; there is a direct and rapid relationship between the work and the reward, meaning that a person in this situation is self-motivated to accomplish the task. Extrinsic motivators, on the other hand, are external rewards that lie outside the nature of the work and are not directly satisfying when performing the task (Newstrom & Davis, 2002). Amotivation, in turn, can be described as a reluctance to act, arising from occasional negative experiences (Deci & Ryan, 1985). It can lead to serious distress in education, especially with respect to academic achievement.
When the relevant literature on students' motivation towards academic issues is analyzed, it is understood that "academic motivation" has commonly been used to refer to motivation for academic activities. Studies of academic motivation have found that motivation has a positive influence on individuals' academic achievement (Fortier, Vallerand, & Guay, 1993; Singh, Granville, & Dika, 2002; Wentzel & Wigfield, 1998). In addition, scales examining various motivation styles have been developed to measure academic motivation quantitatively. For instance, Vallerand (1992) developed a version of the "Academic Motivation Scale" within the framework of the self-determination theory mentioned above.
When other scales for academic motivation are examined, it is seen that Gottfried (1986) developed the "Intrinsic Academic Motivation Scale", evaluating the internal dimension of motivation; Pelletier et al. (1995) suggested the "Sports Motivation Scale" within the framework of self-determination theory; Bozanoğlu (2004) put forward an "Academic Motivation Scale"; and Tuan, Chin, & Shieh (2005) developed the "Motivation Scale for Science Learning". In another study, Aydın, Yerdelen, Yalmanc, & Göksu (2014) offered an "Academic Motivation Scale for Learning Biology". They developed the scale for high school students, with 472 participants in Kars city centre constituting the study population. The scale has a four-factor structure, with factors named Intrinsic Motivation, Amotivation, Extrinsic Motivation-Occupation, and Extrinsic Motivation-Social.
Building on the aforementioned studies, a literature review did not reveal any academic scale that measures students' motivation for learning information technologies. In addition, the scales developed in other disciplines mostly emphasize motivation processes (Aydın et al., 2014). In this respect, this study aims to develop an "Academic Motivation Scale for Information Technology Learning (BÖYAM)" for university students in accordance with the self-determination theory. In line with this purpose, as the first scale development work at the intersection of information technology and motivation, this study is thought to be of great importance in providing fundamental information to those who develop university education programs related to information technology. As well as bridging the gap in the relevant literature, it will also shed light on scale development studies related to academic achievement and motivation in other disciplines at the tertiary level.
Item Writing
In order to construct items, 120 randomly selected university students studying at different faculties and classes at KSU in the 2016-2017 academic year were asked to respond to the question "What is your motivation for learning information technology?". Sixty minutes were allocated to the volunteer students to answer the question, and their answers were examined in accordance with the self-determination theory (amotivation/intrinsic motivation/extrinsic motivation) as specified by Deci & Ryan (1985, 1991). (What we mean by examination here is that the student responses were analyzed according to the content analysis principles of qualitative research, aiming to reveal the factors motivating students to learn information technologies.) According to this theory, "motivation" is the reason for behaviour; in other words, if an individual has a motivation to learn a subject, we can express it on the scale as "for ....." (Vallerand et al., 1992). For this reason, after a detailed examination of the data collected from the 120 university students, three academicians in the field of educational sciences constructed an item pool consisting of 21 questions. Since it is practical for data analysis and easy for students to answer, the questions were designed in a 6-point Likert format ranging from (1) strongly disagree to (6) strongly agree (Büyüköztürk, Çakmak, Akgün, Karadeniz, & Demirel, 2014). After analysis of the questionnaire by a Turkish language expert, one item was eliminated and the scale was left with 20 items. Eighty university students were randomly selected for the pilot study, and the 20-item scale was administered to them. During the administration, a suitable environment in which students would feel comfortable was provided, and students participated voluntarily. After all students completed the scale, each question was discussed one by one in the classroom. Volunteer students reported their views on what the items might mean, and the experts evaluated these opinions. Scale items were discussed in an environment of mutual interaction. Two items with duplicated and/or ambiguous meaning were eliminated from the scale. Thus, the scale had 18 items in its final form before further analysis.
Participants
The study group was randomly selected among university students taking information technology courses. Accordingly, first-year students at KSU constituted the population of the study. From this population, a total of 824 students were randomly selected: 276 for the first stage, 269 for the second stage, and 279 for the third stage. The participating students had taken information technology or a related course during their secondary and high school education and were attending the Information Technology course at the university level.
Data Analysis
In this study, the hybrid approach was followed (Matsunaga, 2010). According to this approach, principal component analysis (PCA) is applied first in order to reduce the number of items. In the second step, with a different sample, an exploratory factor analysis (EFA) is performed to confirm the factor structure from the first round. Finally, a confirmatory factor analysis (CFA) is applied to a new sample in order to support the factor structure (Aydın et al., 2014). Although this method is used in scale construction studies, some researchers point out that PCA "does not carry the qualities of factor analysis" (Costello & Osborne, 2005; Field, 2005). Thus, when looking for evidence in a second data set to confirm the validity of the results from the first, it is more appropriate to use exploratory factor analysis rather than principal component analysis in the first stage (Aydın et al., 2014). For all these reasons, data were collected from 545 people in the first part (first and second stages) of the study. These data were randomly divided in two, both groups were subjected to exploratory factor analysis, and the similarity of the resulting structures was examined. In the final part of the study, data were collected from another group of 279 people, and the validity of the structures that emerged in the first part was tested.
Exploratory Factor Analysis (First Step)
The data collected from the 276 participants in the first data set for developing the Academic Motivation Scale for Learning Information Technology (AMSLIT) were analyzed with the SPSS 20 statistical software for EFA. The results are given below.
In order to reveal the initial factors, the 18 items of the first version of the scale were analyzed with principal axis factoring, which was preferred as more appropriate for the purpose of the research (Matsunaga, 2010; Warner, 2012) and is the most commonly used method in the social sciences (Warner, 2012). Field (2005) and Thompson (2004) suggest the use of oblique rotation methods for the analysis of interrelated factor structures. For this reason, assuming that the items of the scale were related to each other, the Promax (kappa = 4) rotation method was judged appropriate for this research.
According to the results of the initial analysis, no item loaded on two different factors, so no item was removed from the scale at this step. Subsequently, one item ("Not to be considered as backward/illiterate") was found to have a factor loading lower than 0.30 and was therefore removed from the scale.
Factor analysis was carried out again on the new 17-item form of the scale. The Kaiser-Meyer-Olkin (KMO) value was 0.843, higher than the recommended cut-off of 0.60 (Field, 2005; Pallant, 2001), showing that the data structure was appropriate for factor analysis. Bartlett's sphericity test showed that the chi-square value (χ² = 1325) was statistically significant (p < .001). All these results revealed that the correlation matrix was appropriate; in other words, the variables were sufficiently related to carry out factor analysis (Field, 2005). Consequently, the findings of the exploratory factor analysis were evaluated in the next stage.
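An equivalent analysis can be reproduced outside SPSS, e.g. with Python's factor_analyzer package; note that its 'principal' extraction approximates SPSS principal axis factoring, and the file name and data frame below are assumptions for illustration:

import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_kmo, calculate_bartlett_sphericity)

# df: respondents x 17 scale items, coded 1-6 (hypothetical data file).
df = pd.read_csv("amslit_responses.csv")

chi2, p = calculate_bartlett_sphericity(df)
_, kmo_total = calculate_kmo(df)
print(f"Bartlett chi2 = {chi2:.0f} (p = {p:.4f}), KMO = {kmo_total:.3f}")

# Principal-factor extraction with oblique Promax rotation, two factors.
fa = FactorAnalyzer(n_factors=2, method="principal", rotation="promax")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns,
                        columns=["Intrinsic-Occupational", "Amotivation"])
print(loadings.round(3))
print("Proportion of variance:", fa.get_factor_variance()[1].round(3))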
Applying the rule that eigenvalues (Kaiser criterion) should be higher than 1, and checking consistency with the scree plot, it was seen that the scale consisted of two factors. These factors explained 24.80% and 10.17% of the total variance, respectively; in other words, the two-factor structure rotated with the Promax method explained 34.98% of the total variance. The first of these two factors was named "Intrinsic-Occupational Motivation" and the second "Amotivation". The loadings of all items on these two factors are given in Table 1 (numbers in parentheses represent the factor structure matrix).
As a result, when the above-mentioned item ("Not to be considered as backward/illiterate") was excluded from the analysis and the remaining 17 items were reanalyzed, the item distribution was as follows: 9 items for the Intrinsic-Occupational Motivation factor and 8 items for the Amotivation factor. The factor loadings of these items ranged from .704 to .514 and from .675 to .392, respectively, while the corresponding structure coefficients ranged from .703 to .448 and from .636 to .460, respectively.

Table 1. Factor loadings of the 2-factor AMSLIT items rotated by the promax method (numbers in parentheses are structure-matrix coefficients)

Intrinsic-Occupational Motivation
  2   Because I think it will be useful to me for my profession in the future.            .704 (.703)
  5   Because I am interested in the topics related to information technology.            .658 (.676)
  15  Because I think it will contribute to my education.                                 .644 (.626)
  4   Because it is prerequisite for business life.                                       .559 (.611)
  1   Because it is the necessity of our age.                                             .557 (.559)
  8   Because I want to improve myself in the field of information technology.            .556 (.547)
  16  Because it is related to the profession I will do in the future.                    .533 (.541)
  7   To be able to fulfil citizenship duties with systems such as Bimer / e-government.  .514 (.448)

Amotivation
  11  I keep away from it since it has a negative effect on social life.                  .675 (.636)
  9   I do not want to learn information technology because it hurts my personality.      .630 (.633)
  6   I am not interested in information technology since it leads to addiction.          .619 (.617)
  13  I am against information technology because it isolates people.                     .588 (.556)
  12  I do not think it is beneficial to me.                                              .578 (.546)
  14  I cannot find any reason to learn information technology.                           .489 (.508)
  17  Honestly, I do not know why I learn information technology.                         .392 (.460)

The internal consistency coefficients of the two factors were also examined. For the Intrinsic-Occupational Motivation factor, Cronbach's alpha was .826 (9 items), and for the Amotivation factor it was .780 (8 items). Hence, it can be claimed that the results obtained from this data set were reliable.
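Cronbach's alpha, reported above for each factor, follows directly from the raw item scores; the following self-contained sketch implements the standard formula on hypothetical toy data.

```python
# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the
# total score), computed per factor as in the reliability analysis above.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix for the items of one factor."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=(276, 1))                      # shared factor
toy = latent + rng.normal(scale=0.8, size=(276, 9))     # 9 correlated items, hypothetical
print(round(cronbach_alpha(toy), 3))                    # high alpha, as expected
```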
Exploratory Factor Analysis (Repetition of the Analysis in a Different Sample - Second Step)
The data collected from the 269 individuals in the second data set were analyzed for exploratory factor analysis with the SPSS 20 statistical analysis software. The factor analysis was then repeated with the 17 items, with the number of factors fixed at two.
Similar to the first step, Principal Axis Factoring with Promax (kappa = 4) rotation was used in the second step. The Kaiser-Meyer-Olkin (KMO) value again exceeded the recommended cut-off of 0.60 (Field, 2005; Pallant, 2001); thus it was decided that the data set was appropriate for factor analysis. Bartlett's sphericity test showed that the chi-square value (χ2 = 1300) was statistically significant (p < .001). All these results revealed that the correlation matrix was appropriate and the variables were sufficiently related to carry out factor analysis (Field, 2005). Thus, the results of the exploratory factor analysis with the second data set were suitable for evaluation.
Repeating the exploratory factor analysis supported the initial two-factor structure for the 17 items. The variance explained by the two factors in the second data set was 24.76% and 10.41%, respectively; these values are very close to those of the first data set. The factor loadings of each item are presented in Table 2 (again, the numbers in parentheses represent the factor structure matrix).
Table 2. Factor loadings of the 2-factor AMSLIT rotated by the promax method according to the repeated exploratory factor analysis (numbers in parentheses are structure-matrix coefficients)

Intrinsic-Occupational Motivation
  2   Because I think it will be useful to me for my profession in the future.            .699 (.628)
  5   Because I am interested in the topics related to information technology.            .656 (.610)
  15  Because I think it will contribute to my education.                                 .643 (.624)
  4   Because it is prerequisite for business life.                                       .579 (.574)
  8   Because I want to improve myself in the field of information technology.            .557 (.596)
  1   Because it is the necessity of our age.                                             .557 (.673)
  16  Because it is related to the profession I will do in the future.                    .529 (.544)
  7   To be able to fulfil citizenship duties with systems such as Bimer / e-government.  .511 (.447)

Amotivation
  11  I keep away from it since it has a negative effect on social life.                  .674 (.633)
  6   I am not interested in information technology since it leads to addiction.          .628 (.520)
  9   I do not want to learn information technology because it hurts my personality.      .626 (.616)
  13  I am against information technology because it isolates people.                     .599 (.570)
  12  I do not think it is beneficial to me.                                              .575 (.630)
  14  I cannot find any reason to learn information technology.                           .485 (.542)
  17  Honestly, I do not know why I learn information technology.                         .383 (.452)

According to Table 2, the factor loadings of the items were between .699 and .511 and between .674 and .383, respectively, whereas the structure coefficients showed results similar to the first data set, ranging between .628 and .447 and between .633 and .452, respectively. This indicated that each item explained sufficient variance in its factor. Additionally, the Cronbach's alpha values were .828 for Intrinsic-Occupational Motivation (9 items) and .780 for the Amotivation factor (8 items). Thus, it can be said that the results were also reliable for the second data set. In other words, the results of the first and second stages paralleled each other, and the 17-item, two-factor structure was supported in the second stage.
Confirmatory Factor Analysis (Last Step)
The factor structure that emerged in the first data set was also supported in the second data set and provided reliable results. Thus, it was decided to carry out a confirmatory factor analysis in order to finalize the AMSLIT scale. Accordingly, the 17-item AMSLIT was administered to a new sample for the last time.
As in the first and second data sets, a different student group was randomly selected for the third data set; the scale was administered to 284 individuals, 279 of whom were retained for analysis.
Firstly, the data were examined in terms of the assumptions: univariate normality, multivariate normality and extreme values were checked. The skewness and kurtosis values indicated that the data did not violate the univariate normality assumption (skewness = .190, kurtosis = .148). Then, the data of five individuals that tended to distort multivariate normality were removed from the analysis (reducing the sample from 284 to 279). Confirmatory factor analysis was then performed with the AMOS 25 software.
Two items of the 17-item scale were omitted from the analysis since their factor loadings were lower than the cut-off point of 0.5 (Hair, Black, Babin, Anderson, & Tatham, 2010): "I am not interested in information technology since it leads to addiction" and "To be able to fulfil citizenship duties with systems such as Bimer / e-government". Confirmatory factor analysis of the remaining 15 items showed that the fit indices were consistent with the data (χ2 = 225.07, p < .05; χ2/df = 2.61; CFI = 0.88; GFI = 0.90; NFI = 0.82; RMSEA = 0.076; 90% CI = 0.064, 0.089). In the table below, the loadings of the items are given according to the standardized parameter (λ) estimates. The factor correlation (Phi value) was significant at p < .01, and a negative correlation was found between the two factors. With the two-factor structure of the AMSLIT supported by confirmatory factor analysis, the internal consistency coefficients of the factors were analyzed: Cronbach's alpha was .816 for the Intrinsic-Occupational Motivation factor and .785 for the Amotivation factor. Based on those results, it can be said that the data obtained with the AMSLIT were reliable. Finally, the mean score, standard deviation and reliability coefficient for the sub-dimensions of the AMSLIT were 4.61, 1.46 and .816 for Intrinsic-Occupational Motivation, and 2.89, 1.64 and .758 for Amotivation.
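As a consistency check on the fit statistics above, the RMSEA can be recomputed from the chi-square value; since the source prints the sample size where the degrees of freedom would normally appear, df is inferred here from the reported chi-square/df ratio, an assumption rather than a reported value.

```python
# RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1))), the standard formula.
import math

chi2, ratio, N = 225.07, 2.61, 279
df = round(chi2 / ratio)  # inferred degrees of freedom, approximately 86
rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (N - 1)))
print(df, round(rmsea, 3))  # 86, 0.076 (consistent with the reported 90% CI of 0.064-0.089)
```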
Discussion
In this research, which aimed to develop an Academic Motivation Scale for Learning Information Technologies, an item pool was established by drawing on the self-determination theory of Deci and Ryan (1985; 1991). One item was eliminated from the pool according to the results of the analysis, and the scale was found to consist of 17 items in two sub-dimensions. In other words, the intrinsic motivation and occupational motivation items, which had been considered as two separate factors, regrouped and formed a single factor. In the second stage, this 17-item scale was applied to a different sample and the results were analyzed with exploratory factor analysis; the same factor structure was obtained statistically (2 factors, 17 items). Thus, the dimensions of the scale were named Intrinsic-Occupational Motivation and Amotivation. Although Deci and Ryan (1985; 1991) suggest three different types of motivation (extrinsic, intrinsic, and amotivation), in this study the intrinsic and extrinsic motivation items clustered together and were expressed as a single factor named "Intrinsic-Occupational Motivation".
Considering the fact that university students in Turkey generally attend university in order to acquire a profession, it is quite natural that such a sub-factor related to "occupational motivation" appears in the scale. Similar to the study carried out by Glynn, Taasoobshirazi, and Brickman in 2009, a separate career-related sub-dimension emerged during the scale development process. Likewise, in a study developing an Academic Motivation Scale for Learning Biology, Aydın et al. (2014) found that one of the sub-dimensions of the scale was structured as "Extrinsic Motivation-Occupation"; they also concluded that, in such a case, "considering the conditions in Turkey and the age factor of the participating individuals, separation of occupation-related items as a different sub-dimension for motivation is logical".
The factor structure of the Academic Motivation Scale for Learning Information Technologies was supported in the last step by confirmatory factor analysis, which corroborates the construct validity established in the first two phases. Cronbach's alpha values for reliability were .816 for the Intrinsic-Occupational Motivation factor and .785 for the Amotivation factor. These values indicate that the data were valid and reliable.
Given that information technologies pervade all areas of life today, it might be suggested that studies on information technology education, especially those in the field of motivation, should be encouraged. Thanks to individuals with high academic motivation, Turkey's academic achievements in the IT sector will grow, and class activities that apply information technologies will be more fruitful. | 2018-11-27T05:08:57.421Z | 2018-05-15T00:00:00.000 | {
"year": 2018,
"sha1": "2b2f9e188a7ab8a3b016ef877381687aa5369ee2",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/jel/article/download/73653/41608",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2b2f9e188a7ab8a3b016ef877381687aa5369ee2",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
14337346 | pes2o/s2orc | v3-fos-license | Electronically Tunable Differential Integrator: Linear Voltage Controlled Quadrature Oscillator
A new electronically tunable differential integrator (ETDI) and its extension to a voltage controlled quadrature oscillator (VCQO) design with a linear tuning law are proposed; the active building block is a composite of a current feedback amplifier with the recent multiplication mode current conveyor (MMCC) element. The utilization of two different kinds of active devices to form a composite building block has recently been considered, since it yields a superior functional element suitable for improved-quality circuit design. The integrator time constant (τ) and the oscillation frequency (ω_o) are tunable by the control voltage (V) of the MMCC block. Analysis indicates negligible phase error (θ_e) for the integrator and low active ω_o-sensitivity relative to the device parasitic capacitances. Satisfactory experimental verification of the electronic tunability of some wave-shaping applications of the integrator, and of a double-integrator feedback loop (DIFL) based sinusoid oscillator with a linear f_o variation range of 60 kHz-1.8 MHz at a low THD of 2.1%, is provided by both simulation and hardware tests.
Introduction
Dual-input integrators with electronic tunability are useful functional components for numerous analog signal processing and waveforming applications [1]. A number of single- and dual-input passive-tuned integrators using various active building blocks are available [2,3].
Here we present a simple electronically tunable dual-input integrator (ETDI) topology based on a composite current feedback amplifier (CFA)-multiplication mode current conveyor (MMCC) building block. It has been pointed out in the recent literature [20,21] that the utilization of two different kinds of active elements to form a composite building block yields superior functional results in analog signal processing applications.
Hence, the topic of quadrature sinusoidal oscillator design and implementation with better quality is receiving considerable research interest at present, and a number of such oscillators using various active building blocks have been reported [9-17]. Recent literature shows that such a linear VCQO is a useful functional block with a wide range of applications in emerging fields: in certain telemetry-related areas it can convert a transducer voltage to a proportional frequency, which is then modulated for subsequent processing [22]; it can serve as a quantizer for frequency-to-digital or time-to-digital conversion [23]; and it can act as the spectrum monitor receiver [24] in cognitive radio communication studies.
Here, we present the design and realization of a new linear VCQO using the composite active device; analysis shows that the device imperfections, namely the port tracking errors (|ε| ≪ 1) and the parasitic capacitances (C_z) at the current-source nodes, have negligible effects on the nominal design, whereby the active-sensitivity figures are extremely low. Experimental measurements by simulation and hardware tests on the proposed design indicate satisfactory results, with f_o-tunability in the range 60 kHz-1.8 MHz following the variation of a suitable control voltage (1 ≤ V ≤ 10 V d.c.); a desired band-spread may be selected by appropriate choice of the grounded components [25], without any component-matching constraint, even with nonideal devices.
Analysis
The ETDI topology is shown in Figure 1(a). The nodal relations of the active blocks are i_z = α1·i_x, v_x = β1·v_y, v_o = δ1·v_z, and i_y = 0 for the CFA, and i_z = α2·i_x, v_o = δ2·v_z, v_x = β2·k·V1·V2, and i_y1 = 0 = i_y2 for the MMCC, where k (= 0.1/volt) is the multiplication constant [12] and V is the control voltage. The port transfer ratios (α, β, and δ) are unity for ideal elements; the imperfections may be postulated in terms of small error coefficients (.01 ≤ |ε| ≤ .04) as α ≈ (1 − ε_i), β ≈ (1 − ε_v), and δ ≈ (1 − ε_o). Also, shunt parasitic components (R_z, C_z) appear [26-28] at the z-node of the blocks, with typical values in the ranges 3 pF ≤ C_z ≤ 6 pF and 2 MΩ ≤ R_z ≤ 5 MΩ; since the resistance values used in the design are in the kΩ range, their ratios to R_z are extremely small, and hence the effects of R_z are negligible in the design. It may also be mentioned that a low-value internal parasitic resistance (r_x ≈ 45 Ω) appears in series with the current path at the x-node of the devices; its effect can be minimized by absorbing the r_x value in the load resistors at these nodes. Routine analysis of Figure 1(a) with the differential input V_in yields the open-loop transfer F ≡ V_o/V_in given in (1), where the time constant is τ = RC/(kV) ≈ (1 − ε)·RC/(kV). It may be seen that the effect of the parasitic C_z may be compensated by absorbing its value in C, since both are grounded [25], an attractive feature for microminiaturization. The noninverting input signal is also slightly altered; in practice, however, the deviation is quite negligible, as we observed during the experimental verification.
Hence, it is seen that the effects of the device nonidealities are quite negligible; assuming therefore that ε ≈ 0 and β2 = 1 for simplicity, we get the desired ETDI transfer from (1) as F(s) = V_o/(V2 − V1) ≈ 1/(sτ), with τ = RC/(kV).
Linear VCQO Design
The proposed oscillator is designed as a DIFL using the block diagram of Figure 1(b); neglecting the port errors (ε ≈ 0) in (1), and assuming negligible parasitics and β2 = 1, the loop gain (L ≡ F1·F2) of the DIFL is L(s) = 1/(s²τ1τ2). The parasitic phase components are extremely low, since the lossy elements (R_z, C_z) create pole frequencies far above the usable frequency range; hence the input and output of the DIFL are in phase at unity gain, and closing the loop incites the build-up of sinusoidal oscillations. The corresponding characteristic equation is s²τ1τ2 + 1 = 0, which yields the oscillation frequency, after putting s ≡ jω, as ω_o = 1/√(τ1τ2). With equal-value components and k = 0.1/volt, (7) yields f_o = V/(20πRC); thus linear tunability of f_o is obtained by directly applying the control voltage (V) of the MMCC unit. No additional voltage-to-current conversion circuitry is required, in contrast to previous realizations using OTAs [14,17]. The active ω_o-sensitivities relative to the device error coefficients are calculated to be extremely low.
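A short numerical sketch of this tuning law is given below; the R and C values are hypothetical, chosen only to illustrate a band-spread comparable to the 60 kHz-1.8 MHz range verified experimentally.

```python
# Linear VCQO tuning law f_o = V / (20 * pi * R * C) for equal-value components
# and k = 0.1/volt; V is the d.c. control voltage applied to the MMCC blocks.
import math

R, C = 1.0e3, 100e-12  # hypothetical grounded components: 1 kOhm, 100 pF
for V in (1, 2, 5, 10):
    f_o = V / (20 * math.pi * R * C)
    print(f"V = {V:>2} V  ->  f_o = {f_o / 1e3:8.1f} kHz")
```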
Experimental Results
The proposed ETDI of Figure 1(a) was built with the readily available AD-844/846 type CFA device [28,29]; since an MMCC chip [30] is not yet commercially available, we configured it [12,31] as shown in Figure 1(d), using a four-quadrant multiplier (ICL-8013 or AD-534) coupled with a current feedback amplifier (AD-844 or AD-846). The bandwidth of the CFA device is almost independent of the closed-loop gain at high slew-rate values [28,29], which makes the element particularly advantageous for various signal processing/generation applications. Recently, reports on superior versions of the CFA element (OPA-695) have appeared [32], indicating a very high slew rate (~2.5 kV/µs) and extended bandwidth (~1.4 GHz).
For the linear VCQO design, we implemented the block diagram of Figure 1(b) with the selected passive elements, along with both AD-844 and AD-846 type CFA devices; satisfactory results were verified with both sets of components. A comparative summary of the characteristics of some recent quadrature oscillators is given in Table 2.
Some Discussions
Keeping in view the measured responses, a few observations are presented here on a unit-by-unit basis; these substantiate the accuracy and versatility of the proposed realization. (c) The frequency-domain phase error was measured as θ_e ≈ 2.4° at 2 MHz; adjustment of V for f_o control did not affect this error. Also, the ETDI is practically active-insensitive to port mismatch errors (ε ≪ 1) [28,33].
CFA-MMCC
(a) The availability of an in-built control-voltage node on the MMCC adds flexibility for the designer; this feature was verified experimentally by generating a quadrature (integrated) wave modulation response, as shown in Figure 2(f). This is a useful application of the CFA-MMCC based ETDI.
(b) Analysis shows that θ_e is dominantly caused by the parasitics of the CFA device; the component of this error due to the MMCC is negligible, since θ_e(MMCC) ≈ arctan(ω/ω_p) ≈ 0, the parasitic pole ω_p lying about three decades above the operating frequency (ω_p/ω ≈ 10³).
Thus the overall phase error of the integrator could be limited to extremely low values for frequencies well below the parasitic pole, after selecting moderate component values, while f_o may be tuned by V; these two adjustments are noninteracting. This substantiates the versatility of selecting a composite building block. The phase error was measured as listed in Table 1.

VCQO

(a) The literature shows that, although some quadrature oscillators were presented earlier, very few [10,17] provide an electronically tunable linear tuning law. These designs are based on electronic tuning by a bias current (I_B) that is replicated from V, which requires additional current-processing circuitry/hardware consuming extra quiescent power; moreover, such V-to-I_B conversion involves the thermal voltage [18] and hence temperature-sensitivity issues. In view of these comparative attributes, the proposed design appears to be superior.
Conclusion
The realization and analysis of a new ETDI, and its applicability to the design of a linear VCQO using the CFA-MMCC composite building block, are presented.
The effects of the device imperfections were examined and found to be quite negligible, as indicated by the low phase and magnitude deviations. The linear V-to-f_o tuning characteristic was experimentally verified in the range of 60 kHz-1.8 MHz, with good-quality, low-distortion sinewave generation. It may be mentioned that here the linear tuning feature is obtained by the simple, direct application of the same control voltage (V) to the appropriate terminal of the MMCC building blocks of the two ETDI stages. Additional current-processing circuitry for V-to-bias-current (I_B) conversion, with its associated hardware complexity and additional quiescent power requirement, is not needed, in contrast to previous OTA-based electronically tunable realizations; moreover, that conversion involves the thermal voltage (V_T), which may raise temperature-sensitivity issues [18]. Also, the use of superior-quality devices [28,32] in the proposed topology is expected to yield low-distortion generation over an extended range of frequencies. A comparative study of similar designs, presented in a concise table, indicates the superiority of the proposed implementation.
As a further study, we plan to utilize the linear VCQO for some cognitive radio spectrum assessment applications, after translating the proposed design to suitable building blocks with appropriate high-frequency specifications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.

Note on Table 2: frequency stability Ψ = 1 in the proposed design; Ψ = 1 is also reported in [10]; Ψ is not mentioned in the other references above. | 2018-04-03T05:42:07.965Z | 2015-04-19T00:00:00.000 | {
"year": 2015,
"sha1": "b0308bf4625ce6210bb6ee1329ced9b59e517b58",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/archive/2015/690923.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d00e36ac7bb2e72d7a882fec352863480945705e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
236254868 | pes2o/s2orc | v3-fos-license | Short Communication: Chemical composition, antioxidant and antimicrobial activity of Fagonia longispina (Zygophyllaceae) of Algerian
Ziane L, Djellouli M, Berreghioua A. 2021. Short Communication: Chemical composition, antioxidant and antimicrobial activity of Fagonia longispina (Zygophyllaceae) of Algeria. Biodiversitas 22: 3448-3453. The study's aim was to determine the antioxidant and antibacterial efficacy and to identify the main constituents of the essential oil of Fagonia longispina from southwestern Algeria. The essential oil from the aerial parts of the endemic plant F. longispina, collected in the Sahara region of southwestern Algeria, was isolated by hydrodistillation and analyzed by gas chromatography-mass spectrometry. Our work was then designed to evaluate the antioxidant activity of the essential oil of F. longispina by DPPH free-radical scavenging and HPTLC techniques. The antibacterial potency of the essential oil of this plant was tested against Staphylococcus aureus (ATCC 29213), Escherichia coli (ATCC 25922), and Bacillus cereus (ATCC 11778) by the disk diffusion assay. We found that the chemical profile of the essential oil revealed the presence of 14 compounds, the main ones being trans-pinocarveol (3.14%), p-anisaldehyde (4.24%), trans-geraniol (3.05%), carvacrol (18.72%), elemicin (22.85%), (Z,E)-farnesol (15.69%), caryophyllene oxide (2.68%), α-curcumene (1.75%), germacrene D (4.22%), longipinane (2.89%) and α-terpinene (2.74%). The antioxidant assay showed that the essential oil could scavenge the DPPH free radical (IC50 value of 2.1959 mg/mL). The essential oil exhibited very effective antimicrobial activity in the disk diffusion assay, with minimum inhibitory concentrations ranging from 0.75 μL/mL to 1.9714 μL/mL. These results show that this native plant may be a good candidate for further biological and pharmacological investigations.
INTRODUCTION
A large number of plants (aromatic, medicinal, spice) have interesting biological properties that enable their application in various areas, namely medicine, pharmacy and cosmetology (Alaa et al. 2017). Herbs can contain a wide variety of antioxidant molecules, for example phenolic compounds, nitrogen compounds, vitamins, terpenoids and so on, which exhibit significant antioxidant activity (Pirbalouti et al. 2013). Nowadays there is a growing interest in natural products with antioxidant properties that are supplied to human and animal organisms as food components or as specific preventive pharmaceuticals. The plant kingdom offers a wide range of natural antioxidants and antimicrobials. However, there is still a lack of understanding regarding the practical utility of the majority of them. Antioxidant phenolic acids, flavonoids, and alkaloids are common secondary plant metabolites found in a variety of fruits, vegetables, and herbs. By interfering with oxidizing agents and free radicals, they have been shown to provide protection against cancer and oxidative stress (Campanella et al. 2003; Djellouli et al. 2015; Zulueta et al. 2007; Ziane et al. 2020).
The woody plant Fagonia longispina belongs to the Zygophyllaceae family. It reaches a height of 10 to 20 centimeters. It is a small plant with ground-level branches radiating outward from the base. The whole plant is covered in coarse hairs that bind sand, and it bears purplish flowers with a bright red hue. F. longispina is a common plant, known locally as "Atlihia", and is used in traditional herbal medicine. Previous research has shown that F. longispina contains a number of secondary metabolites, including a wide range of antioxidants such as tannins, flavonoids, and saponins (Hamidi et al. 2012).
The antioxidant activity of F. longispina is reported here for the second time (Hamidi et al. 2014); there are also antioxidant studies of other species in the same family, such as those by Satpute et al. (2012) on Fagonia arabica and by Rashid et al. (2019) on Fagonia olivieri, another species of the genus. The aerial parts of the plant may provide a remedy for early-stage cancer; the plant is also traditionally used for the treatment of various skin lesions and digestive disorders in the southwest of Algeria (the Saoura region of Bechar), Northern Africa. This work aimed to perform a preliminary screening of the radical-scavenging activities of the extracts isolated from F. longispina (Zygophyllaceae) (Hamidi et al. 2012; Djellouli et al. 2013).
Plant material
Aerial parts of F. longispina were collected in March 2016 from Boukais, southwestern Algeria (Figure 1), and authenticated by the National Agency of Nature Protection (ANN) in Bechar, Algeria (Hamidi et al. 2012).
Extraction of essential oil
The dried aerial parts of F. longispina (1 kg) were subjected to hydrodistillation for 5 h, in three runs, using a Clevenger-type apparatus, according to the method outlined in the European Pharmacopoeia (Council of Europe 1997). The essential oil was then separated from the aqueous layer and dried over anhydrous sodium sulfate. The calculated average essential oil yield was 0.0392%. The essential oil was stored in sealed vials at low temperature (4°C) until GC-MS analysis.
GC-MS analysis
A Hewlett Packard Agilent 6890 GC system coupled with a 5973C MS was used for the analysis. An HP-5 MS fused-silica capillary column (30 m × 0.25 mm × 0.25 μm; Agilent, Santa Clara, CA) was used for the chromatographic separations. The oven temperature was programmed from 60°C (held 8 min) to 250°C at 2°C/min and then held isothermal for 10 min. The helium flow rate was 0.5 mL/min. The retention indices of all components were determined using n-alkanes as standards.
Identification of components
Individual constituents were identified by comparing their mass spectra with those of known compounds stored in the spectral database of the National Institute of Standards and Technology (NIST) connected to the GC-MS instrument.
Determination of antioxidant activity by DPPH method
The antioxidant potential of the essential oil was determined on the basis of its scavenging activity toward the stable 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical. In brief, 100 µL of each extract concentration in methanol was added to 1.9 mL of a DPPH methanol solution (0.004%). The mixture was vigorously shaken and then allowed to stand for 30 minutes at room temperature.
A solution of 100 µL methanol and 1.9 mL DPPH was used as the control. The DPPH radical-scavenging activity was expressed as an inhibition percentage using the following equation (Benmehdi et al. 2013):
% Inhibition = [(A_B − A_S)/A_B] × 100
where A_B is the absorbance of the control reaction (containing all reagents except the test compound) and A_S is the absorbance of the test compound.
Ascorbic acid, an established antioxidant, was used as the reference (positive control). The experiments were performed in triplicate. The extract concentration providing 50% inhibition (IC50) was determined from the graph of inhibition percentage plotted against extract concentration (0.5; 0.25; 0.125; 0.0625; 0.0312; 0.0156; 0.0078 mg/mL).
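A hedged sketch of this IC50 determination is given below: the percentage inhibition is computed from the absorbances using the equation above, and the concentration giving 50% inhibition is then read off by interpolation. All absorbance values are hypothetical.

```python
# % inhibition = (A_B - A_S) / A_B * 100, then IC50 by linear interpolation on
# the inhibition-vs-concentration curve described in the text.
import numpy as np

conc = np.array([0.0078, 0.0156, 0.0312, 0.0625, 0.125, 0.25, 0.5])  # mg/mL
a_blank = 0.90                                                   # A_B, DPPH control
a_sample = np.array([0.86, 0.82, 0.75, 0.66, 0.54, 0.40, 0.25])  # A_S, hypothetical

inhibition = (a_blank - a_sample) / a_blank * 100.0
ic50 = np.interp(50.0, inhibition, conc)  # inhibition values must be increasing
print(f"IC50 ~ {ic50:.3f} mg/mL")
```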
High-performance thin-layer chromatography (HPTLC) study of extracts and DPPH
The essential oil of F. longispina was subjected to high-performance thin-layer chromatography on silica gel plates (20 × 20 cm, Silica gel F254, Merck). A solvent system of methanol and chloroform (40:60 v/v) was optimized for the essential oil of F. longispina. The oil was spotted on a TLC silica gel plate, which was developed to a distance of 70 mm in a sandwich TLC chamber. After 15 minutes of air drying, the plates were sprayed with 0.004% (w/v) DPPH reagent prepared in methanol and with 10% (v/v) sulphuric acid, respectively. After spraying, the plates were heated at 60°C for exactly 30 minutes to visualize the activities (Amari et al. 2014).
Antibacterial activity
We used the disc diffusion method to screen the antibacterial activity of the essential oil against three bacterial strains, namely: Staphylococcus aureus (ATCC 29213; positive control: penicillin), Escherichia coli (ATCC 25922; positive control: gentamycin sulfate injection), and Bacillus cereus (ATCC 11778). The bacterial cultures were maintained by serial sub-culturing every month on nutrient agar slants and incubating at 37°C for 18-24 h. The minimum inhibitory concentrations (MIC) of the extracts were determined using a serial microplate dilution assay against each test bacterial species.
On the TLC plate, using the TLC bioautography technique, zones of radical-scavenging activity appeared as pale-yellow spots on a purple background for the extracts under study, as for ascorbic acid (Hasan et al. 2009). After spraying the plates with sulphuric acid (Figure 4), purple spots were observed, indicating the presence of several bioactive compounds in the essential oil of F. longispina. In addition, yellow spots were observed after spraying the plates with DPPH solution (Figure 3), indicating the presence of antioxidant compounds in the extracts.
Antibacterial activity
The in vitro antibacterial activity of the essential oil against the bacteria employed was qualitatively assessed by the presence or absence of inhibition zones (Nahar et al. 2016). The results of the antibacterial activity screening of F. longispina essential oil are summarized in Table 2. With the broth dilution method, the MIC values for the essential oil of the aerial parts were in the range of 0.75-1.9714 μL/mL.
Discussion
Aerial parts of this plant have previously been screened for the principal classes of secondary metabolites, such as anthraquinones, terpenes, saponins, alkaloids, coumarins, flavonoids and tannins (Hamidi et al. 2012). Hamidi et al. (2014) studied the phytochemical constituents of different extracts (hexane, ethyl ether and chloroform) of F. longispina by gas chromatography-mass spectrometry (GC-MS); that study revealed 13 compounds. In a further GC-MS study of the phytochemical constituents of the ethyl acetate (EtOAc) extract from the aerial parts of F. longispina (Zygophyllaceae), 12 compounds were quantified (Hamidi et al. 2016). Dastagir et al. (2014) described oxygenated monoterpenes as the major components of plants of the family Zygophyllaceae (Fagonia cretica and Tribulus terrestris) collected in Pakistan.
Geographic area, climatic influence, harvest season, soil condition, age of the plant parts, state of the plant material used (dried or fresh), part of the plant used, time of collection, and chemotype could all contribute to qualitative and quantitative variations in the chemical composition of essential oils. Although the DPPH test is considered simple, fast and easy to perform, our experience showed certain difficulties in measuring the state of reduction: it is a dynamic phenomenon occurring at low antioxidant concentrations (traces at the ppm level) and accompanied by the formation of numerous, in certain cases unstable, compounds.
Previous results have shown that the antioxidant capacity of fig leaves is significantly correlated with their phenolic contents (Mahmoudi et al. 2016). Antioxidant activities have recently become a topic of increasing interest to health and food science researchers as well as medical experts (Huang et al. 2019). The scavenging of the stable DPPH radical is a widely used method to evaluate the free-radical-scavenging ability of various samples, including plant extracts. This approach was used in this research to examine the antioxidant potential of the essential oil of the Algerian species F. longispina.
According to the results obtained (Figure 5), and comparing the IC50 value of each extract with that of ascorbic acid as an authentic sample (IC50 = 0.0331 mg/mL) (Benmehdi et al. 2013), the essential oil of F. longispina (Figure 6) showed activity with an IC50 value of 2.1959 mg/mL. Generally, the antioxidant activity of essential oils is related to their major compounds. These results are in accordance with other antioxidant studies carried out on other species, such as those of Hamidi et al. (2014), Pervaiz-Iqbal et al. (2012), and Hasan et al. (2009). These are promising plants for more detailed investigation of their antioxidant properties and application possibilities. With regard to antimicrobial activity, the publication of Hamidi et al. (2014) described the activity of F. longispina extracts against Enterococcus faecalis, Bacillus spizigenil, Salmonella heidelberg and Escherichia coli using the agar diffusion method. In the current study, the essential oil of F. longispina was found to have moderate to high antimicrobial activity: it showed strong inhibition of B. cereus and low activity against S. aureus. This antimicrobial activity may be due to the chemical composition of the essential oil, which is rich in oxygenated compounds.
In conclusion, this research investigated the chemical composition and the antioxidant and antimicrobial properties of the essential oil extracted from F. longispina. The GC-MS results revealed the presence of 14 volatile compounds. The essential oil extracted from the aerial parts of F. longispina had high antioxidant activity, and it demonstrated a high level of antibacterial activity against both gram-positive and gram-negative bacteria. | 2021-07-26T00:06:23.361Z | 2021-06-05T00:00:00.000 | {
"year": 2021,
"sha1": "a406a3f316704a19b3c4641faf1fb03d07a6ef53",
"oa_license": "CCBYNCSA",
"oa_url": "https://smujo.id/biodiv/article/download/8489/4900",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5f25bc1004955baf40689be9e4fc569096d33b27",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
234804246 | pes2o/s2orc | v3-fos-license | Oxidative stress status in severe OHSS patients who underwent long agonist protocol intracytoplasmic sperm injection cycles
Department of Obstetrics and Gynecology, Etlik Women's Health and Teaching Hospital, 06170 Ankara, Turkey; Department of Obstetrics and Gynecology, Faculty of Medicine, Karadeniz Technical University, 61080 Trabzon, Turkey; Program of Medical Laboratory Techniques, Vocational School of Health Sciences, Karadeniz Technical University, 61080 Trabzon, Turkey; Department of Histology and Embryology, Ankara University, Faculty of Medicine, 06590 Ankara, Turkey
Introduction
Current infertility treatment strategies may result in ovarian hyperstimulation syndrome (OHSS), which is considered a thrombotic disease. OHSS affects 5% of patients who undergo IVF and induces microvascular thrombosis. In its pathogenesis, patients respond excessively to exogenous gonadotropins and experience a change in the hemostatic system and marked hemoconcentration [1]. Following ovulation triggering with human chorionic gonadotrophin, the serum levels of most coagulation and fibrinolytic factors increase within 2 to 8 days [2]. In this way, OHSS can cause microvascular thrombosis and circulatory dysfunction that lead to tissue ischemia.
Ischemia-modified albumin (IMA) is a novel marker for assessing tissue ischemia, and IMA levels correlate with tissue ischemia [3,4]. In this study, we expected that microvascular thrombosis caused by OHSS might elevate serum IMA levels, which could alert clinicians to severe complications. We also aimed to establish an association between OHSS and changes in total antioxidant capacity (TAC), total oxidative capacity (TOS), oxidative stress index (OSI), and serum malondialdehyde (MDA) levels.
Materials and methods
This prospective study included women with primary infertility subjected to ICSI-ET cycles who developed moderate or severe OHSS (study group, group I). The control group (group II) consisted of women with primary infertility subjected to ICSI-ET cycles without any sign of OHSS. Members of both groups were younger than 40 years and had comparable body mass index (BMI) scores. All patients were screened for inherited or acquired thrombophilia. We excluded women with known inherited or acquired thrombophilia, a history of thromboembolism, a history of antithrombotic treatment within the past three months, systemic diseases, and smoking. Patients in both groups were hyper-responders (PCOS) who underwent ART for oligospermia or azoospermia. The study group contained 25 women, and the control group contained 27. We used the Rotterdam criteria to diagnose PCOS; two out of three of the following criteria are required for a diagnosis: oligo- and/or anovulation, clinical and/or biochemical signs of hyperandrogenism, and polycystic ovaries (determined by ultrasound) [5]. Institutional Review Board (Etlik Zubeyde Hanım EA Hospital) approval was obtained on 04 August 2011 under approval number 139. Informed consent was also obtained from each participant.
The luteal long leuprolide acetate controlled ovarian stimulation protocol was used for all women. Pituitary down-regulation with leuprolide acetate (1 mg/day; Lucrin, Abbott Laboratories, North Chicago, IL) began on day 21 of the previous menstrual cycle. On the second and third days of the subsequent menstruation, subcutaneous administration of recombinant gonadotropin (Gonal-F, 150-225 IU/day, Laboratories Serono S.A., Aubonne, Switzerland) was started. Serum estradiol measurement and folliculometry via transvaginal ultrasound were used to monitor ovulation induction. Ovulation was triggered with recombinant human chorionic gonadotropin (0.25 µg; Ovitrelle, subcutaneously; Serono, Istanbul, Turkey) once at least two or three mature follicles (> 17 mm in diameter) were observed. Oocyte pick-up was scheduled 34-36 hours later. The gonadotropin dosage was adjusted for each patient according to the antral follicle count, age, and serum FSH/E2 levels. All women underwent day-3 embryo transfer (one embryo, or two for patients aged > 35). Vaginal progesterone gel (Crinone 8% gel, Serono S.A., Aubonne, Switzerland) was used twice daily for luteal phase support. Four weeks after embryo transfer, visualization of a fetal heartbeat within the gestational sac on transvaginal sonography was accepted as clinical pregnancy.
The published classification of OHSS severity was used [6]. Based on this classification, women with complaints of abdominal distension and discomfort, nausea and/or vomiting, and sonographic findings (ovarian size of 8-12 cm, ascites) were diagnosed with moderate OHSS (n = 19). Women with all the moderate OHSS findings, at least 2 kg weight gain, and altered laboratory findings (hematocrit > 45%, white blood cell count > 15,000, oliguria, creatinine of 1.0-1.5 mg/dL, creatinine clearance of > 50 mL/min, and high serum ALT and AST results) were diagnosed with severe OHSS (n = 6). Oocyte and embryo quality classifications were based on the currently published system [7].
Women with moderate or severe OHSS were hospitalized. Avoidance of physical activity, oral or parenteral hydration, daily laboratory testing (CBC, electrolytes, creatinine, serum albumin, and liver enzymes), and physical and ultrasound examinations were performed. Weight, abdominal circumference, and any worsening signs and symptoms were assessed daily. Disturbed fluid and electrolyte balances were corrected, the secondary complications of ascites and hydrothorax were relieved, and thromboembolic events were prevented with low-molecular-weight heparin. Ultrasound-guided culdocentesis was performed in women with tense ascites, orthopnea, a rapid increase of abdominal fluid, or any other sign of illness progression.
The control group (group II) comprised patients with PCOS who underwent the same controlled ovulation induction protocol but did not demonstrate symptoms of OHSS.
Blood samples were collected on the day on which ovulation was triggered. Antecubital venous blood samples of approximately 5 mL were taken, and the separated serum samples were stored at -80°C until the end of the experiment. The author who analyzed the samples did not know whether each sample belonged to the study or the control group. Serum levels of IMA, MDA, TOS, TAS, and OSI were measured.
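The paper does not give the formula used for the oxidative stress index; the sketch below assumes the commonly used convention from Erel-type assays (the ratio of total oxidant status to total antioxidant status, expressed in matching units), so both the convention and the input values are assumptions.

```python
# Oxidative stress index as commonly defined for Erel-type assays (assumed here):
# OSI = TOS (umol H2O2 equiv./L) / TAS (umol Trolox equiv./L) * 100.
def oxidative_stress_index(tos_umol_per_L: float, tas_mmol_per_L: float) -> float:
    tas_umol_per_L = tas_mmol_per_L * 1000.0  # convert TAS from mmol/L to umol/L
    return tos_umol_per_L / tas_umol_per_L * 100.0

print(round(oxidative_stress_index(25.0, 1.5), 2))  # hypothetical TOS and TAS values
```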
We hypothesized that OHSS may have detrimental effects on serum oxidative stress markers.
We used Student's t-test to compare the parametric variables and Fisher's exact chi-square test to compare the nonparametric variables. P values were calculated using SPSS 13.0. Spearman correlation analysis was used to assess the relationships among serum IMA, TAC, TOC, OSI, and MDA; P < 0.05 was accepted as statistically significant.
A post-hoc power analysis was performed for 25 patients in each group, taking the mean serum IMA value as the endpoint (0.67 for the study group and 0.55 for the control group, with standard deviation values of 0.1). The calculated power was 0.98 with a 5% type I error.
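The reported post-hoc power can be reproduced with standard software; the sketch below assumes a two-sided, two-sample t-test with equal group sizes and a 5% type I error.

```python
# Post-hoc power for the reported IMA means (0.67 vs 0.55, SD = 0.1, n = 25/group).
from statsmodels.stats.power import TTestIndPower

effect_size = (0.67 - 0.55) / 0.1  # Cohen's d = 1.2
power = TTestIndPower().power(effect_size=effect_size, nobs1=25,
                              alpha=0.05, ratio=1.0, alternative="two-sided")
print(round(power, 2))  # approximately 0.98, matching the reported value
```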
This case-control study fulfilled the STROBE requirements of the Enhancing the Quality and Transparency of Health Research (EQUATOR) network guidelines.
Results
The recruited participants comprised 52 patients requiring IVF because of male factors. There were no significant differences between the study and control groups in the baseline demographic characteristics (Table 1). A comparison of the two groups' serum albumin levels revealed no statistically significant difference.
A comparison of IMA, TAC, TOC, OSI, and MDA levels between the two groups is shown in Table 2.
Higher numbers of retrieved and mature oocytes and a lower fertilization rate were found in the OHSS group than in the control group. The clinical pregnancy rate was lower in group I, without reaching statistical significance. There were no significant differences in the pregnancy rates of women who underwent one or two embryo transfers in the OHSS group compared with the control group.
Bivariate analysis revealed that serum TOC levels were well correlated with the total number of retrieved oocytes (r = 0.515, P < 0.001), total number of retrieved MII oocytes (r = 0.439, P = 0.001), total number of dominant oocytes on HCG day (r = 0.417, P = 0.002), total number of grade I and II embryos (r = -0.437, P = 0.001), and serum E2 levels on HCG day (r = 0.483, P < 0.001) in the whole group. Similarly, the OSI ratio was well correlated with total number of retrieved oocytes (r = 0.467, P < 0.001), total number of retrieved MII oocytes (r = 0.396, P = 0.004), total number of dominant oocytes on HCG day (r = 0.346, P = 0.012), total number of grade I and II embryos (r = -0.422, P = 0.002), and serum E2 levels on ovulation trigger day (r = 0.398, P < 0.003) in the whole group. Serum IMA levels were negatively correlated with oocyte quality scores (r = -0.299, P = 0.031).
Discussion
Controlled ovarian hyperstimulation can significantly affect hepatic and renal functions in patients by causing OHSS [13,14]. OHSS is characterized by altered capillary permeability, which may result in the transfer of intravascular fluid to extravascular areas, leading to systemic endothelial dysfunction. Fluid escape into a tertiary space can result in hemoconcentration, which can in turn lead to thromboembolic events [15]. This phenomenon is similar to sepsis, in that OHSS patients demonstrate increased vascular permeability, the main cause of which is high serum levels of vascular endothelial growth factor (VEGF) [16,17].
Numerous studies have also emphasized the importance of reactive oxygen species (ROS) in reproduction [18-21]. Unexpected events such as OHSS could negatively affect the delicate balance between antioxidants and ROS; in addition, ROS release may result from oxidative stress. As shown in our study, factors such as OHSS that lead to ischemia may increase serum IMA. Rising IMA levels may be a signal of increased ROS and its likely negative influence on oocyte quality and implantation. Our study also revealed increased TOC, OSI, and MDA levels, which were probably related to the increased IMA; increased TOC, OSI, and MDA may reflect increased reactive oxygen species levels and oxidative stress.
The interrelationship between the follicular fluid levels of oxidative stress markers and embryos or oocytes is a debatable subject. Some authors suggested that high ROS concentrations in follicular fluid may alter the quality of oocytes in tubal infertile patients [22]. One limitation of our study was that we did not establish ROS levels directly, but attempted to understand pathogenesis indirectly by measuring TAC, TOC, OSI, and MDA levels. Further studies could be designed to examine this point. Another limitation of the study was that we measured only serum levels, not follicular fluid levels of TAC, TOC, OSI, and MDA.
In our study, we found that the retrieved oocyte counts were higher in the OHSS group, but the fertilization rate and the grade 1 and 2 embryo counts were higher in the control group. This can be explained by a variety of factors, including tissue ischemia and increased oxidative stress. The endometrium must be of sufficiently high quality for implantation; this process is very delicate, and follicular development can be disturbed by various factors that may interfere with implantation. Increased IMA levels, which may be a sign of microthrombotic events, might therefore be the cause of the changing levels of TAC, TOC, OSI, and MDA. In the literature, lower TAC levels are linked with fertilization failure [23]. In our study, lower TAC levels were not associated with significant differences in clinical or biochemical pregnancy rates, even though there was a significant difference in grade 1 and 2 embryo counts. The low number of patients may therefore prevent us from making firm suggestions and conclusions.
Microthrombotic effects may also result in chromosomal aberrations in the oocyte or embryo in women with OHSS. This may be related to intrafollicular hypoxia and insufficient angiogenesis in the follicles of OHSS patients [24]. The authors also agreed on the need for balanced oxidative stress in folliculogenesis and oocyte maturation [25].
In the light of recent studies, oxidative stress has been accepted as a valuable parameter in the success of controlled ovarian stimulation. Oxidative stress may alter oocyte quality, sperm-oocyte interaction, fertilization, implantation, and embryonic growth [26]. Some studies show that various factors, even including light exposure, can cause ROS production in culture media. ROS may decrease the rate of blastocyst development and increase embryo fragmentation and apoptosis, which might explain the detrimental effects of OHSS [23,27].
Successful IVF may also be related to clinical (e.g., age, AMH, FSH dose), laboratory, and physician-associated factors (e.g., low experience, embryo transfer technique) [28,29]. To date, considerable effort has been focused on identifying a correct algorithm that uses a woman's age and ovarian reserve markers as tools to optimize the follicle-stimulating hormone (FSH) starting dose in IVF procedures. Nevertheless, the currently available evidence regarding women with PCOS, particularly those with high AMH, appears inadequate [30,31]. This point has also been an important limitation in preventing OHSS, especially when determining the correct starting FSH dose in IVF patients. In addition, preventing OHSS during controlled ovarian stimulation may increase patient satisfaction and decrease the incidence of severe microvascular complications. In our study, serum levels were assessed on the trigger day, which may not represent the entire OHSS process; this was an additional limitation of the current study. Further studies with serial serum marker results until the OHSS improves would provide better insight into the oxidative effect.
Conclusions
In the light of these findings, high oxidative stress might influence oocyte maturation and implantation in women with OHSS. This study also revealed that OHSS could initiate a thrombotic cascade caused by the high oxidative stress condition. IMA elevation might be an indicator of microvascular thrombosis. However, antioxidant supplementation along with either LMWH or aspirin may reduce the detrimental influence of OHSS. Larger clinical trials are necessary to explore this hypothesis further.
Author contributions
RD, ESGG, SD, SA conceived, designed and performed the experiments. SG, ESGG analyzed the data; AM contributed reagents and materials; SG, RD and ESGG wrote the paper.
Ethics approval and consent to participate
Clinical trials registration number: NCT02202278. | 2021-05-21T16:56:54.287Z | 2021-04-15T00:00:00.000 | {
"year": 2021,
"sha1": "175f72fa8ec2b329f95368f52c47c2a1541bd09d",
"oa_license": null,
"oa_url": "https://ceog.imrpress.com/EN/article/downloadArticleFile.do?attachType=PDF&id=5311",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "383e412f1bc9ca65296d608e3c0182e17aa16eb4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55170901 | pes2o/s2orc | v3-fos-license | INTEGRATION OF OVERALL EQUIPMENT EFFECTIVENESS (OEE) AND RELIABILITY METHOD FOR MEASURING MACHINE EFFECTIVENESS
Maintenance is an important process in a manufacturing system. Thus it should be conducted and measured effectively to ensure performance efficiency. A variety of studies have been conducted on maintenance as affected by factors such as productivity, cost, employee skills, resource utilisation, equipment, processes, and maintenance task planning and scheduling [1,2]. According to Coetzee [3], equipment is the most significant factor affecting maintenance performance because it is directly influenced by maintenance activities. This paper proposes an equipment performance and reliability (EPR) model for measuring maintenance performance based on machine effectiveness. The model is developed in four phases, using Pareto analysis for machine selection, and failure mode and effect analysis (FMEA) for failure analysis processes. Machine effectiveness is measured using the integration of overall equipment effectiveness and the reliability principle. The result is interpreted in terms of maintenance effectiveness, using five health index levels as bases. The model is implemented in a semiconductor company, and the outcomes confirm the practicality of the EPR model as it helps companies to measure maintenance effectiveness.
INTRODUCTION
Maintenance is done to ensure that machines are in good condition, serviceable, and operationally safe for producing quality products. Chan et al. [4] reported that 15% to 40% of the total production cost is attributed to maintenance activities. However, up to 33% of this cost is spent unnecessarily [5]. This wastage shows that effective maintenance and equipment reliability can help companies to reduce waste and improve productivity without investing in costly equipment and systems [6]. Waste accrues in maintenance costs because of failures in maintenance activities, such as using the wrong maintenance techniques, assigning under-skilled workers to such tasks, and using fake spare parts. It is also caused by negligence in determining machine specifications and operational safety features that may contribute to the overall utilisation of the machines. This practice reduces machine reliability and sometimes causes dangerous accidents.
Maintenance activities require continuous monitoring, control, and measurement to determine performance levels. Performance measurement is a means of quantifying the effectiveness and efficiency of action [7]. Measurement provides a means of capturing performance data, which can be used to aid decision-making and the formulation of plans for improvement. Tangen [8] stated that performance measurements are often used to increase the competitiveness and profitability of manufacturing companies through the support and encouragement of productivity improvements.
Maintenance performance measurement requires a simple yet effective model that reveals the actual situation based on the measured factors [9]. A conceptual model or theoretical construct is needed as a strong basis for research, especially in case studies [10]. The model may help companies to develop a measurement process that addresses maintenance performance levels, analyses the causes of ineffectiveness, and improves the system.
EQUIPMENT PERFORMANCE AND RELIABILITY (EPR) MODEL
Maintenance performance is defined as the periodic measurement of the state or condition of the processes involved in conducting maintenance functions [4]. In this paper, maintenance performance is measured using equipment performance and reliability as bases. This gauging system is grounded on the hypothesis that conducting good maintenance activities results in effective and reliable machines, given that maintenance directly affects machine effectiveness. The model introduced in this paper is called the Equipment Performance and Reliability (EPR) model. It consists of four phases: machine identification, critical system assessment, maintenance performance measurement, and maintenance performance level assessment. The model is illustrated in Figure 1.
Phase I: Machine identification
The first phase involves identifying 'critical machines' in a manufacturing plant where production processes are conducted. 'Critical machine' refers to equipment whose failure has the greatest effect on the manufacturing process. A manufacturing plant usually has one or more machines for each process, making it a complex manufacturing plant. However, because of cost, time, and resource constraints, analysing every machine is impossible in maintenance management, so an identification process is needed to select the equipment that requires immediate attention. Pareto analysis is routinely used in identifying the failures that contribute to the majority of machine maintenance costs and operation downtimes [11]. The Pareto analysis principle, also known as the 80-20 rule, states that for many events, 80% of the effects come from 20% of the causes [12,13]. By concentrating on the 20% (i.e., the critical machines), the measurement process and improvement plan produce a much more favourable effect and can result in more effective maintenance [14]. Three basic steps characterise this phase.
2.1.1 Step 1: Loss identification
The first step identifies the losses that occur in the manufacturing plant. Losses refer to equipment-related failures, problems, or breakdowns. The identification process can be carried out by analysing data from historical and failure records, which are generally kept by the manufacturing company. Historical records are the daily maintenance documents containing the information collected by the maintenance department, including planned maintenance activities such as preventive maintenance (PM). The failure record is the documentation that contains a detailed analysis of the failures that occur, including the examination of the occurrence, time, duration, and location of each failure, and the work done to repair it. This step focuses on losses related to mechanical factors; thus all data collected should exclude failures and other problems caused by humans, materials, and facilities. Given that many types of losses arise from machines, the types of losses should be grouped to enable easy analysis and identification. Nakajima [15] identified 'six big losses' that affect machine effectiveness (Table 1). The authors of [21] applied loss segregation in their research and debated the definition of each loss type. On the basis of the definition of each loss group, we choose only three big losses for the first phase: breakdown, setup and adjustment, and idling and minor stoppages. This restriction is adopted because maintenance activities cannot improve a machine's acceleration or speed, which is more related to machine design. For process defect losses, most products are rejected because of inappropriate or defective materials, unsuitable process environments, and human error during process setup.
Startup loss is also disregarded because it involves manufacturing principles, and can only be improved using good quality materials.
2.1.2 Step 2: Loss occurrence analysis
Once the failures are grouped, a loss occurrence analysis is conducted. The aim is to record and calculate the occurrence, frequency, or rate of the losses that arise in the manufacturing plant. The occurrences can be counted on a weekly, monthly, or yearly basis. The total number of loss occurrences is calculated by adding up the loss occurrences of each machine. Each machine's count is then divided by this total to determine the percentage for each individual problem classification. The cumulative percentage (C) of each type of loss is calculated to draw the Pareto chart.
2.1.3 Step 3: Machine selection
The final step of the first phase involves machine selection. The basic features of the Pareto chart are columns plotted against two vertical axes (Figure 2). In the chart, the columns represent the frequency of loss occurrences, ordered from highest to lowest, while a line represents the cumulative percentage. The left vertical axis is marked in increments from zero to the total number of all the losses classified, while the right vertical axis is marked in increments from zero to 100%. The most critical machine is the one falling under the first (tallest) column, and this machine should be selected for the maintenance performance measurement process.
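The selection logic of Steps 2 and 3 is straightforward to express in code. The following minimal Python sketch uses hypothetical loss counts (the process names and numbers are illustrative, not data from the case study) to compute the percentages and cumulative percentages behind the Pareto chart and to pick the critical machine:

```python
# A minimal sketch of Steps 2 and 3, using hypothetical loss counts per
# process; the names and numbers are illustrative, not case-study data.
loss_occurrences = {
    "wire bond": 120,
    "die bond": 45,
    "molding": 30,
    "trim and form": 15,
    "plating": 10,
    "test": 5,
}

total = sum(loss_occurrences.values())
ranked = sorted(loss_occurrences.items(), key=lambda kv: kv[1], reverse=True)

# Step 2: percentage and cumulative percentage (C) behind the Pareto chart.
cumulative = 0.0
for process, count in ranked:
    share = 100.0 * count / total
    cumulative += share
    print(f"{process:14s} {count:4d}  {share:5.1f}%  cum. {cumulative:5.1f}%")

# Step 3: the first (tallest) column identifies the critical machine.
critical_machine = ranked[0][0]
print("Critical machine:", critical_machine)
```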
2.2 Phase II: Critical machine assessment
Once the critical machine is identified, the machine failures are analysed. The purpose of this approach is to implement activities that eliminate or reduce failures, beginning with the highest-priority problems. Using failure mode and effect analysis (FMEA), failures are prioritised according to how serious their consequences are, how frequently they occur, and how easily they can be detected. FMEA is a structured, bottom-up approach that begins with identifying the potential failure modes at one level, and then investigates the effect on the next subsystem level [22,23]. Five steps are involved in implementing Phase II.
2.2.1 Step 1: Identification of critical machine function
A machine consists of various components with different working functions and purposes. On the basis of the information from the machine operating system, manual, and components list, the first step in FMEA is to list the function of each component of the critical machine. Machine function is defined as the task assigned to a component of the critical machine to accomplish specific processes. This step aims to simplify and focus the analysis on the smaller component levels. This way, a direct and accurate maintenance solution can be planned for the critical machine, based on the functionality of its components.
The process can be carried out by first constructing a functional block diagram (FBD) of the critical machine. The FBD is constructed to show diagrammatically the breakdown of a machine into the components that are required to achieve successful operation. The basic structure of an FBD is shown in Figure 3, and the terms used are defined in Table 2 [24].

Table 2: Terms and definitions in FBD [24]
Function block: The items in the function block may include machine functions or components, depending on the FBD indenture level.
Numbering: A uniform numbering system is developed for the breakdown order of the functional system. Providing traceability through each level of indenture is essential. Functions identified in the FBD at each level should be numbered in a manner that preserves the continuity of functions and provides information on the function origin throughout the system. The top-level FBD should be numbered in sequence: 1.0, 2.0, 3.0, etc.
Input and output: The input and output of the system in the FBD are presented in a single box enclosed by a dashed line.
Flow connection: Lines connecting function blocks indicate input and output flow.
Boundary: The border line between the system functions is represented by a single box enclosed by a solid line.
2.2.2 Step 2: Identification of potential failure modes
The second step in FMEA is to identify the potential failure modes of the critical machine. A failure mode describes the manner in which a component fails to perform its intended process. During this step, FMEA assesses the risk associated with each failure mode by rating its severity. The severity (S) rating considers the worst potential consequences of a failure, determined by the degree of injury, property damage, or system damage that ultimately occurs. Severity is rated quantitatively by experienced or expert workers in the work area, such as the process engineer, maintenance engineer, or the technicians responsible for the selected machine: a team of workers who operate the machine on a regular basis and are therefore the most familiar with its operations. Ratings ranging from 1 to 10, based on the quantitative judgment of these experts, are used.
2.2.3 Step 3: Identification of potential failure effects
The potential effects of each failure mode are identified, pertaining to the changes or consequences that stem from the failure mode. The effects are observed and recorded to assess effective maintenance action for the failure by looking at the historical record of previous failures, as well as the machine handbook, operation manuals, and actual observations of the machine. During this stage, rating the likelihood of occurrence (O) for each failure cause is necessary. The failure occurrence or failure rate represents the number of failures that occur in the identified failure mode. Experts make decisions by referring to the historical record of failure occurrences, and assess whether the failure has a remote, low, moderate, high, or very high probability of occurrence during operation. The rate is numerically valued from 1 to 10 points.
2.2.4 Step 4: Identification of potential failure causes
The fourth step extends the analysis of each failure mode by identifying its potential cause. The identification process can be carried out by asking questions such as: What could cause the component to fail in this manner? What circumstances could cause the component to fail to perform its function? What could cause the component to fail to deliver its intended function? During this step, the likelihood of prior detection (D) for each cause of failure is identified and rated from 1 to 10, as in the previous steps.
2.2.5 Step 5: Evaluation of current maintenance action
The next step is the evaluation of current maintenance action. This step is significant, because any improvement conducted at a later stage can be planned using previous maintenance activities as bases. This eliminates redundant action plans and ensures more effective maintenance. Here, the risk priority number (RPN) is calculated for each failure mode. The RPN is obtained by multiplying the ratings of severity (S), likelihood of occurrence (O), and likelihood of detection (D) obtained in the previous steps. Given that all the ratings are taken in the integer interval of 1 to 10, the three factors are considered to have the same weight in the RPN score. The RPN calculation is expressed as

RPN_i = S_i × O_i × D_i

where i is the failure mode number, with i = 1, ..., n. The assumption of this step is that the higher the value of the RPN, the greater the risk of failure, and the lower the value of the RPN, the lesser the risk. Thus the RPN score prioritises improvement activities by focusing first on the riskiest failure mode.
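A minimal sketch of this prioritisation follows; the failure modes and the 1-10 ratings are hypothetical placeholders, not values from the case study:

```python
# A minimal sketch of the RPN calculation (RPN_i = S_i * O_i * D_i).
# The failure modes and the 1-10 ratings below are hypothetical
# placeholders, not values from the case study.
failure_modes = [
    # (failure mode, severity S, occurrence O, detection D)
    ("wire clamp contamination", 7, 8, 5),
    ("transducer drift",         6, 4, 6),
    ("capillary wear",           5, 6, 3),
]

rpn_scores = [(name, s * o * d) for name, s, o, d in failure_modes]

# Higher RPN means higher risk, so improvement starts at the top of the list.
for name, score in sorted(rpn_scores, key=lambda x: x[1], reverse=True):
    print(f"{name:26s} RPN = {score}")
```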
2.3 Phase III: Machine performance measurement
After the critical machine is assessed and the problematic functions of the machine are identified, the focus is directed toward measuring machine effectiveness and reliability. Overall equipment effectiveness (OEE) and reliability are the main concepts adopted for the model, because both methods can be used to measure maintenance performance based on machine effectiveness. The key point in this phase is the assumption that machine effectiveness can be achieved with effective maintenance activities. OEE is a diagnostic function for multi-attribute factors: availability, performance rate, and product quality rate. The measurement method provides the total effectiveness of machine performance during its operation. Meanwhile, the reliability principle can be used to gauge maintenance performance based on machine dependability and lifetime. The main objectives of reliability analysis are to reduce the failure rate and to extend machine operating time. This phase can be divided into two steps.
2.3.1 Step 1: Calculation of machine effectiveness
Step 1 uses the OEE method to calculate machine effectiveness (ME). In accordance with OEE, a machine's availability measures the fraction of the total operating time in an observation period, such as a week or a month, in which the machine is capable of performing processing work. Available time excludes times when the machine is non-operational due to repairs or queued repair schedules. It also excludes times when the machine is undergoing preventive maintenance, cleaning, calibration, or re-qualification after maintenance, or is being used in engineering efforts. The available time for the machine includes actual processing time and idle time. In a manufacturing plant, unavailable time (the complement of available time) is commonly called machine 'downtime'.
For the machine performance rate element, the OEE method measures the fraction of the total operating time in an observation period in which the machine asset is actually engaged in a processing activity. For practical reasons, time credited to the performance rate may include not only actual processing time, but also the short periods in which the machine is idle while operators perform handling, program downloading, and metrology tasks required between consecutive machine cycles.
The performance rate also considers the comparison between the actual production and the expected production of the process. It represents the associated speed losses caused by poor adjustment carried out during maintenance work. An ME time frame can be developed from the information about these elements. The time frame is drawn to show schematically how the elements are determined and calculated [25]. The durations of the ME time frame are determined in relation to the three big losses. Figure 4 shows the computation of the time frame for machine effectiveness [16,26]. The time frame is structured as three levels of bars. The data can be collected from production data containing the production schedule and operation. In the first level, the top bar represents the planned production time (T_plan) and shows the total time a machine is supposed to be available to produce a product. The planned production time for a selected machine can therefore be calculated by multiplying the days of work in a month by the total number of minutes the machine is expected to operate in a day, as in Equation 2:

T_plan = δ × t_d (2)

where δ is the number of working days and t_d is the daily production time (converted to minutes) planned for the machine to operate.
The bar in the second level represents the actual production time (T_act), calculated by eliminating downtime losses such as machine failures and setup and adjustments. This is the time planned for machine availability. The duration of T_plan is the maximum time of machine operation, but it is rarely achieved because of unplanned and planned downtimes; therefore T_act is used instead. T_act is affected by availability losses, which are grouped into breakdown downtime and setup and adjustment downtime. T_act is expressed as

T_act = T_plan − (T_updt + T_pdt) (3)

T_updt in Equation 3 denotes the duration of unplanned downtime that occurs during the entire T_plan. T_updt occurs when a machine experiences failures or breakdowns, while T_pdt is the duration of downtime planned on the machine for maintenance actions or breaks, such as: implementation of PM or routine check-ups and calibrations of the machine; machine trials and process improvement activities; machine stoppages for changing components to produce different products; and machine stoppages for software installation.
The third level in the ME time frame is the machine's net production time (T_net). This is the time the machine takes to produce the finished products based on its capacity and capability as initially designed. The determination is based on the product cycle time as specified and recorded in the process manual and product specification. Thus the calculation can be carried out by multiplying the theoretical cycle time (T_tc) of one product by the number of products processed (α) by the machine:

T_net = T_tc × α (4)

The losses experienced in the net production time are performance losses, such as idling and minor stoppages caused by poor machine conditions. The construction of the ME measure is undertaken using historical data for the availability and performance elements. The data required for the measurement can be collected on a daily basis by the machine operators, and the actual machine performance can be calculated at the end of the day. The percentage for the 'world class performance' availability element is considered to be 90% [27]. The percentage for world class OEE is set at 85% [27]; the value is similar for ME. The discussion of how to interpret the results is provided in Phase IV: Maintenance performance level assessment.
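A minimal numeric sketch of Equations 2-4 and the resulting ME follows. The input values are illustrative, and it assumes, as in OEE with the quality element removed, that ME is the product of the availability and performance elements:

```python
# Illustrative ME time-frame calculation (Equations 2-4); all inputs are
# hypothetical, and ME is taken as availability x performance.
working_days = 22            # delta: working days in the observation month
daily_minutes = 16 * 60      # planned daily production time, in minutes
t_plan = working_days * daily_minutes            # Equation 2

t_updt = 1200.0              # unplanned downtime (breakdowns), minutes
t_pdt = 800.0                # planned downtime (PM, trials, changeovers)
t_act = t_plan - (t_updt + t_pdt)                # Equation 3

t_tc = 0.8                   # theoretical cycle time per product, minutes
alpha = 20000                # number of products processed
t_net = t_tc * alpha                             # Equation 4

availability = t_act / t_plan                    # Aeff
performance = t_net / t_act                      # Peff
me = availability * performance                  # machine effectiveness

print(f"Aeff = {availability:.1%}, Peff = {performance:.1%}, "
      f"ME = {me:.1%} (world class: 85%)")
```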
2.3.2 Step 2: Reliability calculation
In the previous step, maintenance performance is gauged based on machine performance and capability to work in the operation system. The measurement in this step is based on the reliability principle, defined as the ability of a machine to perform a specified function, without failure, over a given production time [28]. The gauging system is based on the assumption that conducting good maintenance activities results in a more reliable machine. Machine failures typically follow the pattern portrayed by the bathtub curve, the graphical representation of the reliability principle shown in Figure 5. It shows the three stages of failure rates usually experienced by a machine: infant mortality, normal or useful life, and the end-of-life wear-out stage [29,30]. This research gauges ME during the machine's useful life by calculating the failures resulting from ineffective maintenance [31]. A machine deteriorates relative to usage and age; thus, reliability during the usage life represents the prevention of machine failure by performing effective maintenance [32]. The reliability computation is carried out by calculating the number of failure occurrences for the failures identified and analysed in Phase II, whose final results are expressed as RPNs. The failure modes with the highest RPN values are chosen as the critical failures of the machine. The list of failures can be recorded, as shown in Table 3.
During this step, the number of failures or failure rate (λ) can be calculated as

λ = f(t) / T_plan

where f(t) is the number of failure occurrences collected from the historical record during T_plan. However, the result of the failure rate calculation is a plain number, which cannot be interpreted directly as a maintenance performance level. The solution suggested in the model is therefore to convert the number into a percentage: the total value of λ is recorded in the last row of Table 3 and is used to calculate the failure ratio of each failure type using Equation 9, i.e. each failure type's λ divided by the total λ, expressed as a percentage. Of the six failure types in Table 3, the highest failure percentage is taken to depict the machine reliability that will be interpreted in the final phase. The process of interpreting the failure ratio is described in Phase IV: Maintenance performance level assessment.
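The reliability step can be sketched the same way. The code below assumes the reading λ = f(t)/T_plan for the failure rate and treats Equation 9 as each failure type's share of the total rate; the failure modes and counts are hypothetical:

```python
# Failure rate and failure ratio sketch; counts and modes are hypothetical.
failure_counts = {           # f(t): occurrences recorded during T_plan
    "wire clamp contamination": 81,
    "capillary wear": 60,
    "EFO misfire": 44,
}
t_plan = 21120.0             # total planned production time, minutes

rates = {mode: n / t_plan for mode, n in failure_counts.items()}   # lambda
total_rate = sum(rates.values())

# Equation 9: each failure type's share of the total failure rate, in %.
ratios = {mode: 100.0 * lam / total_rate for mode, lam in rates.items()}
worst = max(ratios, key=ratios.get)
print(f"Machine reliability value: {ratios[worst]:.1f}% ({worst})")
```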
2.4 Phase IV: Assessment of maintenance performance level
The completion of the previous phases yields the percentages of ME and reliability. A medium is needed to match the scores for machine performance with maintenance performance. A literature review was carried out to identify a rating system that can be used to convert machine performance to maintenance performance. From the review, the health index (HI) was determined to be the most suitable method. The concept of the HI is commonly applied in the rating of power transformers [33,34]. This index represents a practical method for quantifying the results of operation observations, field inspections, and site testing into an objective quantitative index that represents the overall condition of a machine. The HI is developed at five levels of maintenance performance, with Level 1 standing for very good performance and Level 5 for poor performance (Table 4).
In accordance with the OEE method, any ME percentage below 85% is considered ineffective and should be further improved. Thus, Level 1 for machine performance is set at 85% and above. The remaining percentages are divided into four groups. Any machine with an ME value below 24% is considered to be at Level 5 and requires immediate risk or failure assessment; as a result, the machine needs to be replaced or subjected to the maintenance activities identified by the FMEA.
The HI for the reliability principle is based on the failure ratio, i.e. the percentage of failures that occur in the critical machine. Any failure ratio under 5% is considered very good, and is allocated Level 1 in maintenance performance. This indicates that the failure with the highest RPN value occurs only once in a while, and is addressed by maintenance activities. Any performance level between Levels 2 and 5, however, should be further analysed and improved.
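The HI lookup of Phase IV reduces to two threshold functions. In the sketch below, only the cut-offs stated in the text (ME >= 85% for Level 1, ME < 24% for Level 5, failure ratio < 5% for Level 1) come from the paper; the intermediate band edges are assumptions standing in for Table 4, chosen so that the case-study values (41.5% and 16.8%) reproduce Levels 4 and 3:

```python
# Phase IV sketch: map ME and failure ratio onto HI levels 1 (very good)
# to 5 (poor). Intermediate band edges are assumed, not taken from Table 4.
def hi_level_me(me_percent: float) -> int:
    if me_percent >= 85:
        return 1
    if me_percent >= 65:     # assumed band edge
        return 2
    if me_percent >= 45:     # assumed band edge
        return 3
    if me_percent >= 24:     # Level 5 cut-off from the text
        return 4
    return 5

def hi_level_reliability(ratio_percent: float) -> int:
    if ratio_percent < 5:    # Level 1 cut-off from the text
        return 1
    if ratio_percent < 10:   # assumed band edge
        return 2
    if ratio_percent < 20:   # assumed band edge
        return 3
    if ratio_percent < 40:   # assumed band edge
        return 4
    return 5

print(hi_level_me(41.5), hi_level_reliability(16.8))  # case study: 4 3
```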
IMPLEMENTATION AND RESULTS
The model was implemented in a semiconductor company located in northern Malaysia. The illustrative case is a company that does business in the assembly and testing of leaded semiconductor packages. Operating as a subcontractor, it offers different process packages for products manufactured with integrated circuit production based on customer specifications (packages). The two main sections in the company are front of line (FOL) and end of line (EOL), as shown in Figure 6.

3.1 Phase I: Machine selection using a Pareto chart
The EPR model was implemented in FOL. Six processes in FOL use machines, and this is where the loss occurrence analysis was conducted. By referring to the historical and failure records, the number of loss occurrences (l) for the three big losses was calculated. To obtain an accurate analysis, we compiled a month of data. The Pareto chart was developed as in Figure 7. The X-axis of the chart represents the six processes in FOL; the left Y-axis is assigned to the number of loss occurrences, while the right Y-axis shows the cumulative percentages.
Using Pareto analysis, we determined that the machines at the wire bond process have the highest number of losses. This process also exhibits the highest cycle time in production, with frequent machine breakdowns leading to low productivity rates.
3.2 Phase II: Failure analysis using FMEA
There are 140 machines in the wire bond process, so a large amount of time and many human resources would be needed to collect data from all of them, which also complicates the data analysis. However, choosing only one machine is impractical, because the data would be inadequate and inaccurate. The company is on its way to adopting a new technology, known as the copper wire bonding process, to broaden its market niche, and the process engineers were concerned by the many unknown failures that occurred during the implementation of the new process. The analysis therefore focused on the machines running the copper wire bonding process. The FBD for the copper wire bonding machine is illustrated in Figure 8.
Subsequently, the failure modes, their effects and causes, and the current maintenance activities for the machine components were analysed. The final results for the failure modes with the highest and lowest RPN ranges are tabulated in Table 5, along with the S, O, D, and RPN ratings.
3.3 Phase III: Machine performance analysis using OEE and the reliability principle
3.3.1 Step 1: Calculation of machine effectiveness
Phase III of the EPR model was initiated by identifying the three big losses listed in Table 1. The information was taken from the daily maintenance record. All these losses were recorded as downtime during operating time. From the data analysis, 40 types of failures were identified. The downtimes in the log sheet were recorded by failure type, such as broken wires, machine downtimes, or insufficient preventive maintenance. Once identified, each failure was assigned to one of the three big loss groups. Failure types such as quality assurance (QA) buy-off, under engineering, under vendor, awaiting QA buy-off, and awaiting material were omitted from the segregation process because they are not related to maintenance performance. The results for the other 35 types of failures are shown in Table 6. For the timeline, the company followed the proposed ME time frame. The machine downtime losses identified earlier were grouped into two categories: planned (PDT) and unplanned (UPDT) downtime. The failures under the three big losses were considered UPDT, whereas failures segregated under 'other' were considered PDT.
Using the values of T_plan, T_act, and T_net, the process continued with the calculation of availability (Aeff), performance (Peff), and finally machine effectiveness. All results obtained for the 14 packages at the copper wire bonding process are shown in Table 7, and Figure 9 is the graphical representation of Aeff, Peff, and ME. Table 7 and Figure 9 show that the availability value across all packages is 90.6%. This indicates that the machines are operated according to T_plan; the difference between T_plan and T_act is low because the machines rarely have any major breakdowns. It also indicates that maintenance activities, such as machine setup and adjustment, are conducted effectively. However, the machines in the copper wire bonding process have extremely low performance effectiveness, with an average of 45.6%. This result shows that the machines are frequently idle and experience minor stoppages during operation. The average ME is 41.5%, which is low compared with the world class mark of 85%.
3.3.2 Step 2: Calculation of machine reliability
The second step in Phase III was calculating machine reliability. This reliability analysis investigates machine performance based on the machine's resistance to failure and breakdown. For this purpose, this step used the data collected in the previous phase, in which information was gained using the FMEA approach and failure modes with high RPN values were selected. From Phase II as implemented in the company, the range of RPN values is large: between 4 and 300 points. The reliability analysis was conducted only on failure modes with an RPN value of more than 200 points, because of the low risk of the remaining failure modes. The decision was triggered by the understanding that maintenance conducted on risky failure modes is important, because these failures may cause major machine breakdowns. The selected failure modes are provided in Table 8. The reliability analysis was initiated by collecting the failure occurrences during the entire T_plan for processing copper wire bonding packages. These historical data were used to count the failure frequency or failure rate (λ) of all 10 critical failure modes at the machines. However, because T_plan varies across the 14 packages, λ during this phase was collected over the entire production time in which all packages in the copper wire bonding process were produced.
The failure modes and their λ values are listed in Table 8. The total number of failure occurrences is 482, with the highest rate at component 5.0: 81 occurrences of contamination build-up in the wire clamp. The contamination affects wire bonding quality and causes the component to produce inconsistent looping and strained wires. It is usually caused by the contamination of the copper wire during the oxidation process. The failure ratio of each failure mode was then calculated, and the highest ratio was selected as the reliability value of the machine analysed using the EPR model. Machine reliability was calculated at 16.8%.
3.4 Phase IV: Validation of maintenance performance level assessment
The process began with the analysis of the results of the ME and reliability calculations. The main objective of applying relevant performance measures is to detect deviations in the conditions of the production and maintenance processes so that the required actions can be implemented at an early stage with fewer resources such as time, labour, and cost. Furthermore, the analysis and diagnosis of the deviations of the performance measures yield better results when they are associated with identifying the root cause of the changes. The recommended action will help avoid failure re-occurrence.
The average of the machine effectiveness percentages was determined. The average machine effectiveness in the copper wire bonding process is 41.5%, which places maintenance performance at Level 4, or poor performance. The suggested action for this level is to begin planning the process for replacing or rebuilding the machine, considering the risks and consequences of the failures. The same process was also applied to the results from the reliability analysis. The failure rate exhibited by the machine was calculated and then matched with the maintenance performance level in the HI. The failure ratio at copper wire bonding is 16.8%. When matched with the HI, this corresponds to a maintenance performance of Level 3, which is a 'fair' maintenance performance level.
ACTIONS TAKEN BY THE COMPANY
The company studied in this research agreed that the machines in the copper wire bonding process lacked performance efficiency, with performance rates much lower than anticipated. The company also realised that many machines did not comply with the theoretical cycle time that had been set, because many unplanned breakdowns occurred during operating hours. These unplanned breakdowns usually resulted from idling and minor stoppages, which were handled with minor maintenance activities conducted by the operators attending the machines.
Machine reliability is satisfactory, but the maintenance activities should be periodically monitored to ensure effective performance. The purpose of this model is to gauge maintenance performance levels based on machine effectiveness and reliability. Maintenance plays a key role in ensuring that the company's wire bonding machines and other equipment perform their required functions during production. The critical point is that many practitioners, such as process engineers and maintenance engineers, have no references to guide them in ensuring that the process runs smoothly in the production line, where they have no ability to eliminate failures by modifying or improving the material properties of the wire. What they can do is set and maintain their machines properly, while addressing the failures from a maintenance perspective.
Based on the ME and reliability analysis results, we determined that the company has two levels of maintenance performance. The company opted to focus on the lowest level, because conducting a future improvement plan is easier when the company starts with the lowest level achieved. Thus the final step taken by the company was to conduct maintenance as described in the HI. The company usually practices PM and corrective maintenance (CM) in its operating system. PM is a proactive approach in which machines are monitored and maintained periodically to avoid failures throughout the manufacturing process. For each machine, PM is planned according to machine requirements, maintenance specifications, and design, and is conducted on a weekly, monthly, quarterly, or yearly basis. CM is conducted whenever a failure occurs. This is a reactive maintenance approach, and is regarded as unplanned downtime during operation.
Based on the HI, the company opted for continuous maintenance to ensure high overall machine performance as well as high machine reliability. The suggestions given in the EPR model are found to be applicable to this company. The model is confirmed to be general, yet suitable for the kind of maintenance system practised by the maintenance department.
CONCLUSION
An effective maintenance system is important, and requires monitoring and assessment so that an improvement plan can be efficiently formulated. This research takes its roots from various discussions and observations arising from practitioners' dilemma in measuring maintenance performance. The motivation of this research is the development of a simple, easy-to-use, and viable model for measuring maintenance performance. Performance measurement requires holistic and effective approaches to enable the achievement of reliable results. The practice of measuring maintenance performance in this research is discussed based on mechanical factors.
The combination of OEE and the reliability method is proposed in developing the EPR model. Machine effectiveness can be achieved by conducting effective maintenance. Numerous companies have already conducted measurements using OEE [4,14,16-21,25-27]. However, the measurement processes conducted were focused on short-term perspectives. Machines are measured on their availability, performance rate, and quality rate, as originally suggested by Nakajima [15] for the OEE model. The long-term effect (such as that of machine reliability) on maintenance performance was not measured. Martorell et al. [35] conducted a sensitivity study to investigate the effects of maintenance performance on machine survival functions and age. They found that when maintenance performance increases, the survival function and age of the machine also increase.
In addition, the asymptotic behaviour that represents machine reliability is reached at a faster rate. This is deemed to be a natural consequence of implementing maintenance activities that further improve machine condition, and it highlights the relationship between machine reliability and maintenance performance, as suggested in [26,28,31]. Thus combining OEE and the reliability principle serves as a viable approach to maintenance performance measurement.
The EPR model uses three big losses instead of six for machine performance measurement.
The new group of losses was developed based on discussions concerning the definitions and segregation of failures and downtimes exhibited by machines in manufacturing plants [32,36,37,38]. Only three losses are directly related to maintenance activities: the machine downtime caused by breakdown, setup and adjustment, and idling and minor stoppages are the losses considered in maintenance performance measurement. These losses can be repaired and improved by effective maintenance activities. The losses caused by reduced speed, process defects, and startup are omitted because these usually involve human error, material problems, or process requirements; they affect maintenance performance only indirectly.
Quality rate is omitted from the EPR model because it is not directly related to maintenance effectiveness. Quality is measured based on the number of products produced, yet product rejection is commonly related to human error during process setting or to material defects. Chakravarty et al. [38] measured machine effectiveness based on the availability and performance rate elements to obtain the actual maintenance performance level, without considering problems related to materials and human factors. Steege [39] also omitted the quality element in his research because of the tight interrelationship of machines, which makes the identification of machine-related product defects difficult.
In this paper, the structured technique of the EPR model for identifying problematic machines and planning improvement actions is presented. In the model, the selection phase involves choosing the most problematic machine using Pareto analysis. Subsequently, the model uses FMEA as a failure analysis method to identify failures and improvement actions in machine operation. Next, maintenance performance is gauged based on the machine's effectiveness and reliability in the operating plant. The results are interpreted as maintenance performance levels based on the HI. The case study shows that the model successfully measures maintenance performance based on machine effectiveness, and the analysis from the model has been used to improve the maintenance system employed in the company.
Figure 2: Example of a Pareto chart
Figure 6: Processes in the case study company
Figure 7: Pareto chart for loss occurrences at FOL
Figure 8: FBD of the wire bonding process
Figure 9: Graph of machine effectiveness
Table 1: Six big losses [16]
Table 6: Failure types at the copper wire bonding process
Breakdown: Temperature out of specification, EFO, nonstop non-stick on lead (NSOL), nonstop lifted bond, machine down, wire clamp problem, transducer problem, insufficient preventive maintenance, conversion, missing ball, club ball
Setup and adjustments: Alignment, indexing, X, Y, and Z alignment, setup, wire touching die, insufficient gas repair, sagging wire, strained wire, device changes, excessive loops
Idling and minor stoppages: Lifted metal, vacuum error, lifted bond, power surge, output lead frame jam, tail too short, non-stick on pad (NSOP), broken wire, NSOL
"year": 2011,
"sha1": "cdf4837f1631da12c4a8623fbed858678578df6f",
"oa_license": "CCBY",
"oa_url": "http://sajie.journals.ac.za/pub/article/download/222/207",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "cdf4837f1631da12c4a8623fbed858678578df6f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Disentangling the Spatio-Environmental Drivers of Human Settlement: An Eigenvector Based Variation Decomposition
The relative importance of deterministic and stochastic processes driving patterns of human settlement remains controversial. A main reason for this is that disentangling the drivers of distributions and geographic clustering at different spatial scales is not straightforward, and powerful analytical toolboxes able to deal with this type of data are largely deficient. Here we use a multivariate statistical framework, originally developed in community ecology, to infer the relative importance of spatial and environmental drivers of human settlement. Using Moran's eigenvector maps and a dataset of spatial variation in a set of relevant environmental variables, we applied a variation partitioning procedure based on redundancy analysis models to assess the relative importance of spatial and environmental processes explaining settlement patterns. We applied this method to an archaeological dataset covering a 15 km2 area in SW Turkey, spanning a time period of 8000 years from the Late Neolithic/Early Chalcolithic up to the Byzantine period. Variation partitioning revealed both significant unique and commonly explained effects of environmental and spatial variables. Land cover and water availability were the dominant environmental determinants of human settlement throughout the study period, supporting the theory of the presence of farming communities. Spatial clustering was mainly restricted to small spatial scales. Significant spatial clustering independent of environmental gradients was also detected, which can be indicative of expansion into unsuitable areas or of an unexpected absence in suitable areas caused by dispersal limitation. Integrating historic settlement patterns as additional predictor variables resulted in more explained variation, reflecting temporal autocorrelation in settlement locations.
Introduction
Spatial correlation (i.e. a geographic dependency of observations) is a fundamental attribute of the organization of biological systems [1,2], ranging from the growth of spatially discrete bacterial colonies in petri dishes, over the patchy structure of plant and animal populations, up to the spatial distribution of human settlements in the landscape [3]. Clustering of biological units such as individuals or populations can be driven by both environment-dependent and environment-independent processes. Environments are typically heterogeneous at different spatial scales, ranging from subtle differences in the physico-chemical environment of individual organisms up to large-scale variation in habitat structure across landscapes driven by historic geomorphological processes and broad environmental gradients such as climate and productivity. Certain areas also typically contain more limiting resources than others or better meet the requirements of certain species than others [4]. As a result, these localities are more likely to be colonized and to sustain populations. This type of deterministic distribution of organisms in space based on the quality of the environment is known as environmental filtering or species sorting [5]. This paradigm, however, assumes that there are no restrictions on the movements of organisms and that ultimately all suitable patches will be occupied. In reality, this assumption is often not met, as migration is often not that efficient and certain suitable patches will remain unoccupied: a pattern known as dispersal limitation [6]. Individuals and populations can also be unable or unwilling to migrate and resettle even though local conditions are no longer suitable [7]. While in animals and plants the inability to migrate is a main reason why suitable patches remain unoccupied, social, cultural, financial, and political barriers might play a similar role in humans [8]. Additionally, species sometimes expand into unsuitable areas where they would normally go extinct but nonetheless manage to persist as a result of the continuous arrival of new migrants from sources (source-sink dynamics) [9,10]. Examples of this in human societies include, for instance, villages or a city such as ancient Rome that cannot sustain themselves and would perish if resources and people were not continuously brought in from other sources through exchange or trade. Finally, historical patterns, such as the point of entry of a species or race in a region or the location of the first population or settlement, can have important consequences for the expansion of a species into an area, resulting in persisting founder effects [11,12]. Consequently, in the case of time series analyses it can be feasible to use historic distributions as predictors of later patterns (temporal autocorrelation [13]). Disentangling the relative importance of history, environmental and spatial processes explaining distribution patterns, however, remains an important challenge in ecology [14] as well as in anthropology and archaeology [15,16,17]. In general, analyses of ancient settlement patterns typically conclude by pointing out a single environmental variable of presumed importance [15] without rigorously assessing the explanatory value of different sets of variables explaining reality [17].
Different statistical approaches have been developed that can take into account the spatial structure of sites, such as classical isolation-by-distance analysis using intuitive but limited partial Mantel tests [18,19], or multivariate ordinations including spatial descriptors such as polynomials constructed from X and Y coordinates (trend surface analysis [20]). The recent development of advanced spatial descriptors (PCNM: principal coordinates of neighbour matrices; MEM: Moran's eigenvector maps [21,22,23]), however, provides important new opportunities, since these are more powerful at detecting spatial variation and allow identifying the scales at which spatial clustering occurs. This is important since aggregation might be beneficial at certain scales but detrimental at others. For instance, while people may be more inclined to settle close to other settlements because they can trade with them or because of the abundance of a certain resource, starting up a new settlement too close to existing settlements may be a bad choice, as it can lead to socio-economic problems such as resource competition, which may lead to conflict [24] or to the abandonment of sites and shifts in settlement patterns [15]. What is more, by including sets of predictor variables representing spatial variables as surrogates for migration-based processes, environmental variables indicative of environmental filtering, and historic factors in multivariate ordination models, it is possible to use a variation partitioning procedure [25] to separate the unique effects of different sets of variables as well as the variation that is commonly explained by different variation components. For instance, the effect of an environmental condition that is confined to a certain geographic area may be detected as spatially structured environment. As such, the method is able to distinguish whether spatial distribution patterns are the result of spatially clustered environmental conditions or of environment-independent processes such as source-sink dynamics or dispersal limitation [26]. While this method has been extensively used in ecology, its potential use in other disciplines such as the social sciences remains largely unexplored.
We applied this method to a dataset of archaeological artefacts collected in a region in southwestern Turkey and spanning a time period of almost 8000 years, from the Late Neolithic/Early Chalcolithic (6500-5500 BC) up to the Byzantine period (610-1300 AD). We investigate changes over time in the relative importance of history (the presence of earlier settlements), environmental, and spatial variables explaining settlement distributions, and discuss the relative importance of the different processes that can explain clustering of human populations at different spatial scales. For this the dataset was subdivided into six periods which could be reliably distinguished based on period-characteristic material culture: 1: Late Neolithic/Early Chalcolithic (6500-5500 BC); 2: Late Chalcolithic/Early Bronze Age I (4000-2600 BC); 3: Early Bronze Age II (2600-2300 BC); 4: Archaic-Classical/Hellenistic (750-200 BC); 5: Hellenistic (333-25 BC); 6: Byzantine (610-1300 AD).
Ethics Statement
The research permit was granted by the Turkish Ministry of Culture and Tourism, General Directorate of Cultural Properties and Museum. All necessary permits were obtained for the described study, which complied with all relevant regulations.
Study Area
The research area consists of the southern part of the Burdur plain, which is located in the Turkish Lake District and surrounded by the western Taurus Mountains (Fig. 1). It is situated in a tectonic 'graben' system [27,28], of which the central part is occupied by Lake Burdur. The level of this lake has fluctuated considerably during the lake's history, however. Around 20,000 BP the lake reached its highest level, and it has declined ever since [29]. The retreating lake resulted in a flat plain, and the lacustrine deposits provided fertile soils suitable for agriculture. Two rivers drain the southern part of the Burdur plain, the Duger Çayı and the Boz Çayı. From an archaeological point of view, the Burdur plain is considered an important area. Previous excavations at Hacilar [30] and Kuruçay Höyük [31,32] and the ongoing excavation of the University of Istanbul at Hacilar Büyük Höyük [33] have revealed unique information on the Late Neolithic, Chalcolithic, and Early Bronze Age periods in Anatolia. These excavations make the Burdur plain one of the best-studied regions of Anatolia for these time periods and even beyond. However, little is known about other possible settlements in the vicinity of these excavated sites. To remedy this, the Sagalassos Archaeological Research Project started a series of intensive survey seasons in the Burdur plain in 2010, which resulted in the discovery of hamlets and farmsteads dating from Late Prehistoric to Ottoman times [34,35].
Data Collection
The area was surveyed by a team of researchers walking transects of 50 × 1 m spaced 20 m apart and collecting all manmade artefacts predating the 1920s in the landscape. For the present analysis, the study area was divided into a regular grid of 9986 cells of 90 × 90 m. Based on GPS coordinates, artefacts collected in transects were assigned to corresponding grid cells in the dataset. From the collected artefacts, only those that could be reliably attributed to a certain time period were retained for further analyses. Although absolute synchronicity of sites cannot be attested via the relative dating of archaeological survey, the fact that identical pottery fabrics, probably stemming from a single production center, were used on sites from the same time period suggests synchronicity (unpublished data). An overview of the different artefacts and the time periods to which they correspond is provided in Table S1. In order to improve the resolution of the dataset, it was decided not to simply attribute artefacts to certain grid cells based on the exact geographic location falling within a certain cell (all-or-nothing principle). Instead, we used a simple weighting method based on the distance of each artefact to the four nearest grid cell centroids (Eq. 1). In this formula, A_xy represents the artefact abundance assigned to a certain grid cell A with centroid coordinates x and y, N is the number of artefacts found in this cell, a_i is the Euclidean distance of an artefact in cell A to the centroid of that cell, and b_i, c_i, and d_i are the Euclidean distances to the other three nearest cell centroids. As such, the artefact abundance (calculated separately for artefacts from each time period) of each grid cell is no longer an integer number, but the column sum still correctly represents the total number of artefacts collected in the area. This approach can be considered an elegant way to smoothen the response data, reducing the importance of the exact location of individual artefacts, which often will have been moved by localized disturbances such as plowing. Using this approach, a site × artefact abundance response data matrix was constructed with six columns corresponding to the six time periods.
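The printed form of Eq. 1 did not survive reproduction here, so the following Python sketch shows one plausible reading of the described weighting: each artefact distributes one unit of abundance over its four nearest grid-cell centroids, with weights inversely proportional to the distances a, b, c, and d, so that the column sums still equal the total number of artefacts. The function name and the inverse-distance form are assumptions, not the authors' published formula.

```python
import numpy as np

# Hypothetical implementation of the described weighting: each artefact
# spreads one unit of abundance over its four nearest cell centroids,
# inversely to the distances a, b, c, d, so column sums equal the total.
def distribute_artefacts(artefact_xy, centroid_xy):
    """artefact_xy: (n, 2) coordinates; centroid_xy: (m, 2) coordinates."""
    abundance = np.zeros(len(centroid_xy))
    for p in artefact_xy:
        d = np.hypot(*(centroid_xy - p).T)       # distances to all centroids
        nearest4 = np.argsort(d)[:4]             # a, b, c, d in the text
        w = 1.0 / np.maximum(d[nearest4], 1e-9)  # inverse-distance weights
        abundance[nearest4] += w / w.sum()       # each artefact sums to 1
    return abundance

# Note: abundance.sum() equals the number of artefacts passed in.
```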
Environmental Data
The environmental properties of each cell (the unit of the analysis) were attributed to the cell's centroid and included elevation (m asl), distances to the nearest river, spring, and hill, and the percentages of different land cover types in radii of one and four km. The land cover categories considered included hills, badlands, valley floor, swamp, and lake, as reconstructed from available GIS data layers. A final variable represents the estimated area from which an object 1 m high is visible to an observer 1.50 m tall, assuming a maximum visibility of 10 km. A detailed overview of the 15 environmental variables considered in this study is presented in Table S2.
Data Analysis
The distribution of archaeological artefacts from each of the six considered time periods was analysed using a variation partitioning procedure [25,36] based on redundancy analysis (RDA) models [23], with significances tested using 999 Monte Carlo permutations. This procedure decomposes the total variation in the response dataset into a pure spatial component (S|E), a pure environmental component (E|S), a component of spatially structured environmental variation (E∩S), and the unexplained variation. Only significant predictor variables identified using a forward selection procedure [37] based on the adjusted R2 stopping criterion were retained in the constructed models [38]. Artefact abundances were Hellinger transformed [39] prior to the analyses (divide the abundances by the row sum and take the square root of the resulting ratios).
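The analyses themselves were run in R (vegan and related packages; see below), but the two core operations are compact enough to sketch in a language-agnostic way. The following Python/numpy sketch of the Hellinger transformation and of adjusted-R2 variation partitioning mimics the logic of vegan's decostand and varpart; it is an illustration under those assumptions, not the implementation used in the study.

```python
import numpy as np

def hellinger(Y):
    """Divide abundances by row sums and take square roots."""
    Y = np.asarray(Y, dtype=float)
    return np.sqrt(Y / Y.sum(axis=1, keepdims=True))

def adj_r2(Y, X):
    """Adjusted redundancy statistic of an RDA of Y on X (least squares)."""
    n, p = X.shape
    Yc = Y - Y.mean(axis=0)
    X1 = np.column_stack([np.ones(n), X])
    fitted = X1 @ np.linalg.lstsq(X1, Yc, rcond=None)[0]
    r2 = (fitted ** 2).sum() / (Yc ** 2).sum()
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def varpart(Y, E, S):
    """Decompose explained variation into pure and shared fractions."""
    ab, bc = adj_r2(Y, E), adj_r2(Y, S)
    abc = adj_r2(Y, np.hstack([E, S]))
    return {"E|S": abc - bc, "S|E": abc - ab,
            "E and S": ab + bc - abc, "unexplained": 1 - abc}
```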
In order to analyse the importance of spatial autocorrelation at different spatial scales, a set of Moran's eigenvector maps (MEM's) was constructed [21,40]. In a regular matrix of sites, these variables are wave functions with different wavelengths corresponding to spatial correlation at different spatial scales [22,40] (Fig. S1). Only MEM's with significant positive spatial autocorrelation, as calculated using Moran's I [41], were used in the analyses. Forward selection was performed on this set of eigenfunctions, and only the significant MEM variables retained in the constructed models for each time period were included. Analogous to the variation partitioning procedure outlined above, spatio-environmental covariation was corrected for by including significant environmental variables as covariables in these analyses [23].
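For readers unfamiliar with these eigenfunctions, the sketch below outlines the distance-based (PCNM-style) recipe for building MEM-like spatial variables from site coordinates: truncate the inter-site distance matrix, double-centre it, and keep the eigenvectors. The truncation surrogate and the use of positive eigenvalues as a stand-in for the Moran's I test are simplifying assumptions; the study itself used the R package PCNM.

```python
import numpy as np

def mem_variables(coords):
    """PCNM-style spatial eigenvectors from (n, 2) site coordinates."""
    diffs = coords[:, None, :] - coords[None, :, :]
    d = np.hypot(diffs[..., 0], diffs[..., 1])
    # Simple truncation surrogate: the smallest non-zero distance
    # (the classic choice is the longest minimum-spanning-tree edge).
    trunc = d[d > 0].min()
    dt = np.where(d <= trunc, d, 4 * trunc)      # truncated distances
    a = -0.5 * dt ** 2
    n = len(coords)
    h = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    g = h @ a @ h                                # double-centred matrix
    vals, vecs = np.linalg.eigh(g)               # ascending eigenvalues
    keep = vals > 1e-8                           # positive eigenvalues only
    return vecs[:, keep][:, ::-1], vals[keep][::-1]
```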
Secondly, in order to assess the potential importance of settlements existing during the previous time period as determinants of current settlement patterns, the artefact abundances in time period T−1 were included as an additional variation component, history [H], besides space [S] and environment [E] in a second set of variation partitioning analyses.
Finally, by analysing the fit of all MEM variables corresponding to significant positive autocorrelation with the response variables of interest, it is possible to investigate at which spatial scales spatial clustering occurs in the dataset. Frequency distributions of the wavelengths (λ) of the MEM variables retained in RDA models after forward selection were generated in order to assess variation in the scales of spatial clustering that are relevant during different time periods. Wavelength is expressed in km and, for a regular grid, can be interpreted as the distance between the centres of neighbouring clusters. To test the ability of MEM's with increasing wavelengths to explain the observed variation in artefact abundances, the amount of variation (adjusted R2) explained by pure spatial variation (S|E) was calculated for consecutive sets of 10 MEM's corresponding to increasing spatial scales. Additionally, the fitted site scores of the dominant first canonical axis of RDA models explaining the abundance of archaeological artefacts using MEM's are plotted to highlight areas where artefact distributions are successfully predicted by MEM predictors.
All analyses were carried out in R version 2.15.0 (R Development Core Team 2012) using the packages PCNM (MEM variables), AEM (Moran's I spatial autocorrelation), vegan (Hellinger transformations, RDA, variation partitioning) and packfor (forward selection).
Results
The first 5249 MEM's had positive eigenvalues. Of these, only 1215 were characterized by significant positive spatial autocorrelation based on Moran's I and were consequently retained in further analyses. Overall, RDA models predicted a substantial fraction of the observed variation, ranging between 8 and 26%. Variation partitioning revealed that both spatial and environmental variables explained a significant proportion of variation in our datasets, even after correction for collinearity with other variation components. Considering artefact abundance during the previous time period as a separate historical variation component [H] generally resulted in better models explaining more variation, except for the Archaic-Classical/Hellenistic time period (Table 1, Fig. 2).
In general, a higher abundance of artefacts was found in valleys and in closer proximity to springs and hills. The prevalence of swamp, in turn, had a negative effect on artefact abundance (Table 2). Significant spatial variables retained in our models included latitude and longitude, in general supporting a higher abundance of artefacts in the south-western corner of the area, which contains many hills and springs. MEM's fitted a broad range of scales of spatial clustering, with wavelengths (λ) ranging from 150 m to almost 90 km, and there were no consistent differences between the considered time periods (Table S3). Overall, however, most MEM's retained after forward selection corresponded to clustering with relatively small inter-cluster distances, varying between 100 m and 5 km (Fig. 3). Despite the higher abundance of small-scale MEM's (λ < 2 km) retained after forward selection, partial redundancy analyses correcting for significant environmental variation showed that sets of the larger-scale MEM's retained after forward selection typically explained more variation than sets of smaller-scale MEM's, and this effect was most pronounced in the later time periods (Fig. 3). Similarly, maps showing the fit of the first canonical axis to artefact abundance data also show larger spatial clusters of cells where artefact abundances are adequately predicted by MEM variables in the later time periods (Fig. 4).
Discussion
The results illustrate how patterns of human settlement can be decomposed into spatial correlation at different spatial scales. Part of this correlation was shown to be caused by spatially structured environmental conditions, while most spatial correlation was independent of the environmental gradients considered in our analyses. As outlined in detail below, these results reflect a number of general processes generating spatial structure in human societies. Additionally, despite the incompleteness of the archaeological record, these relatively simple models considering a relatively small set of environmental variables already explained up to 26% of the observed variation in artefact abundances. This indicates that human settlement in this region was characterized by a relatively strong deterministic component.
Environmental Filtering
Our results, first of all, showed clear significant links between settlement patterns and local environmental conditions. In nature, different processes can result in a close match between environmental gradients and species distributions. For many organisms that cannot control their own migration, such as plants and many invertebrates, environmental filtering occurs as a result of random dispersal followed by differential establishment success [42]. Other groups, and particularly higher organisms, typically have better dispersal abilities and more complex nervous systems enabling them to use environmental cues to determine whether a locality is suitable for settlement: a process known as active habitat selection [43]. The combination of superior dispersal ability and the intellectual ability to make rational decisions means that the latter will probably be the most prominent process driving differential human settlement in environmentally heterogeneous landscapes, as observed in this study. At larger spatial scales, and when suitable areas are in scarce supply, however, it is likely that human settlement may also reflect random dispersal followed by differential mortality. A potential example of this could be the colonisation of islands in the isolated parts of the Pacific Ocean by seafaring Polynesian people [44]. Examples of environmental filtering emerging from this study include preferential settlement close to water sources (springs, rivers, lakes), hilltop lookouts, and lowland valleys, as also reported for settlements in other regions of the Near East and the Aegean [45]. The availability of water appears to have been important during the entire history of the area, which is logical, as water is indispensable for settlements, and especially for (early) farming communities and their choice of settlement location [46]. In contrast, a significant effect of the abundance of lowland valley around settlements was only detected from the Late Chalcolithic-Early Bronze Age onwards. There are solid indications that agriculture was already practiced in the area for about 1000 years prior to this time period [47,48]. Yet it is very likely that larger population sizes and an increase in the number of settlements from that time onward explain why the predominant settlement in valleys is only detected in our dataset since the Late Chalcolithic-Early Bronze Age, due to higher statistical power. The abundance of good land for agriculture, however, may not have been the only motivation for settling in valleys, as other advantages, such as the availability of resources like good hunting grounds, wild plants, and clay sources [49,50], or mobility and communication [51], may have been important. Areas dominated by badlands and marshland seem to have been avoided. While swamps may offer good hunting grounds for waterfowl, as seen during the Neolithic period of Çatalhöyük [52], and could be used for collecting wetland plants [53], this type of land is unsuitable for agriculture and has the additional disadvantage that it can act as an important source of disease vectors such as mosquitoes transmitting malaria and other diseases [54,55].
Spatial Processes
Besides environmental filtering, our analyses show clear evidence of spatial clustering independent of the considered environmental variables. In fact, the majority of explained variation was described by pure spatial variation. A similar observation was made for the Early Neolithic settlement pattern in Thessaly, Greece [56]. This pattern may arise from different processes. First of all, it can be an important indication of expansion of settlements into areas which, according to our models, would be deemed suboptimal or even unsuitable in terms of environmental conditions linked to resources. It is possible that these settled areas were not self-sustaining and persisted by importing resources from elsewhere. This, however, is contradicted by current ideas on human settlements at that time, which are assumed to be self-supporting [57]. The presence of sites under suboptimal environmental conditions might point towards a non-domestic/non-agricultural function of these sites as sanctuaries [58], cemeteries [59], artisanal workshops [60] or even temporary camps for transhumant herdsmen [61]. Secondly, individuals might also choose not to settle in presumed optimal areas close to certain critical resources if this means that they are further away from other resources. Under such conditions it could be more efficient to settle in apparently suboptimal localities that are situated at reasonable distances from a set of resources [50]. Thirdly, spatial grouping of settlements can be promoted because of inherent benefits such as increased protection, better risk management in times of crop failure when communities can rely on one another [62], improved interactions and exchange of knowledge and goods [63] and, lastly, social advantages such as the establishment of marriage networks [64,65] and the opportunity to engage in social gatherings and communal events like feasting [66,67]. Finally, spatial clustering may be associated with environmental characteristics that were not or could not be considered in this study. This scenario is likely, since a considerable amount of environmental information relevant to human settlement during the considered time periods cannot be inferred from recent observations. Current knowledge of historic conditions is notoriously incomplete and, as a result, it is possible that, for instance, environmental conditions that used to be spatially clustered in the past will now be described by spatial variables and detected as pure spatial variation. Spatial variables were selected describing both large and small scale correlation. In general, artefacts were typically clustered at scales smaller than 1-2 km, with 75% of the MEM spatial descriptors that were retained after forward selection corresponding to correlation at scales <2 km. This indicates that typical intersettlement distances were probably larger than this threshold, ranging anywhere between 1 and several km, a distance which matches well with commonly accepted estimates of the distance that can be covered while walking during one day [68]. Nonetheless, sequential analysis of the variation in the response dataset explained by blocks of 10 MEMs corresponding to increasing spatial scales revealed higher levels of explained variation by large scale MEMs (typically >20 km), even when correcting for environmental variation.
Aggregation at this scale could result from different processes, including the outward spread of a population from a certain centre of origin [12] or the presence of certain unknown resources, as discussed above.
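Although the paper does not publish its analysis scripts, the MEM workflow just described (eigenvector generation, a global test, then forward selection) can be sketched in R, the standard environment for these methods; the example below is a minimal, hypothetical illustration on simulated coordinates and artefact counts, using the vegan and adespatial packages.

```r
# Minimal sketch of the MEM workflow; all data are simulated stand-ins.
library(vegan)      # rda(), anova.cca(), RsquareAdj(), decostand()
library(adespatial) # dbmem(), forward.sel()

set.seed(1)
xy <- cbind(x = runif(50, 0, 30), y = runif(50, 0, 30))            # site coordinates (km)
Y  <- decostand(matrix(rpois(50 * 6, 3), nrow = 50), "hellinger")  # toy artefact abundances

# 1. Moran's eigenvector maps, keeping only positive spatial autocorrelation
mems <- as.data.frame(dbmem(xy, MEM.autocor = "positive"))

# 2. Global significance test before selection (guards against inflated type I error)
full_rda <- rda(Y ~ ., data = mems)
anova(full_rda, permutations = 999)

# 3. Forward selection with the adjusted-R2 stopping rule
sel <- forward.sel(Y, as.matrix(mems),
                   adjR2thresh = RsquareAdj(full_rda)$adj.r.squared)
sel$variables  # retained MEMs: low ranks = broad scales, high ranks = fine scales
```

The rank of each retained MEM maps onto a spatial wavelength, which is how contrasts such as the <2 km versus >20 km scales reported above would be read off.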
Spatio-environmental Covariation
While we found unique effects of environmental conditions independent of spatial variables [E], as well as unique effects of spatial correlation independent of similarities in environmental conditions [S], some variation was explained by spatio-environmental covariation, i.e., the variation that is explained jointly by space and environment [S∩E]. As a result, one cannot unequivocally attribute this explained variation to spatial or environmental processes. Often, environmental conditions themselves will be clustered in space, resulting in a correlation between environmental similarity and spatial proximity, which is captured by the [S∩E] component in the analyses. In the current study this component seems to be less important than pure spatial variation but more important than pure environmental variation. Such a pattern was anticipated, since particularly at larger spatial scales environmentally similar conditions tend to be spatially clustered [26].
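The decomposition into [E], [S] and [S∩E] fractions is what vegan's varpart computes from adjusted R-square values; the following self-contained R sketch shows the mechanics on invented data, with env and mem_sel standing in for the real environmental table and the forward-selected MEMs.

```r
# Variation partitioning sketch; Y, env and mem_sel are simulated stand-ins.
library(vegan)

set.seed(1)
Y       <- decostand(matrix(rpois(50 * 6, 3), nrow = 50), "hellinger")
env     <- data.frame(dist_spring = runif(50), pct_valley = runif(50))
mem_sel <- data.frame(MEM1 = rnorm(50), MEM2 = rnorm(50))

vp <- varpart(Y, env, mem_sel)  # fractions based on adjusted R2
vp$part$indfract                # [a] pure E, [b] joint S-and-E, [c] pure S, [d] residual
plot(vp, Xnames = c("Environment", "Space"))

# Permutation test of the pure environmental fraction [a] via partial RDA
anova(rda(Y, env, mem_sel), permutations = 999)
```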
Historic Factors
Overall, our results reflect not only spatial but also temporal autocorrelation. In the absence of strong environmental change or important demographic events such as disease outbreaks or wars, it is logical that suitable localities will remain settled throughout the history of an area. Additionally, social and ideological elements such as ancestral worship and location-bound ideologies [50,51] can contribute to the persistence of settlements in the same location. Indeed, artefact data suggested that settlement patterns during preceding time periods were generally a good indicator of settlement patterns in the different time periods considered in this study. What is more, taking historic patterns into account led to models that explained up to 6% more variation in settlement patterns than models that did not. For most considered time periods, history had a significant unique effect on settlement patterns, independent of both spatial and environmental variation. This does not hold true for the A-CH period. This discrepancy is due to a chronological gap in the dataset just before this period, as artefacts from the Middle to Late Bronze Ages as well as from the Early Iron Age were very scarce, probably reflecting a very low level of human activity in the area. Finally, since earlier settlement patterns are ultimately affected by space and environment (resulting in autocorrelation), it is no surprise that a substantial proportion of variation [E∩S∩H] was jointly explained by space, environment and history.
Stochasticity and Unexplained Variation
The large proportion of unexplained variation can be due to several factors. First of all, a high proportion of unexplained variation is typical for this type of multivariate analysis [69]. Not all settlements will have been detected, and not all environmental variables of potential concern, such as the proximity of important ancient sources of minerals such as clay or obsidian, could be quantified [60]. Secondly, the relatively low density of human populations at that time ensures that many areas which are suitable for settlement remained unsettled. As such, in terms of potential settlement, the dataset is highly unsaturated, with a disproportionately large number of empty cells and a very small number of occupied cells. Therefore, it is striking that despite a relatively small signal/noise ratio, important and generally highly statistically significant patterns emerge which may reflect different deterministic drivers of human settlement. Finally, besides noise, the unexplained fraction also includes all variation generated by stochastic processes as well as rational motivations of people independent of the spatial proxies for responsible processes considered in this study.
[Table 2. List of significant environmental and spatial predictor variables (e.g., Env. NEAR_SPRNG) retained after a forward selection procedure in RDA models explaining the abundance of archaeological artefacts from each of the six considered time periods: T1 NEO_ECH, T2 LCH_EBI, T3 EBII, T4 A-CH, T5 HELL, T6 BYZ.]
Perspectives
Overall, this study illustrates how an integration of historical, local and regional processes can contribute to a better understanding of patterns of human settlement and may generate novel hypotheses. Specifically for the studied region, this study showed that, although there are some changes, the dominant drivers of human settlement implicated by the studied correlates have remained much the same during the studied periods. One could argue that the inclusion of large numbers of MEMs gives a large weight to space in the models. However, since this issue is largely resolved by the use of adjusted R-square values [25], the method is suitable for comparative analyses. As such, if applied correctly, MEM-based variation partitioning provides a powerful tool for comparative analyses among different regions and even for large scale meta-analyses covering many datasets [69]. The MEM approach allows researchers to detect the presence of spatial structure in their datasets and identify the relevant spatial scales, enabling them to determine whether these can be attributed to environmental filtering or to environment-independent processes related to migration and history. Nonetheless, explaining patterns of clustering independent of environmental conditions in terms of responsible processes in particular remains an important challenge.
Figure S1. Examples of Moran's eigenvector maps corresponding to different spatial scales. Overview of 10 Moran's eigenvector maps (MEMs) illustrating the increasingly smaller spatial scales described by MEMs with increasing ranks. Red represents positive peaks of the spatial wave functions, while blue corresponds to negative dips. (TIF)
Table S3. Overview of different MEM variables retained in redundancy analysis models after forward selection for each of the seven considered time periods. Numbers correspond with the rank (R) of each of the generated MEMs with a positive Moran's I (range: 1-3264). λ corresponds to the wavelength of each MEM expressed in km. Smaller wavelengths correspond to increasingly smaller scales of spatial clustering. (DOCX) | 2016-05-04T20:20:58.661Z | 2013-07-02T00:00:00.000 | {
"year": 2013,
"sha1": "0632c20c3be9c7b031c8f9472913f0a53b834182",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0067726",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0632c20c3be9c7b031c8f9472913f0a53b834182",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
219311828 | pes2o/s2orc | v3-fos-license | Secondary Intention Healing After Functional Surgery for In situ or Minimally Invasive Nail Melanoma
Abstract is missing (Short communication).
Nail melanoma (NM) is a rare subtype of cutaneous melanoma arising in the nail unit (1,2). Previously, amputation was considered the treatment of choice regardless of disease stage (3). However, recent studies have shown that conservative surgery has excellent oncological, functional, and cosmetic outcomes for NM in situ (NMIS) or minimally invasive (Breslow thickness ≤ 0.5 mm) NM (MINM) (3,4). Functional surgery requires excision of the nail unit with at least 5-mm margins. Owing to the limited skin reservoir in the nail unit, surgical defects cannot be closed using primary closure. To date, various reconstructive methods, including local flap, free flap, full-thickness skin graft (FTSG), and secondary intention healing (SIH), have been reported.
SIH is a good method for NM, as it requires neither sophisticated reconstructive procedures nor loss of donor tissue (5,6). The main limitations of SIH are concerns about infection and the long healing time. However, to our knowledge, there are no detailed data in the literature about the recovery time and outcomes of SIH. This information is relevant to surgeons as well as to patients with NM facing impending surgery. Therefore, this study aimed to evaluate the healing time, functional and cosmetic outcomes, postoperative complications, and subjective patient satisfaction of SIH after conservative surgery for NM.
MATERIALS AND METHODS
Patients who underwent functional surgery followed by SIH for pathologically confirmed NMIS or MINM at our institution from 2015 to 2018 were included in the study. This study was approved by the institutional review board (IRB number 1807-174-963).
Total excision of the nail unit was performed with 5-mm free margins. The safety margin was calculated from the lateral nail folds, hyponychium, and nail matrix. When the Hutchinson sign was present, the margins were measured from the pigmentation. The total nail unit including the periosteum was excised, as the distance between the matrix or nail bed and the phalangeal bone is generally short (<1 or 2 mm). Extreme caution was applied during incision at the proximal margin to preserve the extensor hallucis longus tendon. After excision, paraffin tulle coated with chlorhexidine (Bactigras®; Smith & Nephew, London, UK) and multiple layers of sterile gauze were applied. Peha-haft® (Paul Hartmann, Heidenheim an der Brenz, Germany) was used for compressive dressing. Cephradine (500 mg, 4 times daily for 3-7 days) with either acetaminophen (650 mg, 3 times daily) or aceclofenac (100 mg, 2 times daily for ≥ 1 week) was prescribed to the patients. The dressings were changed 3 times weekly. After granulation tissue had completely covered the phalangeal bone, weekly dressing changes were performed at our department and the patients were educated about changing their dressings at home. After complete re-epithelialization, the patients stopped applying dressings and used Vaseline as an emollient.
To evaluate the healing process, photographs were taken weekly. During these visits, patients were assessed for early postoperative complications (infection, bleeding) and late complications (nail spicules, sensory change, limited extension, recurrence).
Time to coverage of the phalangeal bone by granulation tissue without bare areas (T1) and time to re-epithelialization (T2) were assessed. The functional outcome, cosmetic outcome, and subjective satisfaction were evaluated in the outpatient clinic during follow-up or through a phone call on 20 July 2019. Functional outcome was evaluated using the Quick-Disabilities of the Arm, Shoulder, and Hand (DASH) measure for finger lesions (score 0-100) and the Foot Function Index (FFI) for toe lesions (score 0-230). Both tools are objective and well validated for functional evaluation of the hand and foot (7,8). Cosmetic outcome was evaluated using the Vancouver Burn Scar Assessment Scale (VBSAS) (9). Patients were asked to rate their subjective global satisfaction as a numeric rating scale (NRS) score (range 0-10). After informing the patients that amputation was an alternative treatment option, they were asked to provide their satisfaction rating again.
Descriptive statistics (mean, standard deviation, median, range) were obtained. All statistical analyses were performed using SPSS 22.0 software (IBM, Armonk, NY, USA).
RESULTS
A total of 12 patients were evaluated. Eleven patients had NMIS and one patient (patient 9) had MINM (Breslow thickness 0.2 mm). Median age was 51 (range 28-69) years. Eight patients were women (75%). The median follow-up period was 19 (range 7-27) months. The patients' clinical information is detailed in Table SI1. After en-bloc nail excision, granulation tissue covered the entire phalangeal bone without bare areas at a mean of 4.2 ± 2.3 (range 2-9) weeks. Re-epithelialization was completed at a mean of 10.6 ± 2.8 (range 5-15) weeks (Fig. 1).
Quick-DASH scores were evaluated in 9 patients. Mean Quick-DASH score was 11.6 ± 7.1. The mean time of Quick-DASH evaluation was 15.5 ± 6.3 weeks postoperatively. Three patients reported more than mild difficulty for items 1, 2, 5, 6, and 8. FFI scores were evaluated in 3 patients. The mean time of FFI evaluation was 12.3 ± 0.6 weeks postoperatively. Two patients felt no discomfort resulting from the surgery in their daily life. The remaining patient gave 1 point for walking 4 blocks, 1 point for standing tip toe, and 2 points for walking fast. The mean total VBSAS score was 4.6 ± 1.3. The score for pliability was rather high (2.4 ± 0.8), whereas the other mean scores were < 1.0.
With respect to acute postoperative complications, bleeding over the compressive dressing was observed in one patient and was controlled by electrocauterization of the vessels and use of a compressive dressing. Tinea pedis was found in one case (case 3, at week 4) and was resolved using a topical antifungal agent. Concerning delayed complications, nail spicules occurred in 3 patients (25%), sensory change in 4 patients (33%), and extension limitation in 2 patients (17%). Local recurrence was detected 8 months after surgery in one case (8%) that initially had a wide, ill-defined Hutchinson sign. It was treated with re-excision.
The subjective global satisfaction with respect to the surgical outcome was high (mean NRS score 8.4 ± 1.0). The reassessed subjective global satisfaction after informing the patients that amputation was an alternative treatment option was higher (mean 9.7 ± 0.8) (p = 0.011, Wilcoxon signed-rank test).
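The paper reports this paired comparison from SPSS; for illustration, the same Wilcoxon signed-rank test can be run in R on hypothetical per-patient NRS scores (the values below are invented to roughly match the reported means of 8.4 and 9.7, not the study's actual data).

```r
# Hypothetical paired NRS satisfaction scores for 12 patients (invented data)
before <- c(8, 9, 8, 7, 9, 8, 10, 9, 9, 8, 7, 9)          # mean ~8.4, as reported
after  <- c(10, 10, 9, 9, 10, 10, 10, 9, 10, 10, 9, 10)   # mean ~9.7, as reported

# Wilcoxon signed-rank test for the paired before/after comparison
wilcox.test(after, before, paired = TRUE)
```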
DISCUSSION
These results suggest that SIH after conservative surgery for NM leads to acceptable re-epithelialization time as well as good functional and cosmetic outcomes without serious complications. Moreover, the patients reported high subjective satisfaction.
Various reconstructive surgeries for NM have been reported, such as local flaps, including the cross-finger flap, free flap, FTSG, and SIH (10-12). Except for SIH, the other techniques necessitate donor tissues. Therefore, subsequent surgical complications, such as graft/flap loss, can occur. In contrast, SIH has the advantages of simplified wound management, avoidance of sophisticated reconstructive procedures, no donor defect, more natural granulation tissue, and optimal cancer surveillance (13). In addition, SIH may provide a better cosmetic outcome with less hyperpigmentation than FTSG (6). A recent survey assessing patient outcome after digit-sparing conservative surgery of NM in situ revealed a high overall satisfaction score (14). Considering the mean DASH score of 31.3 after ray amputation and 21.7 after digit amputation (15), the current study revealed a better score, with a mean of 11.6 ± 7.1 after functional surgery with SIH.
This study has some limitations. First, the number of patients was small. Secondly, this study does not provide direct comparisons of the various surgical techniques; randomized controlled trials with large sample sizes are needed. Thirdly, novel materials, such as an artificial dermis to facilitate the healing process, were not used in the current study. Further studies are necessary to investigate whether the use of an artificial dermis expedites the regeneration process after total excision of the nail unit.
In conclusion, these data suggest that SIH is a good reconstructive method for the defect after conservative surgery for NM, with acceptable re-epithelialization time, excellent functional and cosmetic outcomes, and high patient satisfaction. | 2020-06-05T13:02:15.285Z | 2020-06-03T00:00:00.000 | {
"year": 2020,
"sha1": "7d2e1409f10b1a445163b23b167ccd323b9bf05c",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2340/00015555-3541",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6756e7966c555c952678ae913068ff793b3466b7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233210790 | pes2o/s2orc | v3-fos-license | Incorporation of feeding functional group information informs explanatory patterns of long-term population changes in fish assemblages
The objective of this study was to evaluate long-term trends of fish taxa in southern Lake Michigan while incorporating their functional roles, to improve our understanding of ecosystem-level changes that have occurred in the system over time. The approach used here highlights the ease of incorporating ecological mechanisms into population models so researchers can take full advantage of available long-term ecosystem information. Long-term studies of fish assemblages can be used to detect changes in community structure resulting from perturbations to aquatic systems, and these changes in fish assemblages can be better contextualized by grouping species according to functional groups grounded in niche theory. We hypothesized that describing the biological process based on partial pooling of information across functional groups would identify shifts in fish assemblages that coincide with major changes in the ecosystem (for this study, shifts in zooplankton abundance over time). Herein, we analyzed a long-term Lake Michigan fisheries dataset using a multi-species state-space modeling approach within a Bayesian framework. Our results suggested that the population growth rates of planktivores and benthic invertivores have been more variable than those of general invertivores over time, and that trends in planktivores can be partially explained by ecosystem changes in zooplankton abundance. Additional ecosystem parameters (e.g., primary production) should be incorporated into future iterations of this novel modeling concept.
INTRODUCTION
As aquatic habitat losses and alterations have accumulated over the past century, a growing number of fish species have either been extirpated from native areas or gone extinct (Burkhead, 2012). And while these taxonomic losses have occurred at an exceedingly high pace, particularly in freshwater fishes, such changes are difficult to characterize without long-term datasets describing these ecosystems (Lauer, Allen & McComish, 2004). This concept is highlighted within the Lake Michigan fish community, where a reduction in Mottled Sculpin and Johnny Darters coincided with an increase in Round Gobies, three trophically similar species that do not exhibit the same abundance trends (Lauer, Allen & McComish, 2004). Thus, treating each species independently ignores the similarities in abundance trends between Mottled Sculpin and Johnny Darter, while grouping all species together in the same functional group masks the different trajectories of all three species. In this case, taking an approach at either extreme (i.e., no pooling or complete pooling) can remove valuable information about species diversity. In contrast, an intermediate approach that shares information across species but also provides species-specific trends could be more appropriate. The approach outlined in this study is to incorporate functional traits from life-history information into taxonomic analyses, preserving taxonomic information while also considering known ecological niche-based roles in the biological process to improve long-term explanations of fish assemblage change (i.e., statistical "partial pooling" with random effects).
The specific objective of the current study was to evaluate long-term trends of taxa while incorporating their functional role in the southern Lake Michigan ecosystem. To this end, the long-term changes in the near-shore fish community of southern Lake Michigan were described using a multi-species state-space model of population growth rates that integrates functional feeding groups. As outlined above, the approach assumes partial exchangeability across species within each group by incorporating a random effect at the biological-process level (i.e., statistical "partial pooling" with random effects). We hypothesized that describing the biological process based on partial pooling of information across functional groups might identify shifts in the fish assemblage that coincide with some of the already documented major changes in the food web (e.g., zooplankton abundance).
Overview of the experimental program
The experimental program in southern Lake Michigan integrated a fisheries survey conducted between 1984 and 2016, a zooplankton survey conducted between 1997 and 2015, and statistical modeling (Fig. 1). The objective of this study was to evaluate long-term trends of fish taxa in southern Lake Michigan while incorporating their functional role in the ecosystem. A map of the four fish sampling sites and the one zooplankton sampling site is depicted in Fig. 2. Fish sampling was conducted using a semi-balloon bottom trawl. Sites were not consistent across the study period, as additional sites were added to expand the long-term monitoring program. See "Study area and data collection" for more detail on sites. All captured fish were identified to species level and counted during fisheries data processing. Fish species were grouped into functional feeding guilds (Trautman, 1981; Poff & Allan, 1995; Hondorp, Pothoven & Brandt, 2005; Truemper et al., 2006; Happel et al., 2015). Zooplankton data were acquired from a U.S. EPA database. Zooplankton were sampled at a fixed site using vertical tows of a 153-µm mesh net. Long-term trends in the fish assemblage were described using a multi-species exponential population growth model in a state-space framework (Kéry & Schaub, 2012; Doll et al., 2020). This modeling approach separates the biological process model from the observation process model to partition variation between the two sources. The biological process model was parameterized to incorporate shared information across fish species within a functional feeding group by including a random-effects term (see Eqs. (2) and (3) below). The biological process model is further parameterized to explore the relationship between the population growth rate and zooplankton abundance. Parameters of the model were estimated using Bayesian inference. Results and conclusions were drawn from the full joint posterior probability distribution of the model parameters.
Study area and data collection
Southern Lake Michigan is generally flat, sandy and less than 20 m deep (Janssen, Berg & Lozano, 2005). Fish were sampled at up to four sites between 1984 and 2016 (Fig. 2) using a semi-balloon bottom trawl with a 4.9 m headrope, a 5.8 m footrope, and a 38 mm stretch-mesh body with a 32 mm stretch-mesh cod end lined with a 13 mm stretch-mesh liner. Two sites (M and K) were sampled between 1984 and 1988, three sites (M, K, and G) were
Zooplankton data
Zooplankton data were extracted from the Great Lakes National Program Office (GLNPO) database on November 12, 2019 (United States Environmental Protection Agency Great Lakes National Program Office, 2019). To reduce seasonal and spatial variability, only summer samples from site MI 11 (42.38333, −87.00000, depth = 128 m; Fig. 2) were used. Zooplankton were sampled with a 153-µm mesh net using a vertical tow starting at a depth of 100 m. Samples were narcotized with soda water and preserved with sucrose formalin. Each sample was split using a Folsom plankton splitter until approximately 200-400 animals were present in a split. Two splits were counted and identified with a stereoscope. Taxonomy followed Balcer, Korda & Dodson (1984), Hudson et al. (1998), Brooks (1957), Evans (1985), and Rivier (1998). Dry weights were estimated using a length-weight regression following United States Environmental Protection Agency (2013). Annual mean dry weights for calanoid copepod adults, cyclopoid copepod adults, Daphnia, non-daphnid herbivorous cladocerans, and predatory cladocerans were summed for an annual total biomass estimate. Zooplankton data were standardized to z-scores to improve model-fitting efficiency. See United States Environmental Protection Agency Great Lakes National Program Office (2019) for a complete description of the data and protocols used by the Environmental Protection Agency.
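The two pre-processing steps named here, summing annual group dry weights and converting the totals to z-scores, reduce to a few lines of R; the sketch below uses a hypothetical data frame `zoo` in place of the real GLNPO extract.

```r
# Hypothetical annual group biomasses standing in for the GLNPO data
set.seed(1)
zoo <- data.frame(year      = 1997:2015,
                  calanoid  = runif(19, 5e3, 2e4),
                  cyclopoid = runif(19, 1e3, 8e3),
                  daphnia   = runif(19, 1e3, 2e4),
                  other_cl  = runif(19, 5e2, 5e3))

zoo$total <- rowSums(zoo[, -1])            # annual total biomass across groups
zoo$z     <- as.numeric(scale(zoo$total))  # z-scores used as the covariate X_t
```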
Functional groups
Species were grouped into functional feeding guilds following Poff & Allan (1995), using basic life-history information outlined in Trautman (1981) and several Lake Michigan-specific dietary studies (Hondorp, Pothoven & Brandt, 2005; Truemper et al., 2006; Happel et al., 2015). Functional feeding group classifications for the observed species, with descriptive statistics, are shown in Table 1. Notably, the lack of diversity in the Lake Michigan assemblage permitted assigning feeding guilds consistent with the predominant age structures. Overall, the emerging dataset included planktivore, general invertivore, and benthic invertivore guilds (Table 1).
Biological process model
Temporal trends of all species were evaluated using a state-space modeling approach (Kéry & Schaub, 2012; Doll et al., 2020). State-space models have been widely used in fisheries and ecology to better understand long-term changes in population structure. Specific applications include stock assessments (Nielsen & Berg, 2014; Aeberhard, Flemming & Nielsen, 2018), animal movement and migration (Jonsen, Myers & James, 2006; Patterson et al., 2008), identifying metapopulation structure (Ward et al., 2010), and understanding foraging tactics of gray seals Halichoerus grypus (Breed et al., 2009). The base model is similar to the model used in Doll et al. (2020).
[Table 1. Functional feeding group classifications for observed species with average (standard deviation, sd), first quartile, third quartile, and maximum annual catch per unit effort.]
The species-specific biological process model (i.e., population dynamics model) was assumed to follow the exponential growth model:
$$N_{s,t+1} = N_{s,t}\, e^{r_{s,t}} \qquad (1)$$

where $N_{s,t+1}$ represents the population size of species s during time t+1, $N_{s,t}$ represents the population size at time t, and $r_{s,t}$ represents the population growth rate at time t, with t ranging from 1 to the total number of years in which data were collected. Equation (1) represents the true but unknown state of the population for species s (i.e., without observation error). The model specification assumes the population growth rate parameters of species within the same functional feeding group are similar: species within the same functional feeding group share the same mean population growth rate, $\bar{r}_{f,s,t}$, with a random effect for species within the functional group:

$$r_{s,t} \sim \mathrm{Normal}(\bar{r}_{f,s,t},\, \sigma_{r,f}) \qquad (2)$$

$$\bar{r}_{f,s,t} = \mu_f \;\;(1984\text{-}1996); \qquad \bar{r}_{f,s,t} = a_f + b_f X_t \;\;(1997\text{-}2015) \qquad (3)$$

where $\mu_f$ is the mean population growth rate for functional group f, $\sigma_{r,f}$ is the standard deviation of the population growth rate for functional feeding group f, $a_f$ is the intercept when describing the population growth rate as a function of zooplankton biomass for functional feeding group f, $b_f$ is the slope of the effect of zooplankton biomass on the population growth rate for functional feeding group f, $X_t$ is the observed zooplankton biomass in year t, and $\bar{r}_{f,s,t}$ is the species-specific mean population growth rate for each year t. Zooplankton data are not available for the full time period of the fisheries data; therefore, we modeled the biological process in two stages: years 1984-1996 with an overall mean population growth rate, and years 1997-2015 in which the annual population growth rate for each functional feeding group varied with observed zooplankton biomass (Eq. (3)). The initial population, $N_{s,1}$, is not defined by Eq. (1); therefore, we estimated it as a separate parameter using a non-informative prior probability distribution (Table 2).
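To make the two-stage structure of Eqs. (1)-(3) concrete, here is a minimal R simulation of the latent process for a single hypothetical species; all parameter values are invented for illustration and are not estimates from this study.

```r
# Simulate the latent process of Eqs. (1)-(3) for one hypothetical species
set.seed(42)
mu_f    <-  0.00  # mean growth rate, 1984-1996 (0 = stable population)
sigma_f <-  0.30  # process error: SD of yearly growth rates within the guild
a_f     <- -0.10  # intercept of the zooplankton effect, 1997-2015
b_f     <-  0.20  # slope of the zooplankton effect (per z-score unit)
X       <- as.numeric(scale(rnorm(19)))  # stand-in z-scored zooplankton, 1997-2015

r_bar <- c(rep(mu_f, 13), a_f + b_f * X)        # Eq. (3): two-stage mean
r     <- rnorm(32, mean = r_bar, sd = sigma_f)  # Eq. (2): yearly growth rates
N     <- numeric(33); N[1] <- 100               # initial population N_{s,1}
for (t in 1:32) N[t + 1] <- N[t] * exp(r[t])    # Eq. (1): exponential growth

plot(1984:2016, N, type = "b", xlab = "Year", ylab = "Latent abundance")
```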
Observation process model
A second set of equations is used to link the population dynamics model to the observation dataset:
$$y_{i,s} \sim \mathrm{Poisson}(\mu_{i,s})$$

where $y_{i,s}$ is the observed count of species s for survey observation i and $\mu_{i,s}$ is the mean parameter of the survey for observation i. The mean and variance of the Poisson distribution are equal and defined as $E(y_{i,s}) = \mathrm{Var}(y_{i,s}) = \mu_{i,s}$. The log link was used to model dependencies in the mean as a function of covariates. Further, because count data can be overdispersed, we incorporated a normal random effect in the linear predictor following Kéry & Schaub (2012). The standard Poisson and negative binomial distributions were also considered to account for overdispersion, but they were deemed inappropriate based on model diagnostics and thus the details are not included here. The model is described as a Poisson-lognormal model with a random effect for site and an overdispersion parameter:
$$\log(\mu_{i,s}) = \log(N_{s,t[i]}) + \gamma_{s,\mathrm{site}[i]} + \varepsilon_{i,s}$$

where $\gamma_{s,\mathrm{site}}$ is a random-effect term for species s at the sampling site and $\varepsilon_{i,s}$ is an observation-level random effect to capture overdispersion. Bayesian inference was used to fit the model (Doll & Jacquemin, 2018) in the programming languages R 3.6.1 (R Core Team, 2019), Stan (Stan Development Team, 2018a), and rstan 2.17.3 (Stan Development Team, 2018b). All parameters were given non-informative priors (Table 2). Three concurrent Markov chain Monte Carlo (MCMC) chains were run for a total of 15,000 iterations (split between the three chains); the first 2,000 steps of each chain were discarded, for a total of 9,000 saved steps. The split-R̂ statistic and visual inspection of traceplots were used to assess convergence; the chains have converged when split-R̂ is close to one, with values less than 1.1 suggesting the MCMC chains have converged. Parameters are summarized by the median and the posterior 95% credible interval (CRI).
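Although the authors' Stan code is not reproduced here, the overall structure, a latent log-abundance process with drift observed through Poisson-lognormal counts, can be sketched in rstan; the following hypothetical example is simplified to a single species with no site effect or functional-group level, with half-Cauchy priors on standard deviations in the spirit of Table 2.

```r
# Condensed, hypothetical rstan sketch of a Poisson-lognormal state-space model
library(rstan)

model_code <- "
data {
  int<lower=1> nyear;                    // number of years
  int<lower=1> nobs;                     // number of trawl observations
  int<lower=1, upper=nyear> year[nobs];  // year index of each observation
  int<lower=0> y[nobs];                  // observed counts
}
parameters {
  vector[nyear] logN;        // latent log abundance (biological process)
  real r_bar;                // mean population growth rate
  real<lower=0> sigma_r;     // process error
  real<lower=0> sigma_eps;   // overdispersion SD
  vector[nobs] eps;          // observation-level random effects
}
model {
  r_bar ~ normal(0, 1);
  sigma_r ~ cauchy(0, 2);    // half-Cauchy via the lower bound
  sigma_eps ~ cauchy(0, 2);
  logN[1] ~ normal(0, 10);   // vague prior on the initial population
  for (t in 2:nyear)
    logN[t] ~ normal(logN[t - 1] + r_bar, sigma_r);  // process model, Eqs. (1)-(2)
  eps ~ normal(0, sigma_eps);
  y ~ poisson_log(logN[year] + eps);                  // observation model
}
"

# Toy data: 10 years, 4 tows per year, weakly increasing abundance
set.seed(1)
sim <- list(nyear = 10, nobs = 40, year = rep(1:10, each = 4),
            y = rpois(40, exp(3 + 0.05 * rep(1:10, each = 4))))

fit <- stan(model_code = model_code, data = sim,
            chains = 3, iter = 5000, warmup = 2000, seed = 1)
print(fit, pars = c("r_bar", "sigma_r", "sigma_eps"))  # Rhat should be < 1.1
```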
RESULTS
Annual trends in CPUE of each species are depicted in Fig. 3. Temporal trends in CPUE of individual species were highly variable (Fig. 3). Johnny Darter, Yellow Perch, Spottail Shiner Notropis hudsonius, Troutperch Percopsis omiscomaycus, Rainbow Smelt Osmerus mordax, and Bloater Coregonus hoyi exhibited peak abundance in the 1980s and 1990s, whereas Alewife exhibited peak abundance in the 2000s, and Round Goby, Longnose Sucker Catostomus catostomus, and White Sucker Catostomus commersonii abundance peaked in the 2010s. Spottail Shiners were the most abundant species overall, followed by Yellow Perch, Alewife, Round Goby, Bloater, and Rainbow Smelt (Table 1). The remaining four species averaged less than 10 fish/h.
[Table 2 note: the positive half of a Cauchy(0, 2) distribution, i.e., a half-Cauchy prior probability distribution, was used for each standard deviation parameter (including the standard deviation for the site random effect of the trawl survey), following the recommendations of Gelman (2006); the normal distribution was selected as a default prior probability distribution for the other parameters to have minimal influence on the posterior probability distribution.]
Trends in zooplankton abundance are depicted in Fig. 4. Zooplankton abundance followed a decreasing trend between 1997 and 2015 (Fig. 4), with a peak abundance within the time series of 59,522 mg DW/m³ in 1999 and a minimum of 6,720 mg DW/m³ in 2008. Overall, the average annual abundance of zooplankton was 29,006 mg DW/m³ (SD = 14,559). Species-specific population growth rates over time are shown in Fig. 5, categorized by functional feeding group. Yearly population growth rates were variable across species and functional groups (Fig. 5). Variation in the yearly population growth rate for each functional feeding group is summarized as process error in Fig. 6. The general invertivores were more stable, with annual population growth rates fluctuating closely around 0, with the exception of Troutperch (TRP). In contrast, benthic invertivores and planktivores exhibited much greater variability. This variability is reflected in the standard deviation of the functional feeding group population growth rate (i.e., process error; Fig. 6), where the standard deviations of planktivores and benthic invertivores were greater than that of general invertivores (Fig. 6). The functional feeding group mean population growth rates during the period before zooplankton data were available and the period with zooplankton data are depicted in Fig. 7. The overall mean population growth rates between 1984 and 1996 suggest a stable population for each functional feeding group (Fig. 7A). In contrast, the expected population growth rates at mean zooplankton biomass for each functional feeding group were generally negative (Fig. 7B). Figure 8 depicts the relationship between zooplankton biomass and the population growth rate for each functional feeding group. The population growth rate of general invertivores was negatively related to zooplankton biomass (median slope = −0.19; 95% CRI [−0.47 to 0.09]), whereas the population growth rate of planktivores was positively related to zooplankton biomass (median slope = 0.22; 95% CRI [−0.19 to 0.62]); no strong relationship was observed between the population growth rate of benthic invertivores and zooplankton biomass (median slope = −0.07; 95% CRI [−0.43 to 0.29]). This suggests general invertivores tended to have a stable population (mean population growth rate approximately 0) during years with the lowest zooplankton biomass and a declining population at high zooplankton biomass (Fig. 8).
In contrast, the population growth rate of planktivores changed from a declining annual rate at the lowest zooplankton biomass to an increasing annual rate at the highest zooplankton biomass (Fig. 8). Finally, the benthic invertivores exhibited a stable population at all levels of zooplankton biomass (Fig. 8).
DISCUSSION
The analysis used here demonstrates the ease with which ecological relationships, such as functional-level traits, can be combined with long-term taxonomic datasets to draw inference at multiple levels without losing important taxonomic or functional resolution. Results of this research also contribute to a better understanding of long-term taxonomic and functional changes in the southern Lake Michigan ecosystem. Community surveys documenting long-term changes in Lake Michigan have identified a reduction in Alewives, increases in Rainbow Smelt and Yellow Perch, and no long-term changes in Spottail Shiner and Troutperch (Jude & Tesar, 1985). A more recent study documented similar species-level changes (Bunnell, Madenjian & Claramunt, 2006). For example, Bunnell, Madenjian & Claramunt (2006) described the fish community in the 1970s as being dominated by Alewife, with a shift in the 1980s to low Alewife abundance and increased native fish abundance (e.g., Burbot Lota lota, Deepwater Sculpin Myoxocephalus thompsonii, and Yellow Perch). The analysis presented here is largely consistent with these observations but incorporated an ecological concept that links many species at the functional level (see Fig. 3 for trends in the raw data). Here we documented taxonomic and functional changes of the near-shore fish community over a 30-year period (Figs. 3 and 7). Specifically, planktivore and benthic invertivore populations were more variable when compared with general invertivores (see Fig. 6). This is likely a function of exotic species exhibiting contrasting trends, as general invertivores only included native species whereas planktivores and benthic invertivores included Alewife and Round Goby, respectively. The comparison of native vs non-native species has important implications for the refilling of functional niche spaces, as native species are often outcompeted. The analysis was extended by incorporating one potential mechanism behind these changes through the addition of zooplankton biomass as an explanatory variable. Specific to zooplankton, we documented a positive relationship with planktivores, no relationship with benthic invertivores, and a negative relationship with general invertivores (see Fig. 8). The interesting trend with general invertivores likely reflects that zooplankton was not the primary forage for all general invertivores. Overall, however, the taxonomic changes translated into predictable patterns of functional feeding groups that were partially related to changes in zooplankton density. It should be noted that the fish and zooplankton data were collected at spatially distinct locations, and the distance between the two could confound our results. However, we believe the directional relationship is supported by the data and knowledge of the system. For example, a spatial comparison of zooplankton in nearshore vs offshore Lake Michigan found no significant difference in biomass, and all sites across the depth gradient exhibited a similar decline since the 1970s (Pothoven & Fahnenstiel, 2015).
Coinciding with these changes in the population growth rate of functional feeding groups in southern Lake Michigan are a series of major perturbations in phytoplankton, zooplankton, invasive species, and nutrient levels. For example, observed annual trends in benthic invertivore species include the invasive Round Goby, which was first observed in 1993 (Charlebois et al., 1997). The invasion of Round Goby has been linked to the reduction of other benthic invertivores such as Johnny Darters (Lauer, Allen & McComish, 2004), likely leading to the increased uncertainty in benthic invertivores. Additionally, chlorophyll-a and total phosphorus in Lake Michigan have decreased (Pothoven & Fahnenstiel, 2013) and the water has become clearer since 1998 (Yousef et al., 2017). The increased water clarity coincides with a decrease in zooplankton; thus, it would be expected that feeding groups that rely on zooplankton would follow the same trend. However, the population growth rate of general invertivores in this study is negatively related to zooplankton abundance (see Fig. 8). This would be counterintuitive given strict dietary requirements; however, general invertivores do not rely solely on zooplankton as their primary diet source. Further, one species of general invertivore, Yellow Perch, exhibits euryphagous characteristics and shifts its diet throughout its life. Specifically, Yellow Perch transition from a diet primarily consisting of zooplankton to a diet primarily of amphipods, isopods, chironomids, and fish when they reach approximately 120 mm (Turschak et al., 2019). Further, Yellow Perch are not fully recruited to the sampling gear used in this study until they are age-2 (~125 mm; Forsythe, Doll & Lauer, 2012). Thus, there is likely a lurking variable connecting observed trends in Yellow Perch (and general invertivores) with trends in zooplankton abundance. A potential explanation is the negative relationship between Alewife abundance and Yellow Perch recruitment (Forsythe, Doll & Lauer, 2012). Planktivorous species such as Alewife, Rainbow Smelt, and Bloater were expected to be positively related to zooplankton given their dietary habits, and indeed our observations support this. Thus, the increase in planktivorous fish as a function of zooplankton abundance could explain the negative relationship between general invertivores and zooplankton abundance. Although our results were consistent with many other observations in Lake Michigan, caution must be taken when trying to extrapolate our results to the greater Lake Michigan ecosystem dynamics. There are many complex interactions that were not included in our model. For example, many fish species in our dataset are not influenced by total zooplankton abundance but rather by the abundance of specific sizes of zooplankton (Bremigan, Dettmers & Mahan, 2003). Larval Yellow Perch, for example, feed predominantly on small copepods, and small copepods have declined in southern Lake Michigan (Bremigan, Dettmers & Mahan, 2003). It is important to note that there are additional parameters, such as benthic macroinvertebrates, primary production, and nutrient loading, that have also been hypothesized to be related to changes in fish communities (Colby et al., 1972; Bunnell, Madenjian & Claramunt, 2006; Jeppesen et al., 2007; Johannsson et al., 2000) and could also be included in a model such as that described above. These parameters were not included due to a lack of consistent long-term data describing primary production, nutrient loading, and other macroinvertebrates. Nevertheless, our results remain useful by providing insight at multiple levels of taxonomic and functional grouping. Additional insight could be gained by increasing the complexity of interactions across species within the same functional group and adding further biotic and abiotic covariates that are hypothesized to be important.
[Figure 7. Posterior predicted population growth rate for each functional feeding group prior to zooplankton data (1984-1996; A) and during (1997-2015; B). Estimates with zooplankton represent the expected population growth rate at mean zooplankton biomass. Solid circles represent medians of the posterior distribution, solid vertical bars represent bounds of the 95% Bayesian credible interval, violin plots represent a mirrored density plot of the full posterior probability distribution, and solid horizontal bars are reference bars for 0. Full-size DOI: 10.7717/peerj.11032/fig-7]
CONCLUSIONS
The current study evaluated long-term trends of taxa while incorporating their functional role in the southern Lake Michigan ecosystem. A multi-species state-space model of population growth rates integrating functional feeding groups was used to describe long-term changes in the near-shore fish community of southern Lake Michigan and to demonstrate the usefulness of building ecological realism into models describing species trends. Overall, the results of the presently reported study addressed an important topic in the assemblage literature related to combining or separating datasets along a gradient of pooling approaches. The study demonstrated how a partial pooling approach (i.e., building a model that is structured in a hierarchical framework to share information across groups) can be taken to incorporate functional traits into a taxonomic dataset. To supplement existing information, future studies of long-term changes using null models should be investigated. | 2021-04-13T05:19:26.449Z | 2021-03-29T00:00:00.000 | {
"year": 2021,
"sha1": "ef2bd71f832fc9f868fb69798cd194c795ac1b53",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.11032",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef2bd71f832fc9f868fb69798cd194c795ac1b53",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254917830 | pes2o/s2orc | v3-fos-license | Preliminary Analysis of Voluntary Information on Organic Milk Labels in Four European Union Countries
The concern for the environment among European consumers is growing, and in the future the need for sustainable shopping is expected to increase. Through transparent on-packaging communication with consumers, organic producers have the opportunity to show the attributes of the organic production system and build a strong market position. The aim of the study was to analyse voluntary packaging information on organic milk from four European markets (Germany, the Netherlands, Italy and Poland) in the context of organic food quality. More specifically, the textual content of 106 organic milk packages was analysed and the voluntary information on each package was categorized according to process- and product-related organic milk attributes. The assortment and content of voluntary packaging information varied across the four countries. The largest number of products was found on the German market (37) and the smallest on the Polish market (14). Dutch milk had the greatest amount of voluntary information on animal welfare, product locality, environmental protection, quality confirmation, naturalness and nutritional value. German milk had the most information on enjoyment and conditions of processing, while the Italian milk had the most on the social perspective. The products available on the Polish market had the least voluntary information. Pasteurized organic milk had noticeably more information about organic quality attributes than microfiltered and UHT milk.
Introduction
Legal Framework and Principles of Organic Farming
European consumers are paying progressively more attention to sustainable, especially organic, food consumption [1]. Achieving sustainable consumption requires that consumers consider not only their own needs (e.g., taste, price, convenience, etc.), but also a product's social responsibility attributes (e.g., animal welfare, environment, fair trade) during purchase [2,3]. The pro-environmental attitude among consumers directly influences the purchase (and consumption) of organic products and the attention paid to such features as healthiness, trustworthiness, quality, control system, authenticity and safety [4]. Organic production is in line with these trends in consumer behavior because, according to Council Regulation 848/2018, organic production is defined as "an overall system of farm management and food production that combines best environmental and climate action practices, a high level of biodiversity, the preservation of natural resources and the application of high animal welfare standards and high production standards in line with the demand of a growing number of consumers for products produced using natural substances and processes. Organic production thus plays a dual societal role, where, on the one hand, it provides for a specific market responding to consumer demand for organic products and, on the other hand, it delivers publicly available goods that contribute to the protection of the environment and animal welfare, as well as to rural development." [5]. The regulation indicates the high quality of organic products, which is the result of standards for health, the environment and animal welfare in the production of organic products, but also of ensuring that producers receive a fair return for complying with the organic production rules. Complementary to this, the International Federation of Organic Agriculture Movements (IFOAM) specifies the principles of organic production:
- Principle of health: organic agriculture (OA) should sustain and enhance the health of soil, plant, animal, human and planet as one and indivisible;
- Principle of ecology: OA should be based on living ecological systems and cycles, work with them, emulate them and help sustain them;
- Principle of fairness: OA should build on relationships that ensure fairness with regard to the common environment and life opportunities;
- Principle of care: OA should be managed in a precautionary and responsible manner to protect the health and well-being of current and future generations and the environment [6].
Organic Sector and Corporate Social Responsibility (CSR)
Organic food is therefore a very unique food sector, which in its fundamental principles fits into a Corporate Social Responsibility (CSR) strategy, increasingly common among manufacturers in various sectors (e.g., food, fashion, beauty products). In their voluntary activities, companies take into account social interests, environmental aspects, and relations with various stakeholder groups and the company's environment. By doing so, they contribute to creating the conditions for sustainable social and economic development and, just as importantly, to increasing the competitiveness of the company [7]. Thus, the requirement for organic producers is not only to comply with organic production principles, but also to properly and successfully communicate this unique quality to present-day consumers.
Communication of the Organic Sector with the Consumers
Communicating the specific characteristics of organic food products is not easy, because consumers have different levels of knowledge and beliefs and can be confused by the multitude of different labels on food products [8]. For instance, the understanding and recognition of the European organic product logo varies from country to country [9]. In Poland, 33% of consumers can understand and recognize the organic logo [10]. For those who buy organic products more regularly, the rate is 45%, but 23% of respondents mistakenly consider products with the crossed-grain mark used on gluten-free products to be organic food.
The organic food market in the European Union is growing rapidly and was valued at 37.4 billion euros in 2018 [11]. However, within the European Union, there are significant differences between individual countries. Most organic food retailers are located in Germany, France and Italy. Central & Eastern European (CEE) countries, such as Poland, Hungary, and Romania, have traditionally been important growers and exporters of organic crops. However, internal markets are slowly developing in these countries as well.
An important communication tool between food producers and consumers is product labelling on the packaging. According to Regulation (EU) No 1169/2011 [12], "'labelling' means any words, particulars, trademarks, brand name, pictorial matter or symbol relating to a food and placed on packaging." The regulation also lists legally mandatory information that is required to be provided to the final consumer under Union provisions (Regulation (EU) No 1169/2011 [12] and Council Regulation 848/2018 [5]). The organic product label must include elements such as: the EU logo, the identification number of the certification body to which the producer is subject, and the indication of the place of production of the raw materials from which the final product is made (EU Agriculture, non-EU Agriculture, EU/non-EU Agriculture).
Moreover, it is possible for producers to provide additional, voluntary content, e.g., information on the production process or the environmental impact of production. Food information provided on a voluntary basis should not misinform or confuse the consumer. A potential advantage of voluntary information is that appropriate communication can provide more transparency on organic food quality. This is likely to increase consumer confidence in organically produced food, which is particularly important in developing countries [13].
Consumers and Eco-Labeling
Consumer studies show a positive relationship between eco-labelling and environmentally-friendly purchase intentions [14-16]. Consumers who received information about the ethical features of organic farming, such as animal welfare and environmental sustainability, showed a greater willingness to pay for organic milk than for conventional milk [17-19]. The concern for the environment among European consumers is growing, and in the future the need for sustainable products could increase. Therefore, informing consumers about such aspects of production may be beneficial for organic food producers. According to Żakowska-Biemans (2015), organic food is most often bought by consumers representing the "healthy" segment. This group of consumers is interested in labels and looks for labels about food production methods and their environmental impact. At the same time, these consumers have higher incomes and can more easily afford organic food, which is more expensive [20].
Given the legal obligations and consumer expectations, organic manufacturers are faced with a challenge. There are no instructions or guides on what is worth putting on the packaging to effectively show the high quality of organic products. It is difficult for producers to communicate the quality of their products and the rationale for a premium price on markets that offer other, competitively priced products [21]. It should be kept in mind that organic products also compete for consumers' attention with other product claims, such as pesticide-free, local/regional, vegan, climate-protected, non-GM, fairtrade, etc.
Organic Food Quality Criteria
Even regulators, organic experts and researchers have difficulties defining organic food quality criteria. The IFOAM Standards (2008) also describe principles related to ethical values: responsibility, integrity, care, health, sustainability and naturalness [6]. However, there is no clear definition or explanation of what exactly should be understood by these notions. Council Regulation 848/2018 provides some quality criteria that are not clearly defined, e.g., "true nature", "processing with care" or "natural production techniques". Council Regulation 848/2018 states that processing methods should "guarantee that the organic integrity and vital qualities of the product are maintained through all stages of the production chain". In Chapter 3, point (74), Definitions, it explains what integrity is: 'integrity of organic or in-conversion products' means the fact that the product does not exhibit non-compliance which: (a) in any stage of production, preparation and distribution affects the organic or in-conversion characteristics of the product; or (b) is repetitive or intentional [5].
Organic food quality problems are discussed in several scientific papers on the topic [22-25]. Kahl et al. (2012) defined organic food quality through two aspects: process-related and product-related. These aspects are further defined by specific criteria. Process-related criteria can be environmental (e.g., criteria indicating the impact of the production process on soil, plants, animals and the atmosphere) and societal (i.e., considering social, cultural and economic perspectives). Product-related criteria are safety, nutrition, enjoyment/pleasure, vital qualities, organic integrity and true nature. Kahl et al. (2012) underline that there is a need to work on these definitions [22]. Beck et al. (2012) proposed similar criteria of organic food quality, i.e., sensory properties, nutrition/health, specific organic properties and authenticity/traceability. The authors also listed attributes for the examination of organic food quality, i.e., vital quality, naturalness, organic integrity, careful production, true nature, integrity, animal welfare, holistic production and fairness [23].
In the absence of clear definitions of the above terms describing the quality of organic products, processors are left to their own inventiveness in informing consumers about the qualities of their products. The main method for communicating this information is through suitably designed product labels. This can enable organic products to have a stronger market position and be more competitive on the market [26].
The Aim of the Study
As mentioned above, there is a lack of any regulation or guidance on voluntary information on the packaging of organic products. There is also a lack of scientific research in this area. We therefore considered it appropriate to carry out the research presented here. The aim of our study was to identify themes in the voluntary labelling of organic milk in some EU countries, among others trying to compare the situation of well-developed organic markets and less developed ones. Based on Willer et al. [11], we selected 4 EU countries: Germany as a highly developed organic food market, the Netherlands and Italy as medium developed markets, and Poland as a less developed market. The delimitation of our research lies in the fact that we have assumed in advance a certain limited scope of analysis. We are aware that the European Union currently consists of 28 countries and that 4 countries are just a fragment of the European Union with many cultural differences, also with regard to the consumption of organic products. Our countries represent central Europe (Germany, Poland), northern Europe (the Netherlands) and southern Europe (Italy). The most western part of Europe (Spain, Portugal) is not represented. We discuss the delimitations and limitations of our study more extensively in Section 5, Limitations and Recommendations, at the end of the manuscript.
Research Questions and Hypotheses
The main research question posed in this paper is: how do producers of organic milk communicate with consumers? Does the way in which they communicate depend on the level of development of the organic market in their country? In addition, we wanted to find out whether it is possible to differentiate the information on the packaging concerning the widely understood production process of the milk from the information concerning the product itself. Finally, we wanted to find out which processing methods and packaging are used by organic milk producers in selected EU countries. In order to address these research questions, five hypotheses were formulated.
The research hypotheses were set at the beginning of the study. On the basis of the available scientific data, we assumed that (1) organic milk available on the European food market includes voluntary labelling information on organic quality attributes; (2) voluntary packaging information can be categorized by process-related and product-related criteria of organic food quality evaluation; (3) the assortment of organic milk and the content of voluntary packaging information vary depending on the market (German, Dutch, Polish and Italian); (4) the content of voluntary packaging information varies depending on the milk processing method (UHT, pasteurization, microfiltration). Finally, we assumed that (5) the predominant milk processing system in each country depends on the degree of development of the market for organic products.
Sample Collection
In 2019, an inventory of organic cow's milk in four European countries with different levels of organic retail sales and organic per capita consumption was conducted, i.e., Germany as a highly developed organic food market, the Netherlands and Italy as medium developed markets, and Poland as a less developed market [27].
Products came from specialized organic stores and conventional supermarkets. Milks for special dietary needs (e.g., lactose-free) and flavoured milks were excluded.
In Germany (Münster), one organic store and seven conventional supermarkets were included in the study. In the Netherlands (Utrecht and Wageningen), one organic store and two conventional supermarkets were included, in Italy (Rome) one organic store and seven conventional supermarkets, and in Poland (Warsaw) three organic stores and five conventional supermarkets. It should be stressed that there is no established method for carrying out the research we have undertaken. The authors of the study chose a set of shops in which to analyse the packaging of organic milk.
Content Analysis of Voluntary Information on Organic Milk Products
Milk packages were photographed from all sides, and the voluntary textual information was extracted from the packaging and noted in an Excel file. The extracted text was grouped into categories according to product-related and process-related aspects (see Section 2.3, "Categorization methodology"). Furthermore, the categorized voluntary information was further analysed, and for each category, multiple criteria and sub-criteria were defined (see Supplementary Materials for a detailed description of categories). Finally, the messages in each sub-category were counted and reported in Section 3.
The names of the criteria and sub-criteria were selected by the authors based on scientific publications on organic food quality assessment [22][23][24][25]. The authors repeatedly checked and discussed the assignment of individual voluntary textual information to the relevant criteria and sub-criteria categories.
If the content of a piece of packaging voluntary information belonged to multiple criteria and sub-criteria categories, the authors interpreted it as different messages and assigned it to each category it belonged to. This explains why some criteria have a higher number of packaging messages than the number of milk products analysed. A minimal sketch of this counting logic is given below.
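To make the multi-assignment rule concrete, the following Python sketch tallies messages per criterion. The category names and keyword lists are hypothetical stand-ins written for this text; the actual categorization in the study was done manually against the criteria of Section 2.3.

```python
from collections import Counter

# Hypothetical keyword lists standing in for the manually applied criteria;
# these are illustrative only, not the coding scheme used in the study.
CRITERIA_KEYWORDS = {
    "animal welfare": ["cow", "graze", "husbandry", "feed"],
    "environmental protection": ["environment", "climate", "soil"],
    "naturalness": ["natural", "nature"],
    "nutritional value": ["calcium", "protein", "vitamin"],
}

def categorize(message):
    """Return every criterion a packaging message belongs to.

    A message matching several criteria is counted once in each of them,
    mirroring the multi-assignment rule described above.
    """
    text = message.lower()
    return [c for c, kws in CRITERIA_KEYWORDS.items()
            if any(kw in text for kw in kws)]

messages = [
    "Our cows graze on lush green meadows and eat fresh grass.",
    "Milk is a source of calcium for strong bones and teeth.",
]
counts = Counter(c for m in messages for c in categorize(m))
print(counts)  # per-criterion message totals, as tallied in Tables 2 and 3
```

Because one message can fall into several categories, the per-criterion totals can exceed the number of products analysed, which is exactly the phenomenon noted above.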
Categorization Method
The categorisation method was elaborated by the authors of this paper and is a novel methodical approach which has not been used before. Of course, other authors have also previously dealt with the quality criteria of organic food important to consumers. Here we should mention papers such as Chryzochou (2010) [28], Żakowska-Biemans (2011) [29] and Song et al. (2016) [30]. In the context of animal welfare, it is worth mentioning the work of Borkfelt et al. (2015) [31] and Scozzafava et al. (2020) [32].
The voluntary information on organic milk packaging has been classified as process-related and product-related. Among the process-related information, groups of information have been selected and named "criteria". The authors have identified the following criteria:
• animal (cows) welfare: information concerning cow feeding and breeding conditions;
• product locality: information related to the place of origin of production and support for local producers;
• social perspective: information related to honestly rewarding producers and supporting the local community;
• environmental protection: information related to the lack of negative or the existence of positive impact of the production process on the environment.
The information on product-related aspects was divided into 5 criteria:
• quality confirmation: information on high product quality ensured by labelling, certification, selection and control;
• enjoyment: information describing sensory attributes and the impact of organic milk consumption on well-being;
• naturalness: in this category we focused on the context in which manufacturers emphasize the "naturalness" of their products and searched for all information containing the key word natural. There is no official definition of organic naturalness; here we tried to find out how producers emphasize this attribute of the organic production system;
• nutritional value: messages about the processing method's impact on nutrition, and information about positive/negative nutrients (macro elements, vitamins, minerals);
• conditions of processing: voluntary information on specific processing methods: stages, conditions of thermal processes, influence on the final product and its shelf life.
The Assortment of Analysed Organic Milk Products in Different Countries
Table 1 shows that the highest number of milk products was found in Germany (37), an intermediate number in the Netherlands (27) and Italy (28), and the lowest in Poland (14). Furthermore, Table 1 shows that in Germany microfiltration is the predominant processing method (40% of products), followed by UHT and pasteurization. In the Netherlands, pasteurization is predominant (71%), followed by UHT, while none of the products were microfiltered. In Italy, on the other hand, microfiltration and UHT represent 46% each, with pasteurisation accounting for a small share. Finally, in Poland pasteurisation represents 57%, UHT 29% and microfiltration 14% of products. Table 1 also shows that in three countries (Germany, the Netherlands and Italy), multilayer packaging was the predominant type of packaging for organic milk, while Polish producers mostly used plastic bottles.
The Content of Voluntary Packaging Information on Organic Milks
Tables 2 and 3 show the voluntary packaging information belonging to the process-related and product-related criteria categories. Table 2 shows that the largest amount of process-related voluntary information belonged to the criterion animal welfare (218 messages), with subcategories such as non-GMO feed, welfare control and species-appropriate husbandry. For example, a milk from Germany had the following message: "Animal husbandry appropriate to the species and the natural feeding of the cows are the basis for the typical full-bodied taste of this milk", and a milk from Italy carried the message "Our alpine organic milk is the best organic milk thanks to the welfare of animals and the natural feed of the cows". The packaging of German milks also provided a description of meadows and green areas to which organically farmed cows had access, for example: "On our Arla organic farms the cows stand on lush green meadows whenever possible and eat fresh grass, clovers or herbs".
Voluntary information on environmental protection was the second most frequent process-related information on the milk packaging (176 messages), pertaining to information on nature protection and environmentally friendly packaging. For example: "Together we stand for sustainable organic quality, which places respect for animals and the long-term preservation of our soil at the centre of our work; in this way we protect the environment and preserve it as the basis of life for humans and animals" (Germany); "And the packaging? We are also making it more and more environmentally friendly. Pack by pack. In this way, we contribute together to less climate impact." (The Netherlands); "When selecting our organic products, we strive for the best conditions for people, animals and the environment" (The Netherlands).
Many messages were related to the product locality criterion (64). For example: "At least 60% of the feed must come from their [producers'] own company or from the region." (The Netherlands); "At Jumbo we like to know our farmers personally. For example, we get our milk from Landgoed Het Hengelman in Twente, where Jos and Dorthy Elderink keep more than 100 dairy cows surrounded by beautiful meadows" (The Netherlands); "We are a Polish dairy cooperative owned by farmers running farms in the area of the Green Lungs of Poland." (Poland).
Milks also had messages related to the social perspective criterion (43), for example, messages from German milks: "By purchasing this milk you make an important contribution to the future of the local agriculture and the Bioland farmer families"; "We Arla dairy farmers are the owners of Arla dairy. We stand for the fact that products made from our milk are manufactured with great care. By buying this product you support our commitment."; "Farmers receive a fair price for this Alnatura-stable Alpine milk so that they can manage their farms in the long term. By buying this milk you as a customer help to maintain and promote the local organic dairy industry." Table 3 shows that the product quality confirmation criterion was the most frequently observed product-related voluntary information on milk packages (167 messages). The most frequent sub-criterion was labels of organic farming associations, initiatives or companies.
The information on the conditions of milk processing was only slightly less frequent (159 messages). We found the most information of this type on German milks, where details of processing (exact time, temperature, consecutive processes) were most often given. The manufacturers also provided information about the impact of the processing method on the shelf life and on the preservation of nutrition and taste.
Information on gentle processing rarely appeared on the analysed milk packages. Only 18 products (all German) had this type of information. For example: "Thanks to our special heating process, with which the milk is gently processed, the full taste and valuable ingredients are retained for a particularly long time" (Germany); "Our delicious fresh low-fat milk gets its longer shelf life through our particularly gentle filtration process, which reduces the germ content of the milk" (Germany). Information about careful processing appeared even less frequently, i.e., three times (one German milk and two Italian milks). For example: "Valuable raw milk and careful processing are the basis for the controlled high quality of our Weihenstephan organic fresh milk." (Germany) and "The milk undergoes a careful microfiltration process that allows it to last longer, but which respects the taste and nutritional value of raw milk." (Italy). In addition to the above criteria, there were also messages related to enjoyment/pleasure (111), naturalness (100) and nutritional value (65). In the enjoyment/pleasure criterion, most messages were about the unique, delicious taste of organic milk and its freshness. For example: "Naturally organic. This is how fresh milk tastes best" (Germany); "Enjoy a piece of nature without genetic engineering with every sip of our good and tasty organic milk" (Germany); "Take it and enjoy the best that nature has to offer" (The Netherlands); "Campina Organic is made with care and passion and you can taste it" (The Netherlands); "Fresher than fresh" (The Netherlands); "Did you know that every cow gives its own milk? Not every bottle tastes exactly the same. In the summer we eat fresh grass and hay in the winter. You can taste it. Just like the spicy grassland where we graze and the humus-rich sandy soil of the farm" (The Netherlands); "The cows get the care they deserve and from a healthy environment comes tasty and nutritious milk!" (The Netherlands); "Cows are fed with fodder grown at more than 1000 m in the frame of untouched nature that provides milk with a unique flavour" (Italy).
In the nutritional values category, the most common information was about the content of vitamins and microelements and the presence of nutrients and health-improving properties. For example: "It [milk] is naturally a source of: calcium for strong bones and teeth, protein for contribution to growth and recovery of your muscles, vitamins B2 and B12, to help you get energy from your diet" (The Netherlands); "Milk is a source of calcium, an element that plays an important role in maintaining healthy bones and teeth. Two glasses of milk provide 38% of the daily calcium requirement" (Italy); "Milk is packed with healthy proteins, which are indispensable for our body. Protein gives us energy and helps us to grow, develops our brains and maintains the muscles." (The Netherlands).
Since no official definition of organic product naturalness exists, to identify messages related to the naturalness criterion we identified messages that contained words such as nature and natural. These words appeared most often in the context of care for nature and natural nutrient content. For example, messages like: "For tomorrow's nature" (The Netherlands); "The best milk with responsibility for animals, people and nature." (Germany); "From nature with love and conscience" (Italy); "Love for nature and the cows.
That is what drives the farming families of Campina Organic. For generations. The cows graze on green, flowery meadows, where nature can take its course." (The Netherlands); "As a farmer with heart and soul, I believe a lot in naturalness. That's why I feed my cows traditionally, and, of course, without genetic engineering." (Germany); "I became an organic farmer because nature is the best role model for me. That is why the cows on my farm live as naturally as possible." (Germany); "Jersey cows' milk naturally contains more fat and protein" (The Netherlands).
A comparison of the voluntary information on organic milk packaging between the countries analysed is shown in Figure 1. It compares the countries studied in terms of information on packaging. The horizontal axis shows the average amount of information given on the product packaging in each country. This reflects the intensity of consumer information on a given topic in each country. For example, the Netherlands is clearly at the forefront when it comes to the frequency of information on organic milk packaging. This applies to several categories: 'animal welfare', 'environmental protection', 'quality confirmation', 'naturalness', 'product locality' and 'nutritional value'. Germany, on the other hand, leads by far in the categories 'processing conditions' and 'eating pleasure'. Italy leads only in the category 'social perspective', while Poland does not lead in any category.
Figure 1 also synthesises the quality information on milk packaging. The leading categories are 'animal welfare', 'environmental protection', 'processing conditions' and 'quality confirmation'. The least popular categories were 'product locality', 'nutritional value' and 'social perspective'. An analysis of the sentences on packaging showed that such information appears on organic milk packaging in Germany, the Netherlands and Italy. This information can be divided into groups such as animal (cow) welfare, product locality, environmental protection and social perspective.
Figure 2 shows voluntary information on organic milk in relation to the processing method, i.e., pasteurisation, low pasteurisation with microfiltration and ultra-high temperature (UHT) processing. A total of 38 pasteurised, 28 microfiltered and 37 UHT milks were analysed.
From Figure 2 it follows that pasteurised milk had the most voluntary information. Compared to other processing methods, this milk had clearly more information about quality confirmation, animal welfare, environmental protection, naturalness, sensory qualities, product locality and nutritional value.
Microfiltered milk had the most information about processing conditions. Typically, the messages on the packaging explained the microfiltration process, listed the processing steps and described the impact on the product properties. For example: "The milk undergoes a careful microfiltration process that allows for a longer shelf life but preserves the taste and nutritional value of the raw milk" (Italy); "By using a special production process (microfiltration), the taste of the milk is preserved longer and the valuable ingredients are largely retained" (Germany). Compared to other processing methods, organic UHT milk had the most information of the social perspective type, while the least information was given on product locality, sensory qualities, nutritional value and processing conditions.
Verification of Hypothesis 1: 'Organic milk available on the European food market includes voluntary labelling information on organic quality attributes'
The results of the study confirmed hypothesis 1 in all countries: voluntary information was found on most of the analysed milk packages. Admittedly, the amount and variety of this information differ between countries, but such information is included on organic milk packaging everywhere.
Verification of Hypothesis 2: 'Voluntary packaging information could be categorized by process-related and product-related criteria of organic food quality evaluation'
Hypothesis 2 was also confirmed in the study. It was shown that voluntary information on milk packaging can be categorised according to both process- and product-related criteria for assessing organic food quality. Admittedly, the authors did not have an easy task when categorising process and product criteria. Indeed, this categorisation, according to Kahl et al. (2012), is very complicated due to the lack of precise guidelines for the quality assessment criteria of processed organic food [22]. More clarity in this regard would help both legislators and organic food producers and processors.
In an attempt to distinguish between the different types of information on packaging, the authors had to select key words and create specific criteria according to previous publications [22,23,25]. For process criteria, attributes such as animal (cow) welfare, environmental protection, product locality and social perspective were extracted. For product attributes, we extracted quality certification, sensory value, naturalness and processing method.
From various European surveys of organic milk consumers (Germany, UK, Italy, Austria, Switzerland), the most important of the ethical attributes tested were "animal welfare", "regional production" and "fair prices for farmers" [19][32][33][34][35]. Consumers showed an increased willingness to pay for organic foods with these additional ethical attributes. Consequently, the researchers suggest that organic processors should increasingly focus on additional ethical attributes in production and communication with consumers. German consumers perceive labels as good advice in the purchasing process, especially if they are looking for products without genetic engineering, fair trade, regional origin and products that guarantee animal welfare and organic production [36][37][38]. Furthermore, for most consumers, reinforcing animal welfare with other types of consumer values, such as functional or emotional value, can motivate them to purchase animal-friendly products [39].
Thus, it can be said that providing voluntary information about practices that fit in with CSR is beneficial to companies, and the survey results show that organic producers are doing so. Research by Yu et al. (2021) has shown that the corporate social responsibility (CSR) image of an organic food company may influence the consumption behavior and co-developing behavior of customers: it can effectively promote consumer trust, continuous purchase, and active engagement in the co-development of products and services [40].
Verification of Hypothesis 3: 'The assortment of organic milk and content of voluntary packaging information varies depending on the market'
The study confirms hypothesis 3: the range of organic milk and the content of voluntary information on the packaging differ depending on the organic production market in a given country (Germany, the Netherlands, Poland and Italy). The assortment of organic milks presented in Table 1 was the largest in Germany, followed by the Netherlands and Italy, and the smallest in Poland, which supports hypothesis 3. Figure 1 (see Results) shows the diversity of the label information in the four analysed countries. The wealth and diversity of information on milk packaging is greatest in Germany and the Netherlands, less in Italy and least in Poland.
It can be assumed that producers who put such information on organic milk are aware of consumer expectations, want to present their product in a transparent way and pay attention to the best possible presentation of the attributes of their organic production. Such information is very sparse on the milk available in Poland. This can be explained by the fact that in Poland the organic market is in an early growth phase. Only about 30% of Polish organic consumers admitted that they check the presence of eco-labels on products, and one of the barriers to buying organic food is a lack of trust in the certificate [10].
Of the product-related criteria, the group of information related to quality certification was the most numerous. Publications on this topic state that German consumers have more trust in national organic labels than in the EU Eco-Label [41]. Perhaps this is why the number of labels of organic farming associations, initiatives or companies on German products is so high. Research shows that if the label 'organic' is trusted by consumers, they can transfer the belief in high quality to other attributes of the product: taste and healthiness [34]. According to the cited research, Danish consumers have an even stronger tendency to infer good taste and wholesomeness from the label 'organic' than German consumers. This may be due to the fact that in Denmark the labelling system for organic products has been in place for a longer period of time and enjoys a very high degree of consumer confidence, even among consumers who do not buy such products regularly [42]. Among the product-related criteria, processing conditions appeared with lower frequency in the countries analysed. German products had the highest amount of such information, and in this country producers most often mention gentle or careful processing (18% of processing messages). Microfiltration is a relatively new processing method, and probably therefore microfiltered milk producers choose to explain exactly what the process is, the steps involved and the advantages.
Package information on the sensory and nutritional qualities of milk can be important for consumers focused on the personal benefits of eating organic food. In some markets, nutritional information may be more profitable for producers than other aspects of quality. For example, in Poland, consumers choose organic food for its good health effects and personal benefits, while environmental or ethical issues are of secondary importance [10,43]. In summary, hypothesis 3 was fully confirmed in the research. The richness and variety of voluntary information on milk packaging are greater the more developed the market for organic production in a country. The results are in line with objective figures showing the level of development of the organic market in the countries surveyed in 2019. Organic retail sales in 2019 (in million €) were highest in Germany (11,970), followed by Italy (3,625), the Netherlands (1,211) and Poland (314). Consumption of organic products per capita in 2019 (€/person/year) was highest in Germany (144.2), followed by the Netherlands (71.0), then Italy (59.8), and lowest in Poland (8.3) [27,43].
Verification of Hypothesis 4: 'Content of voluntary packaging information varies depending on the milk processing method (UHT, pasteurization, microfiltration)'
We can say that hypothesis 4 is fully confirmed. Figure 2 (see Results) shows the voluntary information on organic milk depending on the processing method. Overall, the least voluntary information was given for UHT milk, slightly more for microfiltered milk and the most for pasteurised milk. According to the authors, this is because pasteurisation is the oldest and most well-known way of processing milk, which makes it easier for producers to highlight the quality differentiators that consumers look out for and trust. Therefore, producers of pasteurised milk have the most extensive system of communication with consumers, presumably to differentiate their product from others.
Verification of Hypothesis 5: 'The predominant milk processing system in each country depends on the degree of development of the market for organic products'. Hypothesis 5 is not fully confirmed. The authors hypothesised that countries with a more highly developed organic agriculture are characterised by milk processing methods that are most beneficial from the point of view of the nutritional value of the product and its environmental impact. However, the analysis of the acquired data did not unequivocally confirm this hypothesis, as the results on how milk is processed in different countries can hardly be related to the degree of development of organic production in each country. Microfiltration is by far the best way to process milk, as it preserves its nutritional properties to the maximum extent [40]. However, in the Netherlands, for example, which is otherwise advanced in terms of the development of organic farming, this method is not often used: not a single type of milk processed in this way was found in the study. Here, pasteurisation dominates, as in Poland. At the same time, UHT milk is relatively popular in Germany and Italy, which is not beneficial for consumers, as the UHT method radically alters the composition of milk, lowering its natural nutritional value [43]. At the same time, these countries use microfiltration much more frequently than Poland. This result is consistent with the fact that this technology is relatively young and countries such as Poland have only recently applied it. This is related to the development of environmental awareness and technology: Poland started to introduce friendlier food processing methods later than the other countries represented in this study. As can be seen from the analysis described, the milk processing system only partially reflects the degree of development of the organic market in a country. It is likely that the predominant milk processing system in a country is strongly influenced by the habits of both producers and consumers, but further research would be needed to verify this.
Limitations and Recommendations
The authors of this study are aware of the limited scope of the research carried out and therefore of the limited possibilities for drawing conclusions.
First of all, the research concerns only 4 EU countries, which is a small sample in the context of the 28 EU member states. Due to the research capacities of the ProOrg project consortium, two central European countries were selected (Germany and Poland), one country closer to the north of the continent (the Netherlands) and one southern European country (Italy). The western part of the continent (Spain, Portugal) and Scandinavia are not represented. The EU is culturally very diverse, and this also applies to the issue of organic food consumption and consumer expectations in this respect. In addition, in each country the range of organic milk was examined in several shops in one city, which makes it difficult to generalise to the whole country. Finally, our results only refer to 2019, while major global and local changes followed: the COVID-19 pandemic and Russia's armed invasion of Ukraine. These events have changed the picture of the world and the European Union, probably also in terms of organic food consumption.
For the above reasons, the results obtained are preliminary and cannot be generalised to the entire European Union. They can mainly be applied to Central Europe, where most observations were made. Besides, our results have a preliminary diagnostic character and shed light on the issue of voluntary information on organic milk packaging. They provide a basis for the preliminary claim that this information is richer the more developed the market for organic production is in a given country. However, this claim needs to be confirmed in further research over more years, in more EU countries and in more cities/shops in each country.
The future direction of the research should also include an analysis of a larger number of different types of organic milk in the countries studied and an attempt to answer some unresolved questions, e.g., what factors influence the choice of specific processing methods for organic milk in each country. Another issue worth further investigation is the effectiveness of voluntary information on milk packaging in the context of consumers' purchasing decisions. If such information positively influences purchasing decisions, organic milk producers should aim for more extensive and suggestive information on packaging.
Conclusions
The study analysed voluntary information on organic milk packaging from four European markets in the context of organic food quality, i.e., Germany, the Netherlands, Italy and Poland. Voluntary information is an important way of communicating with consumers and can promote the product as valuable and thus encourage consumers to buy. The study confirmed most of the research hypotheses made at the outset.
The analysed organic milk available on the German, Dutch, Italian and Polish food markets had voluntary information on packaging regarding process and product quality attributes. Dutch milk, compared to the other countries, had the most voluntary information on animal (cow) welfare, product locality, environmental protection, quality confirmation, naturalness and nutritional value. German milk had the most information on sensory qualities and processing conditions. Milk available on the Italian market contained the most information on the social perspective, while Polish milk had only information on the environmental quality, nutritional value of the product and processing method. Thus, it can be concluded that the analysed milk from Germany, the Netherlands and Italy had mainly information on process-related organic quality criteria, while Polish milk had mainly product-related information.
At the same time, the amount of voluntary information on packaging was lowest in Poland, slightly higher in Italy, and highest in Germany and the Netherlands. This corresponds to the degree of development of the organic market and the range of organic milk in the respective country. In the German market, which is the most developed, the highest number of products was found, with 37 milks. In the intermediately developed markets, the number of products was 27 (the Netherlands) and 28 (Italy). In Poland, the least developed organic market, only 14 products were found.
The content of the voluntary information on the packaging differed according to the milk processing method. Compared to other processing methods, pasteurised organic milk had clearly more information on quality confirmation, animal welfare, environmental protection, naturalness, pleasantness, product locality and nutritional value. Organic microfiltered milk had the most information on processing conditions. Organic UHT milk had the most information about the social perspective, while other attributes of organic production such as nutritional value and processing conditions were noticeably less observable.
In Germany, the Netherlands and Italy, the predominant packaging type for organic milk was multi-layered packaging, while in Poland it was the plastic bottle.
Based on the above statements, it can be concluded that in countries with a more developed market for organic products, producers inform consumers more extensively about the environmental and nutritional qualities of organic milk than in countries with a less developed organic market. The implication is that producers in the latter countries could learn a lot from their more experienced colleagues in terms of consumer communication. By providing information about the characteristics of their products, organic producers can build their brand, demonstrate Corporate Social Responsibility and, through this, take an active part in creating the conditions for sustainable social and economic development and increase the competitiveness of their company.
Another conclusion concerns the way organic milk is processed: a large proportion is still processed using the UHT method, which, according to scientific studies, leads to unfavourable changes in the composition of the milk and alters its natural qualities. Education on this subject should be spread among organic producers in all countries. At the same time, it should be borne in mind that the best method of processing milk is still under debate among scientists. Providing producers and consumers with a clear characterisation of the different processing methods would help to improve the nutritional value of the milk on offer and to build up clear content on organic milk packaging.
Finally, based on the data collected, it can be concluded that the nature of organic milk packaging in the countries studied raises a number of environmental and health concerns. Both the multi-layered packaging and the plastic bottle, which dominate as current packaging, are controversial from an environmental point of view, as they contribute to littering and pollution. There should be a push to change the packaging of organic milk towards safe biodegradable or light glass packaging. The organic sector should point the way towards packaging that is most beneficial to the environment and the health of the public.
Figure 1. Average number of packaging messages per product in the analysed countries, divided into criteria.
Figure 2. Content of voluntary information on organic milks processed by pasteurization, microfiltration and ultra-high temperature processing (UHT) (total number of messages; all countries together).
Table 1. The assortment of analysed organic milk products in different countries.
Table 2. Process-related packaging voluntary information on organic milks.
Table 3. Product-related packaging voluntary information on organic milks (table shows total numbers of messages per country in each criteria and sub-criteria category). | 2022-12-21T16:07:47.913Z | 2022-12-16T00:00:00.000 | {
"year": 2022,
"sha1": "d2b9189bb35ebdf0c682b664bd6229fb01a9b5b9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/14/24/16901/pdf?version=1671186557",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9423330bdf0ed10f8ca899d9a677f76e7feb3206",
"s2fieldsofstudy": [
"Environmental Science",
"Business"
],
"extfieldsofstudy": []
} |
228833439 | pes2o/s2orc | v3-fos-license | Joint Nonnegative Matrix Factorization Based on Sparse and Graph Laplacian Regularization for Clustering and Co-Differential Expression Genes Analysis
Introduction
With the development of state-of-the-art sequencing technology, a large quantity of effective experimental data has been collected. These data may imply some unknown molecular mechanisms. Bioinformatics is faced with the task of analyzing massive omics data.
The Cancer Genome Atlas (TCGA, https://tcgadata.nci.nih.gov/tcga/) includes gene expression profile data (GE), DNA methylation data (DM), copy number variation data (CNV), protein expression data, and drug sensitivity data. These data are from approximately 15,000 clinical samples of more than 30 kinds of cancers [1]. These massive data enable researchers to study the mechanisms of cancer production, diagnosis, and treatment at different biological levels.
The joint analysis of multiomics data can make up for lost or unreliable information in single omics data. In recent years, scientists have performed considerable research on cancer mechanisms based on the joint analysis of cancer multiomics data. For example, Christina et al. integrated the gene expression data and copy number variations of breast cancer, identified possible pathogenic genes, and discovered new subtypes of breast cancer [2]. Wang and Wang used similarity network fusion to jointly analyze mRNA, DM, and microRNA (miRNA) data and further identify cancer subtypes [3]. Among the existing joint analysis methods, those based on matrix decomposition are remarkable. Liu et al. integrated mRNA, somatic cell mutation, DNA methylation, and copy number variation data.
They established a block constraint-based RPCA model to identify differentially expressed genes (DEGs) [4]. Integration and analysis of these heterogeneous multiomics data provide an in-depth understanding of the pathogenesis of cancer and promote the development of precision medicine. Recently, unsupervised integrative methods based on matrix decomposition have attracted considerable attention among the existing methods for integrating and analyzing multiomics data. Zhang et al. constructed a joint matrix factorization framework (jNMF) to discover multidimensional modules of genomic data [5]. Yang and Michailidis introduced a new method named integrative NMF (iNMF) for heterogeneous multiomics data [6]. Strazar et al. incorporated orthogonality regularization into iNMF (iONMF) to integrate and analyze multiple data sources [7]. Joint nonnegative matrix decomposition meta-analysis (jNMFMA) [8], multiomics factor analysis (MOFA) [9], and Bayesian joint analysis [10] have been successfully applied to the integration and analysis of cancer omics data. To avoid the influence of redundant information, many sparse modeling methods have been proposed. Typical applications are as follows: the weighted sparse representation classifier (WSRC) model combined with global encoding (GE) [11] was used to predict interactions between proteins based on protein sequence information.
The network-regularized sparse logistic regression model (NSLR) [12] was used to predict survival risk and discover biomarkers. Sparse co-regularized matrix decomposition was used to find mutant driver genes, and so on [13].
In recent years, graph/network-based analysis, as a powerful data representation tool, has been applied to the modeling and analysis of complex systems [14][15][16][17]. In general, entities can be regarded as nodes, and the interactions between entities can be regarded as edges in the graph. Graph-based approaches can explore the local subspace structure and obtain a low-dimensional representation of high-dimensional data. Zhang and Ma proposed a graph-based subspace clustering algorithm to detect common modules highly correlated with cancer by jointly analyzing gene expression and protein interaction networks [18]. Mixed-norm Laplacian regularized low-rank representation (MLLRR) was used to cluster samples [19]. Cui proposed an improved graph-based method to predict drug-target interactions [20]. Liu et al. reviewed the contributions of deep neural networks, deep graph embedding, and graph neural networks, along with the opportunities and challenges they face [21]. Wu et al. proposed a multigraph learning algorithm called gMGFL that searches for and chooses a group of decision subgraphs as features to transfer bags and bag labels to the instance level [22].
Recently, sparse regularization has played a very important role in data analysis. The $L_0$-norm, $L_1$-norm, $L_{2,1}$-norm, etc. are all typical sparse regularization methods. Among these many sparse constraints, $L_{2,1}$-norm regularization stands out in terms of computational time and performance. The $L_{2,1}$-norm can obtain a projection matrix that is sparse in rows to learn discriminative features in the subspace. Zhang used the $L_{2,1}$-norm constraint on the coefficients to ensure that they are sparse in rows [23]. The $L_{2,1}$-norm was applied to the predictor to ensure that it is robust to noise and outliers [24].
Considering the role of graph regularization and $L_{2,1}$-norm constraints in matrix factorization, we propose joint nonnegative matrix factorization based on sparse and graph Laplacian regularization (SG-jNMF). SG-jNMF can make the best of the potential associations and complementary information among multiomics data. The main highlights of this approach are as follows.
(1) Graph regularization is incorporated into the joint nonnegative matrix factorization model, and undirected graphs are constructed for the input data in this method. Local graph regularization can preserve the local geometrical structure of the data space. Therefore, SG-jNMF can use the low-dimensional characteristics of the observed data to find intrinsic laws and improve the performance of the integrated analysis method.
(2) $L_{2,1}$-norm regularization can deal with each row of the matrix as a whole and can enhance the sparsity among the rows. Therefore, involving the $L_{2,1}$-norm can remove redundant features and noise in the data and further explore a clear cluster structure. (3) Two forms of SG-jNMF are proposed. SG-jNMF1 projects multiomics data into a fused feature space. The fusion matrix contains complementary and differential information provided by multiomics data, so that more accurate results can be obtained when identifying Co-DEGs. SG-jNMF2 projects multiomics data into a common sample space, which results in more accurate clustering results.
The rest of this paper is arranged as follows: In Section 2, we start with a brief review of jNMF. Next, we introduce the SG-jNMF method, the optimization process, and a computational complexity analysis. Section 3 presents the experimental results of clustering and feature selection. Finally, we summarize the whole paper and give some suggestions for future work in Section 4.
Joint Nonnegative Matrix Factorization.
The jNMF method was first proposed by Zhang et al. [5]. It can project multiple input data matrices into a common subspace, to integrate the information of each input dataset for analysis. Each type of genomic data as original data can be denoted as $X_I \in \mathbb{R}^{M \times N}$ ($I = 1, 2, 3, \ldots$). $W \in \mathbb{R}^{M \times K}$ is the common basis matrix, and $H_I \in \mathbb{R}^{K \times N}$ is the corresponding coefficient matrix. The objective function of jNMF can be written as

$$\min_{W, H_I} \sum_{I=1}^{P} \left\| X_I - W H_I \right\|_F^2, \quad \text{s.t. } W \geq 0, \; H_I \geq 0. \qquad (1)$$

Obviously, jNMF is the same as NMF when $P = 1$. Therefore, jNMF is the generalization model of NMF for multiple input datasets. Similar to NMF, multiplicative update rules are used to minimize the objective function. $W$ and $H_I$ are iteratively updated according to the following rules:

$$W \leftarrow W \circ \frac{\sum_{I} X_I H_I^T}{W \sum_{I} H_I H_I^T}, \qquad (3)$$

$$H_I \leftarrow H_I \circ \frac{W^T X_I}{W^T W H_I}. \qquad (4)$$

The jNMF method can be used to integrate and analyze multiomics data. It decomposes multiomics data matrices into multiple independent coefficient matrices and a common fusion matrix at the same time and projects high-dimensional omics data into low-dimensional spaces. Therefore, the abundant differential and complementary information of cancer multiomics data can be efficiently used, and multiomics datasets are analyzed simultaneously to obtain hidden information with biological significance.
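To make the update rules concrete, the following is a minimal NumPy sketch of jNMF written for this text; it is not the original authors' implementation, and the small constant `eps` added to the denominators to avoid division by zero is an implementation detail of this sketch.

```python
import numpy as np

def jnmf(Xs, K, n_iter=200, eps=1e-10, seed=0):
    """Joint NMF: factor each X_I (M x N_I) as W @ H_I with one shared W."""
    rng = np.random.default_rng(seed)
    W = rng.random((Xs[0].shape[0], K))
    Hs = [rng.random((K, X.shape[1])) for X in Xs]
    for _ in range(n_iter):
        # Rule (3): the shared basis W pools statistics over all datasets.
        W *= sum(X @ H.T for X, H in zip(Xs, Hs)) / \
             (W @ sum(H @ H.T for H in Hs) + eps)
        # Rule (4): each coefficient matrix H_I is updated independently.
        Hs = [H * (W.T @ X) / (W.T @ W @ H + eps) for X, H in zip(Xs, Hs)]
    return W, Hs

# Toy example: two omics layers sharing the same 50 features.
rng = np.random.default_rng(1)
X1, X2 = rng.random((50, 30)), rng.random((50, 40))
W, (H1, H2) = jnmf([X1, X2], K=5)
print(np.linalg.norm(X1 - W @ H1), np.linalg.norm(X2 - W @ H2))
```

The shared $W$ is what couples the datasets: its update pools numerator and denominator statistics across all $X_I$, which is exactly what lets jNMF fuse information from multiple omics layers.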
Joint Nonnegative Matrix Factorization Based on Sparse and Graph Laplacian Regularization.
Manifold learning has become a popular research topic in the domain of information science since it was first proposed in Science in 2000 [25,26]. Assuming that the data are uniformly sampled from a low-dimensional manifold embedded in a high-dimensional space, manifold learning can find the low-dimensional structure in the high-dimensional space and obtain the corresponding embedding mapping. Manifold learning looks for the essence of things from observed phenomena and finds the internal laws of data. The manifold assumption states that data points that are geometrically adjacent usually have similar characteristics. Therefore, an undirected graph $G$ can be constructed over the data points, with a weight matrix $U$ encoding the local neighborhood relations. The graph regularization with $G$ is as follows:

$$R_k = \frac{1}{2} \sum_{i,j} \left\| h_i - h_j \right\|^2 U_{i,j} = \mathrm{Tr}\left( H L H^T \right),$$

where $\mathrm{Tr}(\cdot)$ is the trace of the matrix, $L$ is the graph Laplacian matrix, and $L = D - U$. $D$ is a diagonal matrix and $D_{i,i} = \sum_{j} U_{i,j}$. Intuitively, the smaller the $R_k$ value is, the closer the low-dimensional representations of adjacent data points are. By minimizing $R_k$, we can obtain a sufficiently smooth mapping function on the data manifold.
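As an illustration of how such a graph could be built, the sketch below constructs a symmetric K-nearest-neighbor weight matrix $U$ and the Laplacian $L = D - U$. The simple 0/1 edge weighting is an assumption of this sketch; the weighting scheme is not spelled out here.

```python
import numpy as np

def knn_laplacian(X, k=3):
    """Build a symmetric KNN weight matrix U and Laplacian L = D - U.

    X: data matrix with one column per data point (distances are taken
    between columns). The 0/1 weighting is an illustrative choice.
    """
    n = X.shape[1]
    d = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)  # pairwise distances
    U = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d[i])[1:k + 1]:  # skip the point itself
            U[i, j] = U[j, i] = 1.0          # symmetrize the neighborhood relation
    D = np.diag(U.sum(axis=1))
    return D - U, U

X = np.random.rand(20, 10)  # 20 features, 10 data points
L, U = knn_laplacian(X, k=3)
print(np.allclose(L.sum(axis=1), 0))  # rows of a graph Laplacian sum to zero
```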
To decrease the influence of noise and outliers on real data, sparse regularization is usually used to penalize the coefficient matrix. The $L_0$-norm, $L_1$-norm, and $L_{2,1}$-norm are all typical sparse regularization methods. The solution of the $L_0$-norm is an NP-hard problem. The $L_1$-norm is widely used because it has better optimization characteristics than the $L_0$-norm. The $L_1$-norm tends to retain a small number of features, while the other features are all 0; therefore, it can be used for feature selection. However, $L_1$-norm regularization is usually time-consuming. $L_{2,1}$-norm regularization on the coefficient matrix can generate a row-sparse result, and the calculation of the $L_{2,1}$-norm is simple and convenient [23]. In this article, the $L_{2,1}$ penalty is incorporated in SG-jNMF [27]. The $L_{2,1}$-norm of a matrix $Z \in \mathbb{R}^{m \times n}$ is defined as

$$\| Z \|_{2,1} = \sum_{i=1}^{m} \sqrt{\sum_{j=1}^{n} Z_{i,j}^2}.$$

2.2.1. SG-jNMF1. There are two forms of the SG-jNMF method in this article. As shown in Figure 1, the SG-jNMF1 method projects multiomics data into a common feature space. Graph regularization and a sparse penalty are applied to the fusion feature matrix. The feature matrix is constrained by graph regularization, and as much intrinsic geometric information of the original multiomics data is preserved as possible. The $L_{2,1}$-norm is used to constrain the feature matrix to reduce the influence of outliers and noise, and the objective function of integrative nonnegative matrix factorization is constructed. The optimization problem can be expressed as

$$\min_{W, H_I} \sum_{I=1}^{P} \left( \left\| X_I - W H_I \right\|_F^2 + \lambda_I \mathrm{Tr}\left( W^T L_{I1} W \right) \right) + \beta \| W \|_{2,1}, \quad \text{s.t. } W \geq 0, \; H_I \geq 0,$$

where $L_{I1}$ is the Laplacian matrix, $L_{I1} = D_{I1} - U_{I1}$, where $U_{I1}$ is a symmetric matrix, which is the weight matrix constructed in graph regularization. $D_{I1}$ is a diagonal matrix, and its diagonal elements are equal to the sum of the corresponding row elements or the sum of the column elements of the matrix; i.e., $D_{I1,ii} = \sum_{j=1}^{n} U_{I1,ij}$. With randomly initialized positive matrices $W$ and $H_I$, the following update rules are executed until the algorithm converges:

$$W \leftarrow W \circ \frac{\sum_{I} \left( X_I H_I^T + \lambda_I U_{I1} W \right)}{W \sum_{I} H_I H_I^T + \sum_{I} \lambda_I D_{I1} W + \beta Q W},$$

$$H_I \leftarrow H_I \circ \frac{W^T X_I}{W^T W H_I},$$

where $Q$ is a diagonal matrix, the diagonal element is $Q_{ii} = 1 / \left( 2 \sqrt{\sum_{j} W_{ij}^2 + \varepsilon} \right)$, and $\varepsilon$ is an infinitesimal positive number.
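A sketch of one round of these updates follows. The split of the graph term into a $U$ part (numerator) and a $D$ part (denominator) follows the usual convention for graph-regularized NMF, and $Q$ is rebuilt from the current $W$ at every step; treat this as an illustrative sketch of the reconstructed rules above, not a reference implementation.

```python
import numpy as np

def sg_jnmf1_step(Xs, graphs, W, Hs, lam, beta, eps=1e-10):
    """One multiplicative update of SG-jNMF1 (sketch of the rules above).

    Xs:     list of omics matrices X_I (M x N_I)
    graphs: list of (U_I1, D_I1) pairs from the feature graph of each layer
    lam:    list of graph regularization weights lambda_I
    beta:   weight of the L2,1 sparsity penalty on the common feature matrix W
    """
    # Q encodes the subgradient of ||W||_{2,1}: Q_ii = 1 / (2 ||w_i||_2).
    Q = np.diag(1.0 / (2.0 * np.sqrt((W ** 2).sum(axis=1) + eps)))
    num = sum(X @ H.T for X, H in zip(Xs, Hs)) \
        + sum(l * (U @ W) for l, (U, _) in zip(lam, graphs))
    den = W @ sum(H @ H.T for H in Hs) \
        + sum(l * (D @ W) for l, (_, D) in zip(lam, graphs)) \
        + beta * (Q @ W) + eps
    W = W * num / den
    Hs = [H * (W.T @ X) / (W.T @ W @ H + eps) for X, H in zip(Xs, Hs)]
    return W, Hs

# Tiny demo with two synthetic omics layers sharing 30 features.
rng = np.random.default_rng(0)
X1, X2 = rng.random((30, 20)), rng.random((30, 25))
U = rng.random((30, 30)); U = (U + U.T) / 2; D = np.diag(U.sum(axis=1))
W = rng.random((30, 4)); Hs = [rng.random((4, 20)), rng.random((4, 25))]
for _ in range(50):
    W, Hs = sg_jnmf1_step([X1, X2], [(U, D), (U, D)], W, Hs,
                          lam=[0.1, 0.1], beta=1.0)
print(np.linalg.norm(X1 - W @ Hs[0]))
```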
SG-jNMF2.
As seen from Figure 1, the SG-jNMF2 method projects multiomics data into a common sample space. Constraints are enforced on the common sample matrix. This method can be used to cluster multiomics data. With the input matrices arranged so that the rows of the common matrix $W$ correspond to samples, the model can be shown by the following expression:

$$\min_{W, H_I} \sum_{I=1}^{P} \left( \left\| X_I - W H_I \right\|_F^2 + \lambda_I \mathrm{Tr}\left( W^T L_{I2} W \right) \right) + \beta \left\| W^T \right\|_{2,1}, \quad \text{s.t. } W \geq 0, \; H_I \geq 0.$$

Similarly, the algorithm iterates until it converges according to the following rules:

$$W \leftarrow W \circ \frac{\sum_{I} \left( X_I H_I^T + \lambda_I U_{I2} W \right)}{W \sum_{I} H_I H_I^T + \sum_{I} \lambda_I D_{I2} W + \beta W B},$$

$$H_I \leftarrow H_I \circ \frac{W^T X_I}{W^T W H_I},$$

where $L_{I2}$ is the Laplacian matrix, $L_{I2} = D_{I2} - U_{I2}$, where $U_{I2}$ is a symmetric matrix, which is the weight matrix constructed in graph regularization. $D_{I2}$ is a diagonal matrix, and its diagonal elements are equal to the sum of the corresponding row elements or the sum of the column elements of the matrix; i.e., $D_{I2,ii} = \sum_{j=1}^{n} U_{I2,ij}$. $B$ is a diagonal matrix, and the diagonal element is $B_{jj} = 1 / \sqrt{\sum_{i=1}^{m} W_{ij}^2 + \varepsilon}$. Obviously, the objective functions of the two kinds of SG-jNMF method are both nonconvex. We can obtain the optimal solutions by minimizing the objective functions. The optimization process is shown as follows.
Optimization of SG-jNMF.
Since the optimization processes of the two forms of the SG-jNMF method are very similar, we only provide that of the first method. We use multivariable alternating update rules to solve the optimization problem. Specifically, the following update steps are repeated until the algorithm converges.
Optimization of W.
When $H_I$ is fixed, the optimization of $W$ is performed by minimizing the following objective function:

$$O(W) = \sum_{I=1}^{P} \left( \left\| X_I - W H_I \right\|_F^2 + \lambda_I \mathrm{Tr}\left( W^T L_{I1} W \right) \right) + \beta \| W \|_{2,1}.$$

The corresponding Lagrangian function is as follows:

$$\mathcal{L} = O(W) + \mathrm{Tr}\left( \Phi W^T \right) + \sum_{I} \mathrm{Tr}\left( \Psi_I H_I^T \right),$$

where $\Phi = [\phi_{il}]$ and $\Psi = [\psi_{Ia}]$ are the Lagrangian multipliers of $W$ and $H_I$, respectively. Next, we take the first partial derivative of this Lagrangian function with respect to $W$:

$$\frac{\partial \mathcal{L}}{\partial W} = -2 \sum_{I} X_I H_I^T + 2 W \sum_{I} H_I H_I^T + 2 \sum_{I} \lambda_I L_{I1} W + 2 \beta Q W + \Phi.$$

According to the KKT conditions [28], with $\phi_{il} W_{il} = 0$, the following updating rule can be obtained:

$$W_{il} \leftarrow W_{il} \frac{\left( \sum_{I} \left( X_I H_I^T + \lambda_I U_{I1} W \right) \right)_{il}}{\left( W \sum_{I} H_I H_I^T + \sum_{I} \lambda_I D_{I1} W + \beta Q W \right)_{il}}.$$

Similarly, when $W$ is fixed, the corresponding Lagrangian function is differentiated with respect to $H_I$, and $H_I$ runs to convergence according to the following formula:

$$H_{I,aj} \leftarrow H_{I,aj} \frac{\left( W^T X_I \right)_{aj}}{\left( W^T W H_I \right)_{aj}}.$$
Convergence and Running Time.
In this paper, we also demonstrate the convergence of the method through experiments. Taking the pancreatic adenocarcinoma (PAAD) dataset as an example, the convergence of the five methods is shown in Figure 2. The error function used in this article is defined as follows:

$$\mathrm{Error} = \sum_{I=1}^{P} \left\| X_I - W H_I \right\|_F^2.$$

Compared with the other four methods, SG-jNMF can converge to the smallest error value with the fastest speed.
Besides, we also tested the running times of the above methods on the PAAD dataset. The mean running times of these five methods over 10 runs on a PC are shown in Table 1. As seen in Table 1, iGMFNA has the shortest running time, followed by SG-jNMF. This is due to the introduction of sparse constraints in SG-jNMF. The running times of the iNMF, iGMFNA, jNMF, and SG-jNMF methods are satisfactory.
Computational Complexity Analysis.
In this part, we discuss the extra computational complexity of SG-jNMF compared to jNMF. We use the big-O symbol to represent the computational complexity of the algorithm. On the basis of the updating rules (3) and (4), we can easily count the arithmetic operations of each iteration in jNMF. Obviously, the cost for each iteration in jNMF is $O(MNK)$. It should be noted that $U_I$ is a sparse matrix for SG-jNMF. In addition to the multiplicative updates, constructing a K-nearest neighbor graph requires $O(N^2 M)$ operations [28]. Assume that the update stops after $t$ iterations; then the overall cost for jNMF is $O(tMNK)$. The overall cost for SG-jNMF is $O(N^2 M) + O(tPMNK)$.
Results and Discussion
3.1. Data Processing. The TCGA project includes a large amount of gene expression profile data, DNA methylation data, copy number variation data, protein expression data, drug sensitivity data, and so on. In-depth study of these data can help us to understand the mechanisms of cancer occurrence and development and provide technical support for the prevention, diagnosis, and treatment of cancer. In this article, four cancer datasets, all downloaded from TCGA (https://tcgadata.nci.nih.gov/tcga/), namely, PAAD, esophageal carcinoma (ESCA), cholangiocarcinoma (CHOL), and colon adenocarcinoma (COAD), are used in these experiments. Details are listed in Table 2. To avoid matrix dimension problems in algorithm execution, the number of genes in the four datasets is aligned to 19,876. First, RPCA is used to reduce the effects of noise and redundant information [29]. Second, the same number of samples and characteristics is retained for multiomics data of the same kind of cancer. Then, the matrices are normalized according to the standard deviation of the data such that each element of the matrix lies between 0 and 1.
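The normalization step is described only loosely ("according to the standard deviation ... between 0 and 1"), so the snippet below shows one plausible reading, a per-gene min-max scaling to the unit interval; the exact formula used here may differ, and the RPCA denoising step is omitted from this sketch.

```python
import numpy as np

def scale_unit_interval(X):
    """Map each row (gene) of X to [0, 1].

    A plain per-gene min-max scaling, used here as one plausible reading of
    the normalization described in the text; the tiny constant guards
    against constant rows.
    """
    lo = X.min(axis=1, keepdims=True)
    hi = X.max(axis=1, keepdims=True)
    return (X - lo) / (hi - lo + 1e-12)

X = np.random.randn(19876, 50)  # e.g., 19,876 aligned genes x 50 samples
Xn = scale_unit_interval(X)
print(Xn.min(), Xn.max())        # all values now lie in [0, 1]
```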
Clustering.
The SG-jNMF2 method projects multiomics data into a common sample space, which contains all the sample information provided by the input multiomics data. To assess the clustering performance of this method, SG-jNMF2 is used to cluster the tumor samples in the CHOL, PAAD, COAD, and ESCA datasets. Four other methods (iNMF, iONMF, iGMFNA, and jNMF) perform the same experiments on the same datasets.
Selection of Parameters.
For SG-jNMF2, clustering performance is affected by the regularization parameters. In this experiment, we empirically set the same value for $\lambda_I$ across the different omics data from the same cancer [30]. Therefore, there are three parameters, $\lambda$, $\beta$, and $K$, that need to be adjusted. $\lambda$ is the graph regularization parameter, $\beta$ controls the sparsity of the factorization, and $K$ is the number of neighbors used to construct the undirected graph on the manifold. From Figure 3, when $K$ is set to 3, the accuracy on the four datasets reaches a maximum. As seen from Figure 4, $\lambda$ should be set to 1,000 on PAAD. When $\lambda$ is equal to 0.1, the accuracy on COAD achieves its maximum. When $\lambda$ is equal to $10^{-3}$, $10^{-5}$, and 1, the accuracy on CHOL achieves its maximum. When $\lambda$ is equal to $10^{4}$, the accuracy on ESCA achieves its maximum. From Figure 5, when $\beta$ is set from $10^{-5}$ to $10^{1}$ for PAAD, the accuracy reaches the maximum. For ESCA and COAD, $\beta$ should be set from $10^{4}$ to $10^{5}$. For CHOL, the value of $\beta$ does not matter much.
Evaluation Indicators.
Several indicators are used to evaluate the clustering performance of SG-jNMF2: accuracy, recall, precision, and F1-score. Accuracy is defined as

$$\mathrm{ACC} = \frac{\sum_{j=1}^{N} \delta\left( s_j, \mathrm{map}\left( r_j \right) \right)}{N},$$

where $N$ is the total number of samples in the dataset and $\delta(x, y)$ is an indicator function: when $x$ is equal to $y$, the value of the function is equal to 1; otherwise, it is equal to 0. $\mathrm{map}(r_j)$ maps the clustering label $r_j$ to the real label $s_j$. The other three indicators used to evaluate clustering performance are defined as follows:

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \quad \mathrm{Recall} = \frac{TP}{TP + FN}, \quad \mathrm{F1} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},$$
where TP means the number of true positives, FP is the number of false positives, and FN denotes the number of false negatives.
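For reference, $\mathrm{map}(\cdot)$ is commonly realized with an optimal one-to-one label assignment. The sketch below uses the Hungarian algorithm (SciPy's `linear_sum_assignment`) for this step, which is a standard choice, although the paper does not state which implementation it uses.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true_labels, cluster_labels):
    """ACC with map(.) realized by the Hungarian algorithm."""
    true_labels = np.asarray(true_labels)
    cluster_labels = np.asarray(cluster_labels)
    k = max(true_labels.max(), cluster_labels.max()) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, c in zip(true_labels, cluster_labels):
        cost[c, t] += 1                      # co-occurrence counts
    row, col = linear_sum_assignment(-cost)  # maximize matched samples
    mapping = dict(zip(row, col))
    mapped = np.array([mapping[c] for c in cluster_labels])
    return (mapped == true_labels).mean()

print(clustering_accuracy([0, 0, 1, 1, 2], [1, 1, 0, 0, 2]))  # prints 1.0
```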
Results.
In this experiment, each algorithm was run fifty times to reduce the impact of random initialization on the clustering results. We compared the accuracy, recall, precision, and F1-score of the four methods with SG-jNMF2. The mean and variance of the results are shown in Table 3. As seen in Table 3, SG-jNMF2 achieves the highest values on the four indicators mentioned above, except the recall value on the ESCA dataset. The contributions of the sparse and graph regularization constraints of the algorithm are listed in Table 4. Performance improvements are measured by $\Delta_{ind} = (Ind_i - Ind_j)/Ind_j$, where $Ind_i$ is the indicator of SG-jNMF and $Ind_j$ is that of the comparison method. In particular, sparse constraints improve accuracy by 49.70%, and sparse and graph regularization constraints together improve accuracy by 78.87% on the PAAD dataset. Recall and F1-score achieve more than 50% improvement on the CHOL dataset. When sparse constraints are introduced, only the recall on ESCA is reduced, by 0.53%. The results on other datasets have also improved to varying degrees. In summary, the performance of integrative NMF in analyzing multiomics data greatly improves by introducing sparse constraints and graph regularization constraints.
Identifying Co-DEGs.
First, three matrices (DM, GE, and CNV of PAAD) are input into the SG-jNMF1 model and projected into a common feature space. Second, we sum the common feature matrix over its rows. Finally, we sort the elements of the resulting sum vector in descending order. The top 100 genes are selected as Co-DEGs. These 100 genes are compared with pancreatic cancer genes exported from GeneCards (URL: http://www.genecards.org). Co-DEGs with relevance scores above 4 are listed in Table 5. CDKN2A is frequently mutated or deleted in many tumors and plays an important role as a tumor suppressor gene. Studies have shown that mutation of CDKN2A is closely related to the development of familial pancreatic cancer [31]. Mutation and overexpression of CCND1, frequently seen in many tumors, can alter the progression of the cell cycle. Wang et al. identified pancreatitis-associated genes and found that CCND1 was involved in the pancreatic cancer pathway [32]. Research on transcriptome sequencing shows that PTF1A maintains the expression of genes in all cellular processes; deletion of PTF1A leads to imbalance, cell damage, and acinar metaplasia, which is directly related to the development of pancreatic cancer [33]. Scientists have also explored the effects of GRP on human intestinal and pancreatic peptides. Therefore, SG-jNMF1 can effectively integrate the information of multiomics data to identify Co-DEGs closely related to the disease.
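The row-sum-and-rank selection just described is easy to express in a few lines of Python. The sketch below is our illustration; the common feature matrix `W` and the gene name list are assumed to come from a prior SG-jNMF1 run:

    import numpy as np

    def top_co_degs(W, gene_names, n_top=100):
        # W: common feature matrix (genes x factors) from SG-jNMF1.
        scores = W.sum(axis=1)            # sum over rows
        order = np.argsort(scores)[::-1]  # descending order
        return [gene_names[i] for i in order[:n_top]]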
We also use SG-jNMF1 to integrate three gene expression datasets from ESCA, CHOL, and COAD to identify Co-DEGs associated with all three diseases. Partial Co-DEGs and their relevance scores with ESCA, CHOL, and COAD are shown in Table 6. The relevance score of CHEK2 with ESCA is as high as 77.66. Allelic variation in CHEK2 has a strong relationship with the risk of esophageal cancer [34]. The relevance score of CHEK2 with COAD is 29.65; germline variation in CHEK2 is also closely related to the risk of colon cancer [35]. Frequent mutations in BRCA have been widely reported in human malignancies, including esophageal cancer, cholangiocarcinoma, and colon cancer [36][37][38]. This provides a computational method for the study of Co-DEGs in multiple diseases.
Conclusions
In this paper, we propose an integrative matrix factorization method (SG-jNMF) for analyzing heterogeneous multiomics data. The novel method jointly projects multiomics data matrices into a common low-dimensional space. Two forms of SG-jNMF enable multiomics data to be analyzed from both the sample and feature perspectives. This integrative analysis method accounts for the local association structure of the data and decreases the interference of noise and redundant information in heterogeneous multiomics data. Experimental results show that the new method is superior to existing methods in analyzing heterogeneous multiomics data. Another significant advantage of SG-jNMF is that it can flexibly handle multiple input data of various types. This flexibility means that the input data can be different types of data (GE, ME, CNV, etc.) for the same disease or the same type of data for different diseases. We can use this method to identify Co-DEGs associated with a particular disease and to detect common Co-DEGs associated with several diseases.
This provides an efficient computational method for biological and medical research. Next, we will use the correlations between Co-DEGs to build a gene coexpression network and further study the functions of gene modules and related pathways.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2020-11-19T09:15:24.111Z | 2020-11-16T00:00:00.000 | {
"year": 2020,
"sha1": "2a18f6ebfd5c7f8dbb681f781a3905f24a8c8cdf",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/complexity/2020/3917812.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6418ba76ab892f62be061922b4b32ae9f398e418",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
139203786 | pes2o/s2orc | v3-fos-license | Simulation process of the heat transfer in multilayered structures
A mathematical model of heat transfer in a metal - expanded polystyrene - metal system is considered in the paper, taking into account the layering of the structure and the specificity of the heat sources that appear during the thermal destruction of expanded polystyrene. This specificity stems from the transition of expanded polystyrene from the solid to the gaseous state as the prototype is heated in a furnace following the temperature regime of a standard fire. The problem of non-stationary heat conduction is considered; the simulation accounts for the time of transition of expanded polystyrene into the gaseous phase and for the time of action of the additional heat sources, using a numerical scheme based on the finite difference method. The time of loss of the heat-insulating capacity of the three-layered wall system was determined, and a comparative analysis of the theoretical and experimental results for assessing fire resistance was carried out.
Introduction
Saving energy resources is today an urgent task of the construction industry. In this regard, new methods of energy saving are being developed, aimed at the rational use of economic resources. This concerns the initial stages of designing buildings and structures for various purposes, as well as the final stage of their construction.
One such technology is the production of multilayer enclosing structures, which aim to combine an appropriate level of heat and sound insulation, fire resistance, hygienic requirements, and mechanical strength at the lowest cost, while ensuring optimal installation and construction conditions. Such multilayer enclosing structures consist of a frame and fillers. The frame is made of profiled metal, OSB (Oriented Strand Board) or QSB (high-quality, high-strength wood chipboard) plates; the use of magnesite boards is also known. Polystyrene (hereinafter PS), mineral wool, or polyurethane foam is used as the filler.
Analysis of recent research and publications
Since experimental studies of fire resistance of such enclosing structures are quite lengthy and require significant financial costs, the actual task is to construct mathematical models of such processes in order to reduce the costs of these studies and extend the results to other materials and multi-layer structures.
The process of heating of the system under consideration is accompanied by complex interconnected processes of heat transfer between the PS and the surrounding medium, chemical and physical transformations in the investigated temperature range, including chemical destruction of polymers, their melting, and possibly combustion with loss of mass, changes in the density and physical structure of materials [1,2,3].
Attention should also be paid to a wide range of numerical values of thermophysical characteristics of materials in a multilayer structure, according to which materials are divided into heat conducting and heat-insulating materials [4,5]. All these materials are present in the investigated design simultaneously, which significantly complicates the construction of the model and the obtaining of numerical results.
For a theoretical study of heat transfer in a system, it is necessary to use a mathematical model, taking into account the layering of the system and the specifics of heat sources. The resulting mathematical model should provide a simple means of assessing the fire resistance of the system and be as simple as possible for practical use.
Given the specific geometric dimensions of the system, namely, large wall areas at relatively small thickness, and the action of a fire on one side over the full length and width, the heating can be assumed relatively uniform over the surface. Taking into account the complexity of the physical and chemical processes occurring inside the insulation, the transient processes are modelled using a mathematical model of non-stationary thermal conductivity. The processes are considered to depend on a single spatial coordinate across the thickness, with the layering of the system and the possible layer-wise dependence of the material characteristics, temperature, and heat flux taken into account.
It is necessary to solve the system of equations describing such a model by a numerical method, since it is inexpedient to change the solution method when the model is refined.
The aim of the study is to determine the time of loss of the thermal insulation capacity of a three-layer wall system consisting of two layers of profiled steel 0.5 mm thick and a layer of PS 100 mm thick.
Statement of the problem of thermal conductivity for the metal-PS-metal system
We consider the one-dimensional heat conduction problem for a three-layer plate, with which we simulate an actually existing multilayer panel consisting of metal-PS-metal (Fig. 1). We consider that, when a certain critical temperature is reached in the PS, chemical reactions and destruction begin to occur with the release of heat; that is, the process takes place in two stages.
The first stage lasts until the time moment τ_cr, when the temperature T_2 in the PS reaches the critical value T_cr.
The heat conduction equations and the initial conditions have the form [4,6]:

c_i ρ_i ∂T_i/∂τ = λ_i ∂²T_i/∂x², i = 1, 2, 3; T_i(x, 0) = T_0.

On the limiting surfaces, body-medium heat exchange takes place according to Newton's law:

−λ_1 ∂T_1/∂x |_(x=0) = α_1 (T_m1 − T_1), λ_3 ∂T_3/∂x |_(x=L) = α_2 (T_m2 − T_3).

Conditions of ideal thermal contact are imposed on the interface surfaces of the system:

T_i = T_(i+1), λ_i ∂T_i/∂x = λ_(i+1) ∂T_(i+1)/∂x. (5)

Here T_i is the temperature of the i-th layer of the system, and T_m1 and T_m2 are the temperatures of the medium in which the fire occurs and of the medium on the opposite side, respectively. At the second stage, for τ > τ_cr, we assume instantaneous conversion of the PS into a smoke-gas-air mixture, for which the right-hand side of the heat equation includes a heat source. Such an assumption reduces the estimated fire resistance time and is therefore acceptable.

That is, for τ > τ_cr, the heat equation in region 2 is written as

c_g ρ_g ∂T_2/∂τ = λ_g ∂²T_2/∂x² + q,

and, accordingly, the contact conditions (5) take the same form with λ_2 replaced by λ_g.

Here q is the power of the thermal sources in region 2, and the index g refers to quantities related to the smoke/gas/air mixture. We assume that the thermal sources operate throughout the time τ_q.
To determine the source q in the heat conduction equation, which characterizes the intensity of energy sources that are nonlinear with respect to temperature and is directly related to the rate of chemical reactions, it is necessary to model the kinetics of the chemical reactions that occur in the polymer quite accurately. The rate of these reactions is significantly influenced by heat fluxes and temperature. The quantitative calculation of the effect of heat fluxes and temperature on the rate of chemical reactions in polymers remains an incompletely solved scientific problem [1,2,7,8,9]. Therefore, in this paper we use a simplified empirical model that takes into account only the power and time of action of the thermal sources once a certain critical temperature is reached.
Numerical scheme for solving the heat conduction problem for a metal-PS-metal system
The construction of a numerical scheme based on the finite difference method [10,11] is carried out as follows. We assume that the solution of the problem exists, is unique, and is sufficiently smooth for approximation. The region of continuous variation of the coordinate x across the plate thickness and of the time τ is replaced by a discrete set of nodes, and instead of functions of the continuous arguments x and τ we consider functions of discrete arguments defined at the grid nodes in coordinate and time. On the grid over the spatial variable, the nodal temperatures satisfy the systems of finite-difference relations (9). For τ > τ_cr, the corresponding modified equations, containing the source term, are satisfied instead of equations (9); relation (11) is the finite-difference approximation of the temperature conjugation condition (5).
Equations (14) specify the nodal values of the temperature at the initial instant of time.
In these approximations, the derivatives with respect to the spatial variable in the heat equation were formally replaced by second-order finite-difference relations and the time derivative by a first-order finite-difference relation, with asymmetric finite-difference relations used to approximate the one-sided derivatives in the boundary and contact conditions, so that the difference scheme has approximation order O(τ + h²). Based on this system of finite-difference equations, the calculation algorithm is written as an explicit scheme of the method of finite differences.
It should be noted that the correct choice of the time step τ and the spatial steps h_i is of great importance when solving the system of finite-difference equations. To improve the accuracy of the calculations, the steps in coordinate and time should be chosen sufficiently small; however, to avoid instability of the computations, it is sufficient to satisfy the standard stability condition of the explicit scheme [10,11]:

a_i Δτ / h_i² ≤ 1/2, where a_i = λ_i/(c_i ρ_i) is the thermal diffusivity of the i-th layer.
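For illustration, a minimal Python sketch of one explicit time step for such a layered wall is given below. It is our own simplification, assuming a uniform grid step h, piecewise-constant per-node properties, harmonic-mean conductivities at half-nodes to respect the contact conditions, and half-cell energy balances for the convective boundaries; it is not the authors' program, which was implemented in Maple:

    import numpy as np

    def explicit_step(T, lam, c, rho, h, dt, a1, Tm1, a2, Tm2, q=None):
        # One explicit finite-difference step for 1-D layered heat
        # conduction. T, lam, c, rho, q: per-node arrays; h: grid step;
        # dt: time step (must satisfy the stability condition above);
        # a1/a2, Tm1/Tm2: convective coefficients and medium
        # temperatures on the heated and unheated faces (Newton's law).
        Tn = T.copy()
        # Interior nodes: harmonic-mean conductivity at half-nodes
        # enforces flux continuity across the layer interfaces.
        lam_e = 2.0 * lam[1:-1] * lam[2:] / (lam[1:-1] + lam[2:])
        lam_w = 2.0 * lam[1:-1] * lam[:-2] / (lam[1:-1] + lam[:-2])
        dT = (lam_e * (T[2:] - T[1:-1]) - lam_w * (T[1:-1] - T[:-2])) / h**2
        if q is not None:
            dT = dT + q[1:-1]  # volumetric source, active for tau > tau_cr
        Tn[1:-1] = T[1:-1] + dt * dT / (c[1:-1] * rho[1:-1])
        # Boundary nodes: energy balance on a half-cell with convection.
        Tn[0] = T[0] + 2.0 * dt / (c[0] * rho[0] * h) * (
            lam[0] * (T[1] - T[0]) / h + a1 * (Tm1 - T[0]))
        Tn[-1] = T[-1] + 2.0 * dt / (c[-1] * rho[-1] * h) * (
            lam[-1] * (T[-2] - T[-1]) / h + a2 * (Tm2 - T[-1]))
        return Tn

    # Stability: dt <= 0.5 * h**2 / max(lam / (c * rho)).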
The study results
The developed algorithm is implemented as a program in the Maple package. It may be noted that the temperature change at the heated surface was set from experimental data and approximated by a linear spline in time in the program. We consider heat transfer in a system whose characteristics are presented in Table 1, with a prescribed heat transfer coefficient on the surface heated by the fire (hereinafter, medium 1). The results of calculating the change of the temperature field in the system with time are presented in Table 2.
The upper rows of the table show the values taking into account the heat release during the chemical reactions, and the lower rows show the values when the heat release is not accounted for. As can be seen from Table 2, accounting for the chemical reactions greatly affects the temperature change in the system. Using the obtained model, the heat problem was solved for the test sample (PS combustion) at T_cr = T_0 + 140°C = 18°C + 140°C = 158°C, where T_0 = 18°C is the initial temperature. The calculated time of loss of the heat-insulating capacity of the prototype was established as 3 minutes 42 seconds.
One may note the strong acceleration of the temperature change on the surface bordering the surrounding environment. This indicates the strong effect of heating on the chemical reactions that occur in the PS.
It is also possible to note the practical coincidence of the temperature values at the PS-metal and metal-medium boundaries, which can be explained by the very small thermal resistance of a thin metal layer with significant thermal conductivity.
Comparative analysis of the results of theoretical and experimental studies of the metal-PS-metal system
In this section, the calculated and experimental results for the metal-PS-metal multilayered enclosing structure are compared with those obtained in [12]. Averaged temperature values for two samples of the PSW (polystyrene wall) mark were taken for the comparative analysis. Fig. 2 shows the time dependence of the temperature change on the external unheated surface of the prototype according to the results of the experimental and theoretical studies. As can be seen (Fig. 2), the calculated temperature curve follows the experimental curve with small deviations. At the time of onset of the limit state of fire resistance by loss of thermal insulating ability, the discrepancy between the temperatures obtained from the experimental and theoretical studies is 10°C, an error of about 5%, which is acceptable for practical use.
Conclusions
The problem of the non-stationary heat transfer process in the PSW test sample was solved, and the unsteady temperature field across the thickness of the structure was quantitatively calculated using a scheme based on the explicit finite difference method.
Based on the results of the computational and experimental studies, it has been established that the time of loss of thermal insulation capacity for a prototype of the PSW mark is about 3 minutes 40 seconds. This is a low fire-resistance indicator and does not allow the use of such building structures in accordance with the requirements of the regulatory documents of Ukraine and other states. Obviously, the presence of such structures in buildings can create unfavorable conditions for the life and health of people and lead to great material damage in the event and development of a fire. It is therefore important to increase the fire resistance of such structures.
"year": 2018,
"sha1": "d8f34831ef9caf8d1da36f09d26dba70edf6ede1",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/106/matecconf_fese2018_00048.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "de77811c5644acd8951aaf51b361c08c268fe5a2",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
4708531 | pes2o/s2orc | v3-fos-license | Clinical Review of Antidiabetic Drugs: Implications for Type 2 Diabetes Mellitus Management
Type 2 diabetes mellitus (T2DM) is a global pandemic, as evident from the global cartographic picture of diabetes by the International Diabetes Federation (http://www.diabetesatlas.org/). Diabetes mellitus is a chronic, progressive, incompletely understood metabolic condition chiefly characterized by hyperglycemia. Impaired insulin secretion, resistance to tissue actions of insulin, or a combination of both are thought to be the commonest reasons contributing to the pathophysiology of T2DM, a spectrum of disease originally arising from tissue insulin resistance and gradually progressing to a state characterized by complete loss of secretory activity of the beta cells of the pancreas. T2DM is a major contributor to the very large rise in the rate of non-communicable diseases affecting developed as well as developing nations. In this mini review, we endeavor to outline the current management principles, including the spectrum of medications that are currently used for pharmacologic management, for lowering the elevated blood glucose in T2DM.
Keywords: diabetes, clinical management, chronic, insulin, primary care

INTRODUCTION

Diabetes mellitus (DM) is a complex chronic illness associated with a state of high blood glucose level, or hyperglycemia, arising from deficiencies in insulin secretion, action, or both. The chronic metabolic imbalance associated with this disease puts patients at high risk for long-term macro- and microvascular complications, which, without high-quality care, lead to frequent hospitalizations and complications, including an elevated risk for cardiovascular diseases (CVDs) (1). The clinical diagnosis of diabetes relies on any one of four plasma glucose (PG) criteria: elevated (i) fasting plasma glucose (FPG) (≥126 mg/dL), (ii) 2-h PG during a 75-g oral glucose tolerance test (OGTT) (≥200 mg/dL), (iii) random PG (≥200 mg/dL) with classic signs and symptoms of hyperglycemia, or (iv) hemoglobin A1C level ≥6.5%. Recent American Diabetes Association (ADA) guidelines advocate that no one test be preferred over another for diagnosis. The recommendation is to test all adults beginning at age 45 years, regardless of body weight, and to test asymptomatic adults of any age who are overweight or obese, present with a diagnostic symptom, and have at least one additional risk factor for the development of diabetes.
Furthermore, a condition called prediabetes or impaired fasting glucose (IFG), in which the fasting blood glucose is higher than normal but does not reach the threshold for diabetes (100-125 mg/dL), predisposes patients to diabetes, insulin resistance, and a higher risk of cardiovascular (CV) and neurological pathologies (2,3). Type 2 diabetes mellitus (T2DM) can co-occur with other medical conditions, such as gestational diabetes occurring during the second or third trimester of pregnancy or pancreatic disease associated with cystic fibrosis. T2DM may also be iatrogenically induced, e.g., by use of glucocorticoids in the inpatient setting or by use of highly active antiretroviral agents such as protease inhibitors and nucleoside reverse transcriptase inhibitors in HIV-positive individuals (4). Chemical diabetes or impaired glucose tolerance (IGT) may also develop with the use of thiazide diuretics, atypical antipsychotic agents, and statins (5,6).
Type 2 diabetes mellitus is a common and increasingly prevalent disease and is thus a major public health concern worldwide. The International Diabetes Federation estimates that there are approximately 387 million people diagnosed with diabetes across the globe (7). According to the Centers for Disease Control and Prevention, in 2012, 29.1 million adults, or 9.3% of the population, were identified with diabetes in the United States (US). In the same year, 86 million people had prediabetes, 15-30% of whom will develop full-blown diabetes (8). In general, 1.4 million newly diagnosed cases are reported in the US every year. If this trend continues, it is projected that by 2050 one in three Americans will have diabetes. Patients with diabetes have an increased risk of serious health complications including myocardial infarction, stroke, kidney failure, vision loss, and premature death. Diabetes, with its associated complications, remains the seventh leading cause of mortality in the US. The World Health Organization estimates that by 2030, mortality related to diabetes will double if not given deliberate attention (9). In addition, epidemiological studies report that diabetes causes more deaths in Americans every year than breast cancer and acquired immunodeficiency syndrome (AIDS) combined (10). The increasing trend in the incidence and prevalence of diabetes is worrisome and poses a great burden on medical costs and on our current healthcare system.
The ADA has released a range of recommendations, called the Standards of Medical Care in Diabetes, to improve diabetes outcomes. The recommendations include cost-effective screening, diagnostic, and therapeutic strategies to prevent, delay, or effectively manage T2DM and its life-threatening complications (11). Per the recommendations of the ADA and other organizations, modern approaches to diabetes care should involve a multidisciplinary team of health professionals working in tandem with the patient and the family (2). The primary aim of these approaches is to obtain optimal glycemic control through dietary and lifestyle modifications and appropriate medications, along with regular blood glucose monitoring. The burden of diabetes can potentially be reduced if the standards of care are implemented and patients' compliance and participation are achieved.
The traditional view that T2DM occurs only in adults and type 1 diabetes mellitus (T1DM) only in children is not entirely accurate, as both diseases occur in both age groups. Occasionally, patients with T2DM may develop the morbid complication of diabetic ketoacidosis (DKA) (12). Children with T1DM typically present with polyuria and polydipsia, and approximately one-third of them present with DKA, which may also be the first presenting feature (12). The onset of T1DM may be variable in adults, and they may not present with the classic symptoms seen in children; the true diagnosis may become apparent with disease progression. This heterogeneity of presentations should be kept in mind while caring for the patient with T2DM.
The scope of this review encompasses current clinical guidelines on the pharmacological management of T2DM.
CLINICAL DIAGNOSIS OF TYPE 2 DIABETES
Diabetes may be identified in low-risk individuals who undergo glucose testing during routine primary clinical care, in individuals examined for diabetes risk assessment, and in frankly symptomatic patients. Early diagnosis of T2DM can be accomplished through blood tests that measure PG levels. FPG is the most common test to detect diabetes: a level of ≥126 mg/dL or 7.0 mmol/L, confirmed by repeating the test on another clinic visit, effectively diagnoses the disease. This test requires fasting for at least the previous 8 h and generates enhanced reliability when blood is drawn in the morning. Another criterion is a random PG of ≥200 mg/dL or 11.1 mmol/L in a patient presenting with the traditional symptoms of diabetes such as polyuria, polydipsia, and/or unexplained weight loss. A positive 2-h OGTT will show a PG level of ≥200 mg/dL or 11.1 mmol/L after a glucose load containing 75 g of glucose dissolved in water. The 2-h OGTT is not commonly used in the clinic because, although it is more sensitive than the FPG test, it is less convenient and more expensive for patients. Additionally, this test holds less relevance in routine follow-up after a confirmed diagnosis of diabetes is obtained.
In the past, the glycated hemoglobin (HbA1C) test was used mainly to monitor the adequacy of glycemic management; it has strong predictive value for diabetes complications (13). HbA1C is a chronic marker of hyperglycemia and reflects a patient's blood glucose level over a period of 3-4 months, coinciding with the lifespan of the red blood cells (RBCs). However, in 2009, after its standardization, the International Expert Committee recommended it for use in diagnosing T2DM but not T1DM or gestational diabetes (2). HbA1C level is reported in percentages, and a normal level is below 5.7%. The main advantage of the HbA1C test over other blood glucose tests is the convenience it offers to patients; it does not require fasting and can be done at any time of the day. However, this test is more expensive and may not be readily available in certain locations, which may limit its usefulness (14,15). HbA1C may be inaccurate in conditions such as anemia, hemolysis, and other hemoglobinopathies like sickle cell disease and hemoglobin (Hb) variants like HbC, HbE, and HbD, as well as elevated fetal hemoglobin. Thus, HbA1C assay in people of South Asian, Mediterranean, or African origin merits taking these issues into account (16). In conditions associated with increased RBC breakdown, such as in the advanced trimesters of pregnancy, recent hemorrhage, intravascular hemolysis, transfusion, or erythropoietin treatment, only blood glucose estimation should be used to diagnose diabetes. There are limited data supporting the use of A1C in diagnosing T2DM in children and adolescents. Although A1C is not routinely suggested for diagnosis of diabetes in children with cystic fibrosis or with symptoms that portend acute onset of T1DM, the ADA recommends HbA1C for diagnosis of T2DM in children and adolescents.
In order to accurately diagnose diabetes, and in the absence of frank hyperglycemia (PG > 200 mg/dL) or hyperglycemic crisis, it is useful to repeat the same diagnostic test for confirmation. When two different tests give conflicting results, the test that is positive should be repeated, and a diagnosis of diabetes is made after a confirmatory test has been done (2). For individuals whose test result(s) returned negative for diabetes, repeat testing at 3-year intervals is suggested (17).
The ADA and the American Association of Clinical Endocrinologists recommend screening for prediabetes beginning at age 45 years, or earlier for asymptomatic individuals with strong risk factors such as obesity (BMI ≥ 25 kg/m2), hypertension, and family history (first-degree relative with diabetes) (18). An IFG level of 100-125 mg/dL (5.6-6.9 mmol/L), IGT with a 2-h OGTT PG level between 140 and 199 mg/dL (7.8-11.0 mmol/L), or an HbA1C of 5.7-6.4% indicates prediabetes. Patients with an HbA1C level of >6% are considered at high risk of developing diabetes, and early detection is necessary to prevent adverse outcomes. Patients diagnosed with prediabetes can be retested after a year; however, without proper intervention, up to 70% of individuals diagnosed with prediabetes are likely to progress to diabetes in 10 years or even less, depending on their risk factors (18). It is also important to note that prediabetes may be associated with obesity, dyslipidemia, and hypertension; therefore, lifestyle changes such as a healthy diet, physical activity, and cessation of smoking, in addition to the introduction of pharmacological agents, are important to stop or delay the development of diabetes.
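As a compact summary of the numeric cut-points above, the Python sketch below (our illustration, not a clinical tool; the thresholds are those quoted in this review) classifies glycemic status from whichever measurements are available:

    def glycemic_status(fpg=None, ogtt_2h=None, a1c=None):
        # fpg: fasting plasma glucose, mg/dL
        # ogtt_2h: 2-h plasma glucose on a 75-g OGTT, mg/dL
        # a1c: HbA1C, percent
        # Illustrative only; real diagnosis requires confirmatory testing.
        diabetes = ((fpg is not None and fpg >= 126) or
                    (ogtt_2h is not None and ogtt_2h >= 200) or
                    (a1c is not None and a1c >= 6.5))
        if diabetes:
            return "diabetes (confirm with a repeat test)"
        prediabetes = ((fpg is not None and 100 <= fpg <= 125) or
                       (ogtt_2h is not None and 140 <= ogtt_2h <= 199) or
                       (a1c is not None and 5.7 <= a1c <= 6.4))
        return "prediabetes" if prediabetes else "normal"

    # Example: glycemic_status(fpg=118, a1c=6.0) returns "prediabetes".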
CLINICAL MANAGEMENT OF TYPE 2 DIABETES
Comprehensive care for a patient with diabetes requires an initial evaluation of the patient's risk factors, the presence or absence of diabetes complications, and initial review of previous treatment/s (2). This will enable the healthcare providers to optimally manage patients with either prediabetes or diabetes. The cornerstones of diabetes management include lifestyle intervention along with pharmacological therapy and routine blood glucose monitoring.
Lifestyle Measures
Clinical trials have shown that lifestyle modifications are cost-effective in preventing or delaying the onset of diabetes, with approximately 58% reduction in risk over 3 years (19). It is highly recommended by the ADA that patients with IGT, IFG, or an HbA1C level of 5.7-6.4% be counseled on lifestyle changes such as diet and exercise. On the other hand, for patients who are already diagnosed with diabetes, nutrition advice provided by a registered dietitian is recommended. A goal of moderate weight loss (≈7% of body weight) is an important component in the prevention and treatment of diabetes, as it can improve blood glucose levels and can also positively impact blood pressure and cholesterol levels (19). Weight loss can be achieved through a healthy balanced diet, with control of total calories and free carbohydrates. However, patients with diabetes adhering to a low-carbohydrate diet should be informed about possible side effects such as hypoglycemia, headache, and constipation (20). Other studies have suggested consumption of complex dietary fiber and whole grains to improve glycemic control (2,21).
Studies show that exercise can improve glycemic control (lower HbA1C level by 0.66%), with or without significant decrease in body weight, and improve the total well-being of patients (22). It is considered an integral part in the prevention and management of both prediabetes and diabetes. According to the U.S. Department of Health and Human Services, adults ≥18 years of age should do a minimum of 150 min/week of moderate intensity exercise (e.g., walking at a 15-to 20-min mile pace) or 75 min/week of vigorous physical activity (e.g., running, aerobics) spread over at least 3 days/week with no more than two consecutive days without exercise to achieve maximum benefits (2,18). For patients ≤18 years old, 60 min of physical activity every day is adequate.
Other lifestyle measures that need to be considered in the treatment plan for patients with diabetes are moderate alcohol consumption (≤1 drink/day for women, ≤2 drinks/day for men), reduction in sodium intake (especially in patients with comorbidities such as hypertension), cessation of habitual tobacco use, and updating of lacking immunizations (influenza, diphtheria, pertussis, tetanus, pneumococcal, and hepatitis B). Consumption of alcohol, especially in a fasted state, can precipitate life-threatening hypoglycemia and coma, and this should be explicitly counseled to patients during their visits (23). Moreover, patient education, counseling, and psychosocial support are very important to successfully combat the deleterious effects of diabetes.
Pharmacologic Management
An "ominous octet" that leads to hyperglycemia, which occur in isolation or in combination, has been proposed for eight pathophysiological mechanisms underlying T2DM (24). These include (i) reduced insulin secretion from pancreatic β-cells, (ii) elevated glucagon secretion from pancreatic α cells, (iii) increased production of glucose in liver, (iv) neurotransmitter dysfunction and insulin resistance in the brain, (v) enhanced lipolysis, (vi) increased renal glucose reabsorption, (vii) reduced incretin effect in the small intestine, and (viii) impaired or diminished glucose uptake in peripheral tissues such as skeletal muscle, liver, and adipose tissue. Currently available glucose-lowering therapies target one or more of these key pathways.
Good glycemic control remains the main foundation of managing T2DM. Such approaches play a vital role in preventing or delaying the onset and progression of diabetic complications. It is important that a patient-centered approach be used to guide the choice of pharmacological agents. The factors to be considered include efficacy, cost, potential side effects, weight gain, comorbidities, hypoglycemia risk, and patient preferences. Pharmacological treatment of T2DM should be initiated when glycemic control is not achieved or if HbA1C rises to 6.5% after 2-3 months of lifestyle intervention. Not delaying treatment and motivating patients to initiate pharmacotherapy early can considerably reduce the risk of irreversible microvascular complications such as retinopathy and glomerular damage (25). Monotherapy with an oral medication should be started concomitantly with intensive lifestyle management.
The major classes of oral antidiabetic medications include biguanides, sulfonylureas, meglitinide, thiazolidinedione (TZD), dipeptidyl peptidase 4 (DPP-4) inhibitors, sodium-glucose cotransporter (SGLT2) inhibitors, and α-glucosidase inhibitors. If the HbA1C level rises to 7.5% while on medication or if the initial HbA1C is ≥9%, combination therapy with two oral agents, or with insulin, may be considered (2,26). Though these medications may be used in all patients irrespective of their body weight, some medications like liraglutide may have distinct advantages in obese patients in comparison to lean diabetics (see below). A schematic of currently approved medications for T2DM is summarized in Table 1. A flowchart for guiding clinical decision making is presented in Figure 1.
Biguanide
The discovery of biguanide and its derivatives for the management of diabetes started in the middle ages. Galega officinalis, a herbaceous plant, was found to contain guanidine, galegine, and biguanide, which decreased blood glucose levels (31). Metformin is a biguanide that is the main first-line oral drug of choice in the management of T2DM across all age groups. Metformin activates adenosine monophosphate-activated protein kinase in the liver, causing hepatic uptake of glucose and inhibiting gluconeogenesis through complex effects on the mitochondrial enzymes (31). Metformin is highly tolerated and has only mild side effects, low risk of hypoglycemia and low chances of weight gain. Metformin is shown to delay the progression of T2DM, reduce the risk of complications, and reduce mortality rates in patients by decreasing hepatic glucose synthesis (gluconeogenesis) and sensitizing peripheral tissues to insulin (31). Furthermore, it improves insulin sensitivity by activating insulin receptor expression and enhancing tyrosine kinase activity. Recent evidence also suggest that metformin lowers plasma lipid levels through a peroxisome proliferator-activated receptor (PPAR)-α pathway, which prevents CVDs (31). Reduction of food intake possibly occurs by glucagon-like peptide-1 (GLP-1)-mediated incretin-like actions. Metformin may thus induce modest weight loss in overweight and obese individuals at risk for diabetes.
Once ingested, metformin (with a half-life of approximately 5 h) is absorbed by organic cation transporters, remains unmetabolized in the body, and is widely distributed into different tissues such as the intestine, liver, and kidney. The primary route of elimination is via the kidney. Metformin is contraindicated in patients with advanced stages of renal insufficiency, indicated by a glomerular filtration rate (GFR) <30 mL/min/1.73 m2 (32). If metformin is used when GFR is significantly diminished, the dose should be reduced, and patients should be advised to discontinue the medication if nausea, vomiting, or dehydration arises from any other cause (to prevent lactic acidosis). It is important to assess renal function prior to starting this medication.
Metformin has an excellent safety profile, though it may cause gastrointestinal disturbances including diarrhea, nausea, and dyspepsia in almost 30% of subjects after initiation. Introduction of metformin at low doses often improves tolerance. Extended-release preparations seldom cause any gastrointestinal issues. Very rarely, metformin may cause lactic acidosis, mainly in subjects with severe renal insufficiency. Another potential problem arising from the use of metformin is the reduction in the drug's efficacy as diabetes progresses. Metformin is highly efficient when there is enough insulin production; however, when diabetes reaches the state of β-cell failure, resulting in a type 1 phenotype, metformin loses its efficacy.
Metformin can cause vitamin B12 and folic acid deficiency (33). This needs to be monitored, especially in elderly patients. In the rare patients with metformin intolerance or contraindications, an initial drug from another oral class may be used. Although trials have compared dual therapy with metformin alone, few directly compare drugs as add-on therapy. A comparative effectiveness meta-analysis suggests that, overall, each new class of non-insulin medications introduced in addition to the initial therapy lowers A1C by around 0.9-1.1%. An ongoing trial, Glycemia Reduction Approaches in Diabetes: A Comparative Effectiveness Study (GRADE), is comparing the effects of four major drug classes (sulfonylurea, DPP-4 inhibitor, GLP-1 analog, and basal insulin) over 4 years on glycemic control and other psychosocial, medical, and health economic outcomes (34). Though the introduction of oral agents such as metformin for gestational diabetes would be a welcome development, current FDA regulations do not support it.
Incretin Mimetics
The incretin effect is the difference in insulin secretory response between an oral glucose load and glucose administered intravenously. The incretin effect is responsible for 50-70% of total insulin secretion after oral glucose intake (35). Two naturally occurring incretin hormones play important roles in the maintenance of glycemic control: glucose-dependent insulinotropic polypeptide (GIP) and glucagon-like peptide-1 (GLP-1). These peptides have a short half-life, as they are rapidly hydrolyzed by the enzyme DPP-4 within about 1.5 min. In patients with T2DM, the incretin effect is reduced or absent. In particular, the insulinotropic action of GIP is lost in patients with T2DM. Incretins decrease gastric emptying and cause weight loss. Because of their impact on weight loss, these medications may find increasing use in diabesity.
Targeting the incretin system has become an important therapeutic approach for treating T2DM. The two drug classes are GLP-1 receptor agonists and DPP-4 inhibitors. Clinical data have revealed that these therapies improve glycemic control while reducing body weight (specifically, GLP-1 receptor agonists) and systolic blood pressure in patients with T2DM (36). Furthermore, the risk of hypoglycemia is low (except when used in combination with a sulfonylurea) because of their glucose-dependent mechanism of action.
GLP-1 Receptor Agonists
The GLP-1 receptor agonists currently available are exenatide and liraglutide. These drugs exhibit increased resistance to enzymatic degradation by DPP-4. In young patients with a recent diagnosis of T2DM, central obesity, and an abnormal metabolic profile, one should consider treatment with GLP-1 analogs, which would have a beneficial effect on weight loss and improve the metabolic dysfunction. GLP-1 analogs are contraindicated in renal failure.
Exenatide. Exenatide, an exendin-4 mimetic with 53% sequence homology to native GLP-1, is currently approved for T2DM treatment as a single drug in the US and in combination with metformin ± sulfonylurea. Because of its half-life of 2.4 h, exenatide is advised for twice-daily dosing. Treatment with 10 µg exenatide, as an add-on to metformin, resulted in significant weight loss (−2.8 kg) in comparison to patients previously treated with metformin alone. Exenatide is generally well tolerated, with mild-tomoderate gastrointestinal effects being the most common adverse effect.
Liraglutide. Liraglutide is a GLP-1 analog that shares 97% sequence identity to native GLP-1. Liraglutide has a long duration of action (24 h). Liraglutide causes 1.5% decrease in A1C in individuals with type 2 diabetes, when used as monotherapy or in combination with one or more selected oral antidiabetic drugs. Liraglutide decreases body weight; the greatest weight loss resulted from treatment with liraglutide in combination with combined metformin/sulfonylurea (−3.24 kg with 1.8 mg liraglutide). Liraglutide also diminishes systolic pressure (mean decrease −2.1 to −6.7 mmHg) (37). Liraglutide is well tolerated, with only nausea and minor hypoglycemia (risk increased with use of sulfonylureas). Serum antibody formation was very low in patients treated with once-weekly GLP-1 receptor agonists. The formation of these antibodies did not decrease efficacy of their actions on blood glucose lowering.
DPP-4 Inhibitors
Dipeptidyl peptidase 4 inhibitors include sitagliptin, saxagliptin, vildagliptin, linagliptin, and alogliptin. These medications may be used as monotherapy or in combination with metformin, a sulfonylurea, or a TZD. Their glucose-lowering efficacy is broadly similar to that of the other oral antidiabetic drugs. The gliptins have not been reported to cause a higher incidence of hypoglycemic events compared with controls.
Dipeptidyl peptidase 4 inhibitors impact postprandial lipid levels. Treatment with vildagliptin for 4 weeks decreases postprandial plasma triglycerides and the metabolism of apolipoprotein B-48-containing triglyceride-rich lipoprotein particles after a fat-rich meal in T2DM patients not previously exposed to these medications. In diabetic patients with coronary heart disease, treatment with sitagliptin was demonstrated to improve cardiac function and coronary artery perfusion.
The three most commonly reported adverse reactions in clinical trials with gliptins were nasopharyngitis, upper respiratory tract infection, and headache. Acute pancreatitis was reported in a fraction of subjects taking sitagliptin alone or sitagliptin with metformin. An increased incidence of hypoglycemia was observed in the sulfonylurea treatment group.
In the elderly, DPP-4 inhibitors lower blood glucose but have minimal effect on caloric intake and therefore less catabolic effect on muscle and total body protein mass. At reduced doses, DPP-4 inhibitors are considered safe in patients with moderate to severe renal failure.
SGLT2 Inhibitors

Sodium-glucose cotransporter 2 (SGLT2) inhibitors lower blood glucose by blocking glucose reabsorption in the proximal renal tubule and promoting urinary glucose excretion. Because of this insulin-independent mechanism of action, these drugs may be effective in advanced stages of T2DM when pancreatic β-cell reserves are permanently lost. These drugs provide modest weight loss and blood pressure reduction.
Urinary tract infections leading to urosepsis and pyelonephritis, as well as genital mycosis, may occur with SGLT2 inhibitors. SGLT2 inhibitors may rarely cause ketoacidosis. Patients should stop taking their SGLT2 inhibitor and seek medical attention immediately if they have symptoms of ketoacidosis (frank nausea or vomiting, or even non-specific features like tiredness or abdominal discomfort).
Insulin
If non-insulin monotherapy such as metformin at the maximum tolerated dose does not achieve or maintain the A1C target over 3 months, then a second oral agent, a GLP-1 receptor agonist, or basal insulin may be added to the regimen. Insulin therapy (with or without additional agents) should be introduced in patients with newly identified T2DM who are frankly symptomatic (catabolic features such as weight loss, ketosis, or features of hyperglycemia including polyuria/polydipsia) and/or have severely elevated blood glucose levels [≥300-350 mg/dL (16.7-19.4 mmol/L)].
The clinical picture of T2DM and its therapies should be regularly and objectively explained to patients. Many subjects with T2DM will require insulin therapy at some time during the course of the disease. For patients with T2DM who are not meeting target glycemic goals, insulin therapy should not be postponed. Providers should present insulin therapy in a completely nonjudgmental, empathetic, and non-punitive manner to ensure good adherence. Self-monitoring of blood glucose (SMBG) (discussed below) contributes to significant improvement of glycemic control in patients with T2DM initiating insulin. Close and frequent monitoring of the patient is needed for dose titration to achieve target glycemic goals, as well as to prevent hypoglycemia.
Basal insulin is the initial insulin regimen, beginning at 10 U or 0.1-0.2 U/kg, depending on the severity of hyperglycemia (titrating by 2-3 U every 4-7 days until the glycemic goal is reached). A basal insulin requirement greater than 0.5 U/kg indicates the need for an additional agent. Basal insulin is usually added to oral metformin and possibly one additional non-insulin agent such as a DPP-4 or SGLT2 inhibitor. NPH (neutral protamine Hagedorn) insulin carries a low risk of hypoglycemia in individuals without any significant past history, and is low cost. Newer, longer-acting basal insulin analogs have superior pharmacodynamic profiles, delayed onset and longer duration of action, and a low risk of hypoglycemia, albeit at higher cost. Concentrated basal insulin preparations such as U-500 regular are five times more potent per volume than U-100 regular (i.e., 0.01 mL of U-500 contains 5 U, the equivalent of 5 U of U-100 regular). U-300 glargine and U-200 degludec are other potent, ultra-long-acting preparations.
If basal insulin contributes to acceptable fasting blood glucose, but A1C persistently remains above target, mealtime insulin may be added. Rapid-acting insulin analog (lispro, aspart, or glulisine) may be used and administered just before meals. The glucose levels should be monitored before meals and after the injections. Another approach to control the periprandial glucose excursions may be to add twice-daily premixed (or biphasic) insulin analogs (70/30 aspart mix, 75/25 or 50/50 lispro mix). The total present insulin dose may be computed and then one-half of this amount may be administered as basal and the other half during mealtime, the latter split equally between three meals. Regular human insulin and human NPH-Regular premixed formulations (70/30) are less expensive alternatives to rapid-acting insulin analogs and premixed insulin analogs, respectively, but their unpredictable pharmacodynamic profiles make them inadequate to cover postprandial glucose changes.
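The basal/bolus split described above is simple arithmetic; the Python sketch below (our illustration only, not dosing advice) computes it for a given total daily dose:

    def basal_bolus_split(total_daily_units):
        # Split per the scheme in the text: one-half as basal insulin,
        # one-half as mealtime boluses divided equally among three
        # meals. Illustrative arithmetic, not a clinical tool.
        basal = total_daily_units / 2
        per_meal = (total_daily_units / 2) / 3
        return {"basal": basal, "per_meal_bolus": per_meal}

    # Example: basal_bolus_split(60) returns
    # {"basal": 30.0, "per_meal_bolus": 10.0}.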
Sometimes, bolus insulin needs to be administered in addition to basal insulin. Rapid-acting analogs are used as bolus formulations due to their prompt onset of action. An insulin pump (continuous subcutaneous insulin infusion) may be used instead to avoid multiple injections. Often, patients and physicians are reluctant to intensify therapy because of the fear of hypoglycemia, regimen complexity, and the burden of multiple daily injections. There is a need for a flexible, alternative intensification option taking individual patient considerations into account to achieve or maintain individual glycemic targets. An ideal insulin regimen should mimic physiological insulin release while providing optimal glycemic control with a low risk of hypoglycemia, low weight gain, and fewer daily injections.
Inhaled insulin (Technosphere insulin-inhalation system, Afrezza) is now available for prandial use. However, the dosing range is limited. Use of inhaled insulin requires pulmonary function testing prior to and after starting therapy. It is contraindicated in subjects with asthma or other lung diseases.
During insulin therapy, sulfonylureas, DPP-4 inhibitors, and GLP-1 receptor agonists are stopped once more complex insulin regimens beyond basal insulin are used. In patients with inadequate blood glucose control, especially if requiring escalating insulin doses, TZDs (usually pioglitazone) or SGLT2 inhibitors may be added as adjunctive therapy to insulin.
Insulin injections can cause weight gain or loss. Insulin drives potassium into the cell and can cause hypokalemia. Components of the insulin preparation have the potential to cause allergy. Insulin injections, along with the use of other drugs like TZDs, can precipitate cardiac failure.
Stressful events such as illness, surgery, and trauma can impede glycemic control and may lead to the development of DKA or a nonketotic hyperosmolar state, life-threatening conditions that merit immediate medical attention. Any condition that deteriorates glycemic control necessitates more frequent monitoring of blood glucose in an inpatient setting; ketosis-prone patients also require urine or blood ketone monitoring. If accompanied by ketosis, vomiting, or altered mental status, marked hyperglycemia requires hospital admission. A patient treated with non-insulin therapies or medical nutrition therapy alone may require insulin. Patients must be aggressively hydrated, and infections should be controlled.
Without adequate treatment, prolonged hyperglycemia can cause glucose toxicity that can progressively impair insulin secretion. Initiation of insulin therapy is critical to reverse the toxic effect of high blood glucose levels on the pancreas. Once persistent glycemic control is achieved, insulin can be tapered off and replaced with oral medications. At some point in the management of T2DM, β-cell reserves are exhausted, with phenotypic reversal to a T1DM kind of pathophysiological situation. Meticulous follow-up may identify such states and then the need for continued reliance on insulin therapy may be carefully explained to the patients.
Weight gain can raise a barrier to the use of insulin in T2DM. In the United Kingdom Prospective Diabetes Study (UKPDS) study, patients gained 6 kg with insulin therapy, when compared with 1.7-2.6 kg weight gain with sulfonylureas (39). More recently, the combination of GLP-1 receptor agonists and insulin has been useful in tackling the weight gain associated with insulin and circumventing the need for high doses in the presence of significant insulin resistance. Lipoatrophy with insulin injections is not seen now; however, lipohypertrophy due to failure to change the subcutaneous injection sites is still a common cause of poor insulin absorption and suboptimal glycemic control.
In the Action to Control Cardiovascular Risk in Diabetes trial, aggressive treatment of T2DM patients with higher CV risk was associated with higher all-cause and CV mortality. Post hoc analyses could not find correlation with faster rates of reduction of glucose, hypoglycemia, or specific drugs as the causes underlying this finding. Exposure to injected insulin was hypothesized to increase CV mortality. However, after adjustment for baseline covariates, no significant association of insulin dose with CV death remained (40). Older patients with cognitive dysfunction may not benefit from intensive therapy. Furthermore, hypoglycemia in the elderly may cause cardiac ischemia, arrhythmia, myocardial infarction, and sudden death (41).
Sulfonylureas
Sulfonylureas lower blood glucose level by increasing insulin secretion in the pancreas by blocking the KATP channels. They also limit gluconeogenesis in the liver. Sulfonylureas decrease breakdown of lipids to fatty acids and reduce clearance of insulin in the liver (42). Sulfonylureas are currently prescribed as secondline or add-on treatment options for management of T2DM. They are divided into two groups: first-generation agents, which includes chlorpropamide, tolazamide, and tolbutamide, and second-generation agents, which includes glipizide, glimepiride, and glyburide. The first-generation sulfonylureas are known to have longer half-lives, higher risk of hypoglycemia, and slower onset of action, as compared to second-generation sulfonylureas. Currently, in clinical practice, second-generation sulfonylureas are prescribed and more preferred over first-generation agents because they are proven to be more potent (given to patients at lower doses with less frequency), with the safest profile being that of glimepiride.
Hypoglycemia is the major side effect of all sulfonylureas, while minor side effects such as headache, dizziness, nausea, hypersensitivity reactions, and weight gain are also common. Sulfonylureas are contraindicated in patients with hepatic and renal diseases and are also contraindicated in pregnant patients due to the possible prolonged hypoglycemic effect to infants. Drugs that can prolong the effect of sulfonylureas such as aspirin, allopurinol, sulfonamides, and fibrates must be used with caution to avoid hypoglycemia. Moreover, other oral antidiabetic medications or insulin can be used in combination with sulfonylurea and can substantially increase the risk of hypoglycemia.
Patients on beta-adrenergic antagonists for the management of hypertension can have hypoglycemia unawareness. Sulfonylureas should be used with caution in subjects receiving beta blockers.
Meglitinide
Meglitinides (repaglinide and nateglinide) are non-sulfonylurea secretagogues, first approved for the treatment of T2DM in 1997. Meglitinides share the same mechanism as sulfonylureas; they also bind to the sulfonylurea receptor on pancreatic β-cells. However, their binding to the receptor is weaker than that of sulfonylureas, and they are thus considered short-acting insulin secretagogues, which gives flexibility in their administration. In addition, a higher blood glucose level is needed before they stimulate β-cell insulin secretion, making them less potent than sulfonylureas. Rapid-acting secretagogues (meglitinides) may be used in lieu of sulfonylureas in patients with irregular meal schedules or in those who develop late postprandial hypoglycemia while using a sulfonylurea.
Thiazolidinedione
Like biguanides, TZDs improve insulin action. Rosiglitazone and pioglitazone are the representative agents. TZDs are agonists of PPAR-γ and facilitate increased glucose uptake in numerous tissues including adipose tissue, muscle, and liver. Their mechanisms of action include diminution of free fatty acid accumulation, reduction of inflammatory cytokines, a rise in adiponectin levels, and preservation of β-cell integrity and function, all leading to improvement of insulin resistance and β-cell exhaustion. However, there are serious concerns that the risks may outweigh the benefits; notably, combined insulin-TZD therapy can precipitate heart failure. Thus, TZDs are not preferred as first-line or even step-up therapy.
Other Glucose-Lowering Pharmacologic Agents
Pramlintide, an amylin analog, is an agent that delays gastric emptying, blunts pancreatic secretion of glucagon, and enhances satiety. It is a Food and Drug Administration (FDA)-approved therapy for use in adults with T1DM. Pramlintide induces weight loss and lowers insulin dose. Concurrent reduction of prandial insulin dosing is required to reduce the risk of severe hypoglycemia. Other medications that may lower blood glucose include bromocriptine, alpha-glucosidase inhibitors such as voglibose and acarbose, and bile acid sequestrants such as colesevelam. It may be noted that metformin sequesters bile acids in the intestinal lumen and thus has a lipid-lowering effect; the same mechanism may also contribute to gas production and gastrointestinal disturbances.
Pharmacologic Management of Diabetes Complications
Important components of the Standards of Medical Care in Diabetes involve managing the complications of diabetes and comorbidities, including hypertension, atherosclerotic cardiovascular disease (ASCVD), dyslipidemia, hypercoagulopathy, endothelial cell dysfunction, nephropathy, and retinopathy. CVD is the most important cause of morbidity and mortality in patients with diabetes. The currently recommended goal blood pressure is ≤140/80 for patients with diabetes and hypertension. Angiotensin-converting enzyme inhibitors or angiotensin receptor blockers are the preferred antihypertensive medications (2). Optimal blood pressure and blood glucose control can effectively delay the progression of nephropathy and retinopathy in these patients. Patients with existing CVD should be continuously managed with aspirin, which may also be considered for primary prevention in subjects more than 50 years old with additional risk factors. Patients with diabetes are also recommended to undergo annual lipid profile measurement, and those diagnosed with hyperlipidemia should be treated with statins, with a low-density lipoprotein goal of <70 mg/dL (2). Moreover, it should be noted that an important aspect of the success of pharmacotherapy is the patient's adherence and compliance to medications; therefore, close and regular patient follow-up, monitoring, and education are necessary.
Glucose Monitoring
Self-monitoring of blood glucose (SMBG) and HbA1C are integral components of the standards of care in diabetes. They are designed to assess the effectiveness of a treatment plan and provide guidance in selecting appropriate medications and dosage(s) (2). SMBG allows patients to assess their own response to medication, minimize the risk of hypoglycemia, and determine whether they are achieving glycemic control. Optimal glycemic control is achieved when FPG is 70-130 mg/dL, 2-h postprandial glucose is <180 mg/dL, and bedtime glucose is 90-150 mg/dL. However, testing six to eight times daily may burden patients and may result in noncompliance. Therefore, it is recommended to ensure that patients are properly instructed and are given regular evaluation and follow-up.
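As a simple illustration of these targets, the sketch below flags whether individual SMBG readings fall within the quoted ranges. This is a minimal sketch: the function name and reading-type labels are our own invention, and only the thresholds come from the text above.

```python
# Encode the SMBG targets quoted above: FPG 70-130 mg/dL,
# 2-h postprandial <180 mg/dL, bedtime 90-150 mg/dL.
def in_target(reading_type: str, value_mg_dl: float) -> bool:
    """Return True if a single SMBG reading meets the quoted target."""
    if reading_type == "fasting":
        return 70 <= value_mg_dl <= 130
    if reading_type == "postprandial_2h":
        return value_mg_dl < 180
    if reading_type == "bedtime":
        return 90 <= value_mg_dl <= 150
    raise ValueError(f"unknown reading type: {reading_type}")

# Example: flag out-of-range readings from an invented day's log
log = [("fasting", 112), ("postprandial_2h", 195), ("bedtime", 142)]
for kind, value in log:
    print(kind, value, "in target" if in_target(kind, value) else "OUT OF TARGET")
```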
Self-monitoring of blood glucose is essential in patients with diabetes who are on an intensive insulin regimen (three to four injections of basal and prandial insulin, or an insulin pump). It helps monitor and prevent hyperglycemia as well as the possible side effect of hypoglycemia. Blood glucose is usually checked prior to meals, prior to exercise, prior to driving, and at bedtime. Evidence is insufficient to prescribe SMBG for patients not receiving an intensive insulin regimen (26).
According to the current guideline, HbA1C level should be assessed regularly in all patients with diabetes. The frequency of HbA1C testing is flexible and depends primarily on the response of patients to therapy and the physician's judgment. HbA1C testing is performed at least every 6 months for patients who are meeting treatment goals; for patients who are far from their glycemic goals, HbA1C testing may be performed more frequently.
SUMMARY/CONCLUSION
Type 2 diabetes mellitus is one of the leading causes of renal failure, ASCVD, non-traumatic lower limb amputation, blindness, and death worldwide. It is a serious chronic medical condition that requires a multidisciplinary team approach, consisting of healthcare professionals, dietitians, patient educators, patients, and their families. Lifestyle intervention designed to manage body weight and treat obesity, as well as patient education, is essential for all patients with diabetes. Treatment options should be individualized and medication(s) chosen based on a patient's risk factors, current HbA1C level, medication efficacy, ease of use, the patient's financial situation/insurance/costs, and the risk of side effects such as hypoglycemia and weight gain. The effectiveness of therapy must be evaluated as frequently as possible using diagnostic blood tests (HbA1C), along with monitoring for the development of diabetic complications (e.g., retinopathy, nephropathy, neuropathy). Furthermore, aggressive effort from physicians and motivating patients towards compliance are two important aspects of the prevention and management of diabetes. Sociocultural issues should be carefully considered. For example, during religious fasting (e.g., during the holy month of Ramadan), pharmacologic agents that induce hypoglycemia should be used with care, insulin doses (for example, premix formulations) should be appropriately titrated, and the patient should be educated on blood glucose monitoring and on breaking the fast as needed (43).
By the year 2030, more than 70% of people with T2DM will reside in developing countries (44). Primary prevention of T2DM should therefore be an urgent public health priority. The disease predominantly affects working-age people and so has a counterproductive economic impact, compounded by the frequent occurrence of, and interaction between, T2DM and infectious diseases (such as AIDS and tuberculosis) (45). Evidence from landmark T2DM prevention trials indicates that lifestyle modification is more effective, cheaper, and safer than medication and provides sustained benefits. Lifestyle modification may therefore be a promising approach to T2DM prevention in developing countries. This will be useful for many ethnic groups in the U.S. as well, such as South Asian, Latino, Pima Indian, and African-American populations, which may face socioeconomic challenges similar to those seen in developing countries. Cost-contained strategies to identify at-risk individuals, followed by the implementation of group-based, inexpensive lifestyle interventions (the "comfortably uncomfortable" life, as lived by people in blue zones), seem to be the best options for resource-constrained settings. T2DM pathophysiology is increasingly understood as a mix of insulin resistance and secretory defects of β-cells (46).
Several options for pharmacologic glucose-lowering therapy are currently available and have revolutionized the long-term management of DM (47). Several antidiabetic drugs may have important CV complications, of which the provider team should always be aware (48). Polypharmacy issues, and the management of diabetes alongside hypertension, hyperlipidemia, and the use of aspirin, should be carefully explained to patients to ensure adherence to therapy and to prevent significant CV morbidity and mortality. Careful attention should be paid to the development of insulinopenic states, identified through clinical assessment of C-peptide and failure to control HbA1C despite multiple medications; states of completely absent insulin secretion should be treated by initiating an appropriate insulin regimen. Every clinical encounter should also be used to explain the benefits of weight loss and to motivate patients towards it. Even though not yet conclusive, clinical trials and data support consideration of bariatric surgery as a possible strategy to control blood glucose levels and body weight, especially in morbid obesity (49). Balanced hypocaloric diets that cause weight loss should be adopted, and regular interaction with a dietitian is a useful approach. Aerobic and resistance training can help increase lean mass in middle-aged and overweight/obese individuals. Behavioral strategies for weight loss should be encouraged in primary care settings, and appropriate maintenance of body weight prior to conception may help prevent the development of gestational diabetes. Weight loss may be particularly challenging for incapacitated patients and subjects with disabilities, so comprehensive approaches should be undertaken. Newer molecular studies have demonstrated a transcriptional link between inflammatory pathways and increased adipose tissue storage, contributing to insulin resistance (50). Amlexanox, an anti-inflammatory agent used for aphthous stomatitis, is currently being repurposed in trials as a newer agent for the management of diabetes (51).
AUTHOR CONTRIBUTIONS
AC conceptualized and led the project and drafted the manuscript. CD checked the accuracy of clinical contents and provided numerous clinical pearls. VSRD checked the accuracy of clinical contents and contributed numerous clinical discussions. SK contributed numerous clinical pearls and revisions. AC contributed important clinical discussions and revisions. RR contributed important clinical discussions and revisions. AM made important clinical contributions, especially on management with coexistent chronic diseases. NSS contributed clinical concepts and numerous clinical discussions. MTM prepared the initial outline of some aspects of the manuscript. KK contributed numerous clinical discussions. AS contributed clinical discussions. AB checked grammar and formatted the initial table. NP contributed to initial discussions. CKM checked the accuracy of clinical contents. GPL contributed important clinical contents and numerous instances of clinical guidance. WM contributed overall senior mentorship, guidance and support to the project.
"year": 2017,
"sha1": "03b3232f1108b050ce730c9b6ddfd131e7bec860",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2017.00006/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03b3232f1108b050ce730c9b6ddfd131e7bec860",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Understanding the outsourcing decision in South Africa with regard to ICT
Information and Communication Technology (ICT) outsourcing is a strategic initiative adopted by many organisations across all industries. Outsourcing is seen as a means for organisations to concentrate on and improve their core business functions. Despite the vastly dynamic environment businesses operate in, few global studies have uncovered the factors influencing outsourcing decisions, hence the need for this paper. The research was performed using a number of statistical tests and descriptive analysis methods to explore the literature and to determine the current status of South Africa's ICT outsourcing market. Key findings reveal that cost is the most influential factor when deciding whether or not to outsource, irrespective of organisation size or type. Other important factors include concentrating on core functions and the availability of in-house expertise. The form of outsourcing used is not affected by the size of the organisation: the use of Application Service Providers (ASPs) is the most common form, followed by co-sourcing.
Introduction
With Information and Communication Technologies (ICTs) being ubiquitous, companies increasingly utilise them, irrespective of their core business functions. However, such technological resources might not always be available in-house, hence the need to outsource. Outsourcing can be defined as the strategic use of external resources to perform one or more organisational activities (Drezner, 2004). ICT outsourcing has evolved from a cost-saving initiative towards a more strategic objective (Ketler & Willems, 1999), due to constantly changing business environments and market pressures. In doing so, organisations aim at leveraging their competitiveness (Thoms, 2004). This shows that organisations are clearly aware of the benefits produced through outsourcing. Nevertheless, they are still faced with the decisions of whether to outsource, what to outsource and how to outsource (Costa, 2001).

In essence, the outsourcing phenomenon has been widely researched over the past decade, but the outsourcing methods, as well as the factors influencing the outsourcing decisions, continue to evolve. In addition, South African businesses are increasingly becoming integrated into the international community (Maxwell, 2008). The need to investigate how and why South African companies structure their outsourcing plans therefore prevails. The knowledge gathered through such a study might inform other South African organisations about how best to undertake their outsourcing activities with fewer risks.

The aim of this research paper is thus to explore the current factors influencing a South African organisation's decision to outsource. The most commonly outsourced ICT functions will also be revealed. Finally, the study evaluates the impact of organisational size and type on the outsourcing decision.
Background to outsourcing
The knowledge economy is ICT oriented, and organisations are in constant need of high levels of knowledge and skills to remain abreast. Some therefore seek to achieve this high business value and expertise through outsourcing, and are willing to share their goals, strategies and objectives with external vendors (Scardino et al., 2005; Meyers, 2002). But what are the business functions being outsourced?

ICT functions being outsourced

Dibbern et al. (2004) categorise ICT functions into commodities, differentiators or strategic functions. They claim that commonly outsourced ICT functions include project management, systems and network implementation, and business and systems planning or analysis. Infrastructure is, however, one of the most mature outsourced functions within an organisation. It encompasses the outsourcing of networks, systems, data centres and desktop services (Scardino et al., 2005). Furthermore, infrastructure outsourcing, and more specifically network outsourcing, allows organisations to focus on their core business functions as well as on the optimisation of current infrastructures for cost reductions (Scardino et al., 2005).

Business Process Outsourcing in turn covers the outsourcing of entire business functions, from project inception up to implementation or support and maintenance, to third-party vendors (Cantara et al., 2005; Da Rold, Jester & Young, 2005; Dibbern et al., 2004). It allows organisations to focus on their core business functions, streamline and integrate processes, and reduce operational costs (Cantara et al., 2005). Industries associated with the outsourcing of entire business processes include governments, financial and accounting services, health care, and logistics organisations (Dibbern et al., 2004). Outsourcing initiatives are typically influenced by factors arising from internal or external pressures. According to Wonseok, Gallivan and Kim (2006), there is also a correlation between the outsourcing method and function, on the one hand, and the level of switching costs and the strategic importance of the function, on the other. Factors influencing the outsourcing decisions are further explored in the next section.
Factors influencing the decision to outsource
Studies performed at international level reveal that the factors influencing an organisation's decision to outsource have evolved from a cost-saving initiative to a strategic objective (Costa, 2001; Goo, Kishore & Rao, 2000; Meyers, 2002). Such decisions are said to be based on the need for exposure to additional skills and expertise, political factors (promoting self-interest and following of trends), access to best practices, and improving staffing flexibility (Benamati & Rajkumar, 2002; Costa, 2001; Fink & Shoeib, 2003; Meyers, 2002). Many outsourcing decisions are also based on prior outsourcing experiences (Benamati & Rajkumar, 2002). The different factors pertaining to the outsourcing decision, as described in the literature, are discussed below. They relate to cost control, concentration on core business functions, in-house expertise, risk management and legal factors.
Cost control

Due to significant growth, ICT services in higher-cost areas are continuously challenged by competition from low-cost countries, viz. India and Canada (Scardino et al., 2005). The reported cost savings incurred from the outsourcing process are, however, only an indirect benefit; the direct impact lies in the reduction of personnel, cheaper labour, more productive personnel (i.e. concentrating on core business functions), lower total cost of ownership (TCO) and stabilising the increase in workload (Brody, Miller & Rolleri, 2004; Costa, 2001; Dibbern et al., 2004; Hormozi, Hostetler & Middleton, 2003; Thoms, 2004). However, according to the Global Outsourcing Report (Minevich & Richter, 2005), South Africa was already marked as a "global opportunity" for IT offshore investment in 2005. It is therefore interesting to investigate which cost-related factors urge South African companies to outsource while the country itself is considered competitive.
Concentration on core business functions and strategic advantage
Outsourcing, initially seen as an opportunity for organisations to downsize and reduce costs, has now developed into a strategic tool which impacts corporate innovativeness, profitability and investments (Goo et al., 2000; Ketler & Williams, 1999). It can be seen as an opportunity to promote strategic alliances through partnership building, especially since relationship management is a major success factor within the business environment (Sargent, 2006).

Organisations further enhance their competitive advantage by outsourcing both core and non-core functions. These functions might be outsourced due to the lack of necessary resources to develop core functions. The potential gain from outsourcing the relevant core function may also be greater than that obtained from in-house resources (Costa, 2001; Thoms, 2004). Consequently, the decision of what to outsource (core, non-core or both) is dependent on the organisation's environment rather than on the nature of the function (Bloch & Spang, 2003; Thoms, 2004). The impact of South African business environments on outsourcing decisions should thus be studied, as no such research has been undertaken to date.
In-house expertise
Organisations tend to outsource their ICT functions in order to reduce uncertainty and remain competitive (Wonseok et al., 2006). Costa (2001) expands this notion by explaining that attempting to develop in-house skills, or attracting individuals with the necessary skills, is costly and time consuming, hence the need to outsource. Further studies should, however, be undertaken to investigate whether this also holds in a South African business environment.
Risk management
The potential risks faced by organisations while outsourcing have a major impact on the outsourcing decision.

Issues encountered include the potential loss of control, complexity of infrastructure, division of labour, and cultural and language barriers (Benamati & Rajkumar, 2002; Lee, Huynh, Kwok & Pi, 2003; Ketler & Willems, 1999). Ramanujan and Jane (2006) and Parlov (2004) further add that other tangible and intangible issues often go unnoticed by the client. These include breach of privacy, inferior quality and performance, and hidden costs such as cultural issues and contract management issues. These issues pose a high level of risk to the client if not well documented and understood. However, Goo et al. (2000) argue that organisations occasionally outsource their ICT functions in order to transfer to their outsourcing vendor any risks incurred during in-house development, implementation or analysis. These studies, having been performed internationally, provide an external perspective on risk management during outsourcing. A South African perspective would be quite useful to local organisations wishing to outsource.
Legal factors
Organisations wishing to outsource usually mistake the terms and conditions of offshore outsourcing as being the same as those of inshore outsourcing (Huntley, 2006). This misconception usually leads investors to overlook the legal and regulatory compliance requirements of the country being outsourced to. Issues such as breach of privacy, the corporate law and taxation of the relevant country, permits and licences, and regulatory matters are often not well detailed or understood by the client (Ramanujan & Jane, 2006). It is therefore important to understand whether South African companies are aware of such legal implications when deciding to outsource, so that their future decisions are better informed.

Having identified the factors influencing an organisation's outsourcing decisions, the next section evaluates which outsourcing method is most efficient and effective, based on the organisation's environment and strategic needs.
Types of outsourcing
Numerous outsourcing methods have been developed to accommodate the changing business environment. These methods are tailored to the environmental requirements of the outsourcing objectives. As one objective of this study is to determine which form of outsourcing is used the most, and whether the size of the organisation has an effect on which form is used, it is important to understand the different forms currently implemented worldwide.
Offshoring
Offshoring involves the relocation of business processes to lower-cost or strategically advantageous locations outside national borders (Erber & Sayed-Ahmed, 2005; Thoms, 2004). Despite cultural dissimilarities, language barriers and time-zone differences, offshoring taps into a wider and more global labour pool, which is necessary for the delivery of the cost benefits, skills and scalability demanded by clients (Iyengar, Karamouzis, Marriott & Young, 2006).
Inshoring
Inshoring of business functions is similar to offshoring, but both vendor and client remain within the same national borders (Erber & Sayed-Ahmed, 2005; Scardino et al., 2005; Thoms, 2004). Researchers agree that inshoring adds to the advantages available from offshoring because of the cultural similarity of employees working in the same country. In essence, this may result in fewer misunderstandings and fewer cultural barriers (Brown & Karamouzis, 2005; Scardino et al., 2005). Inshoring has not only been instrumental in local job creation and skills development, but has also created a better-quality workforce by combining both business and technical skills (Thoms, 2004).
Cosourcing
Cosourcing is a form of outsourcing whereby a third party works in conjunction with a client while the internal group transitions itself to a new set of skills. Perceived as a popular method of outsourcing within internal audit firms, it is commonly known as a transition strategy (Thoms, 2004). A survey conducted by Serafini, Sumners, Apostolou and Lafleur (2003) showed that cosourcing was used by 44% of respondents, who derived benefits such as specialised knowledge, technical skills, staffing flexibility and best practices. In spite of these benefits, however, vendors risk tying their profit to the performance of the client or organisation acquiring the cosourcing services (Cullen & Willcocks, 2003; Dibbern et al., 2004).
Smartsourcing
Smartsourcing is a strategic form of outsourcing in which organisations use a combination of inshoring and offshoring. In doing so, organisations retain control over their core activities and can commission the relevant activities to third parties (Wright, 2005). Koulopoulos (2004) adds that within a smartsourcing partnership between client and vendor, the vendor oversees operational excellence and cost reductions within the organisation, while the client refocuses its attention on the core innovative activities of the organisation.
Application service providers (ASP)
While not considered an actual outsourcing method, an ASP arrangement is a multiyear or annuity relationship which involves the transfer of the daily management responsibility for custom or packaged applications to an external service provider (Anderson, 2006). Similar to outsourcing principles, an ASP's focus is to provide fast, predictable, cost-effective functionality, thus allowing the client to focus on core business functions, access best practices or processes, achieve cost savings and gain better control of legacy systems (Young et al., 2005). Huntley (2006) found that ASPs recorded a relatively strong satisfaction rate in a surveyed sample of 343 organisations. These findings complement the rapid adoption rate of ASPs.
In the next section, an overview of the relationship between organisational size and type and outsourcing is provided, as reported in the literature.
Relationship between organisational size / type and outsourcing
Organisations across industries are adopting outsourcing as the benefits incurred become more and more attractive (Fulbright & Routh, 2004). However, the effectiveness of outsourcing and the methods used may be dependent upon the organisational type or size (Fulbright & Routh, 2004).
Organisational type
The retail market has steadily built a presence within ICT outsourcing. A study conducted by Waller (2004) revealed that a supply chain initiative outsourced by a retailer delivered 3% improvements, reduced inventory levels and shorter order cycle times. Cusmano, Mancusi and Morrison (2006) add that the internationalisation of production and the emergence of global-reaching innovative activities are amongst the drivers of recent transformations at business and systems level. Furthermore, outsourcing within the manufacturing and production industry is driven by factor price differences across countries and regions, as well as the commitment of internal resources in order to focus on core business functions (Cusmano et al., 2006).
Organisational size
Studies have revealed that outsourcing decisions within small-to-medium enterprises (SMEs) are driven by factors similar to those within large enterprises, viz. cost, personal connections, access to a mass of skilled technical professionals, and project management skills (Coward, 2003). Given these similarities, SMEs nevertheless differ from large corporations with regard to the specific outsourcing method they choose to adopt. With reference to software development outsourcing, Coward (2003) claims that SMEs typically outsource to vendors close to home (within the relevant country) as they require close cooperation between themselves and the vendor. This cooperation can only be achieved via frequent resolution of problems without language or cultural barriers.

A survey covering 1100 SMEs in the US was conducted by TEC International to determine the popularity of offshore outsourcing (Olsen, 2006). Results indicated that, at the time, only 5% intended to outsource ICT jobs overseas. Of these SMEs, 12% claimed that they planned to outsource their manufacturing jobs, while 73% claimed that they had no intention of engaging in offshore outsourcing. Furthermore, it was found that 19% of all organisations had an offshore outsourcing strategy. With the SME market representing the bulk of the market for both the US and South Africa, it can be assumed that offshoring may not be as large as it is portrayed (Olsen, 2006). Despite the low levels of offshoring among SMEs, smaller firms are more likely to outsource within or to a neighbouring country, as they can rely on economies of scale (Carr, 2005).
Having reviewed literature on different aspects of outsourcing, the next section details the research methodology employed during this study.
Research design
This section describes and justifies the research approach, research propositions, data-gathering techniques and sample selection. The data analysis method employed is also discussed, followed by a brief overview of the limitations of the research.
Purpose of research
The purpose of this research was to obtain an understanding of outsourcing in South Africa: which business functions are being outsourced and what influences the outsourcing decisions. Different aspects of outsourcing were taken into account and tested against different sizes and types of organisations for relationships. This was done to determine how influential the different business aspects can be on outsourcing.
Quantitative research approach
For the purpose of this study, a quantitative research approach was adopted. This enabled the researchers to effectively test the various hypotheses and objectives regarding outsourcing decisions in South Africa, as well as to establish patterns of relationships between the various variables of interest.
Research objectives
As highlighted in the introduction, the primary purpose of this empirical research project was to study the decision to outsource. This was further fragmented into objectives which were utilised to provide insight into what the study aimed to accomplish.
Study design
The philosophical orientation was positivistic. The underlying principle of a positivistic philosophy is that external reality can be objectively known and measured (Blumberg, Cooper & Schindler, 2005). Blumberg et al. (2005) add that positivism aims to develop knowledge by investigating reality through theory and observing objective facts. Positivism thus proved to be a good fit for quantitative analysis.
Subjects
A judgement sampling methodology was employed in selecting the sample population (Blumberg et al., 2005). The research study focused on decision-makers regarding ICT outsourcing in South African organisations. Fifty percent of respondents were taken from national lists such as the Johannesburg Stock Exchange and the Chamber of Commerce, and 50% were from private contact lists of UCT academics.

The target population was Information Technology (IT) managers with a good understanding of the business's position with respect to its outsourcing decisions. The questionnaires were distributed via email, fax and post.

To ensure an accurate representation of the sample population, organisations of varying sizes, viz. small (fewer than 50 employees), medium (more than 50 but fewer than 200 employees) and large (more than 200 employees), were approached. Table 1 illustrates the response rate obtained from distributing 1809 questionnaires. A study conducted by Ketler and Willems (1999) showed that researchers should anticipate a response rate as low as 9%. A response rate of 8.7% was obtained, roughly in line with that experienced by Ketler and Willems (1999).
Data collection
This section outlines the research instrument being adopted, its composition and how it was implemented.
Instrument design
Due to the quantitative nature of this research study, the researchers found that the most effective and efficient instrument for this project would be a questionnaire. The questionnaire consisted of close-ended questions in order to simplify the interpretation of the findings and to ensure simplicity and timeliness for participants. The questions were a combination of Likert-scale and checkbox-type questions; all questions were designed to answer the research questions or enable suitable testing of the hypotheses.

The business functions being outsourced were investigated by asking respondents to rate each of them on a range of Not at All [1], Somewhat [2], Average [3], More than Average [4], and Substantially [5]. The business functions investigated included Project Management, Risk Management, Software Development, Business Analysis, Systems Analysis, Business Consulting, Systems Consulting, Hardware Implementation, Systems Implementation, and Network Implementation. The respondents were also required to rate the percentage to which they outsource each of those ICT functions on a scale of 0%, 0-10%, 10-25%, 25-50%, 50-75%, and 75-100%.

The business type was investigated by asking respondents to classify themselves as one of the following organisation types: Retail, ICT, Consulting, Wholesale, Distribution, Manufacturing, Accounting/Finance, or Other (any answer provided by the respondent was acceptable). The different factors derived from the literature which might influence the outsourcing decisions were also measured within the South African context using the questionnaire. They include Cost, Legal Factors, In-House Expertise, Strategic Advantage, the need to concentrate on core functions, and Risk. The respondents were asked to rate the importance of these factors in influencing their outsourcing decision on a scale from Unimportant to Very Important. The form of outsourcing employed was investigated by asking the respondents to choose any of these options: Offshoring, Inshoring, Co-Sourcing, Smartsourcing, and Application Service Providers.

Finally, the organisation size was measured by asking the respondents to classify their number of employees as Small (<50), Medium (50-200), or Large (>200).
Research findings
The results generated during this study are presented next, without inference as to their implications, which are discussed later.
Respondent analysis
For the purpose of this empirical research project, data was gathered from South African organisations. Fifty percent of the respondents were from large organisations, while 35% and 15% were from small and medium-sized enterprises respectively. Some respondents considered that their company did not fall under any of the categories suggested in the questionnaire and thus selected the 'other' option. 'Other' accounts for 16 of the 72 responses, while consulting accounts for 15. The other top two organisational types are accounting/finance (12) and ICT (11). Data from the remaining organisational types (Retail, Wholesale, Distribution, and Manufacturing) were not included in the Kruskal-Wallis data analysis tests, due to their limited number of responses, which would otherwise affect the integrity of the results.

Table 2 displays the replies, separated by organisational size and type, as well as their frequency of return. 86% of the results were obtained from small and large organisations, while medium firms accounted for only 14% of the results. This provides a good general reflection of the respondents. Furthermore, Table 2's results show that the top four types of firms have the greatest influence on the results. It is therefore valid to note that, for the purposes of this empirical research project, the most accurate results would be obtained from small and large ICT, accounting/finance, consulting and 'other' organisations. Consisting of 72 respondents, the research sample was relatively small and therefore cannot be taken as representative of the entire market. It does, however, provide a basic understanding of the current situation in the market and opens the door for further study with larger samples.
Process of analysis
The research questions were answered through inferential analysis. Hypotheses H1 and H2 compared the size of the business to the various outsourcing questions, while H3 and H4 compared the type of the organisation to the various outsourcing questions. To test these hypotheses, the statistical analysis methods chosen were the Spearman correlation and the Kruskal-Wallis test, respectively. H5 compares the size of the organisation to the form of outsourcing used, to determine whether there is a relationship between the two variables; the chi-squared test was used to test H5.
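For concreteness, the sketch below shows how these two test families could be run in Python with scipy. The input file, column names and the ordinal coding of size are hypothetical assumptions; only the mapping from data type to test mirrors the paper.

```python
import pandas as pd
from scipy.stats import spearmanr, kruskal

df = pd.read_csv("responses.csv")  # hypothetical survey export

# H1/H2: organisation size is ordinal (1=small, 2=medium, 3=large),
# so a Spearman rank correlation is appropriate.
rho, p = spearmanr(df["org_size"], df["pct_outsourced_network_impl"])
print(f"Spearman rho={rho:.3f}, p={p:.4f}")  # significant if p < 0.05

# H3/H4: organisation type is unordered with more than two groups,
# so the non-parametric Kruskal-Wallis test is used instead.
groups = [g["pct_outsourced_network_impl"].values
          for _, g in df.groupby("org_type")]
h, p = kruskal(*groups)
print(f"Kruskal-Wallis H={h:.3f}, p={p:.4f}")
```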
Reliability analysis
The reliability of the constructs used in the questionnaire was measured using Cronbach's alpha. Generally, the Cronbach's alpha value for a particular construct should be 0.7 or above for that construct to be deemed reliable (Hart, 2006).

Due to the exploratory nature of the research, a Cronbach's alpha value of >0.6 was accepted (Hart, 2006). The average Cronbach's alpha value obtained for all the variables was 0.849993, confirming that the variables are reliable and should provide accurate results. These variables were used during the analysis of the size and type of the organisation.
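Cronbach's alpha is not provided by scipy, so a direct implementation of the standard formula is sketched below. The `items` matrix (respondents by items, Likert scores for one construct) is invented purely for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented Likert responses: 4 respondents x 3 items of one construct
items = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 4, 3]])
print(round(cronbach_alpha(items), 3))  # 0.975, above the 0.6 threshold used here
```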
Descriptive analysis
Outsourced ICT business functions

The first two research questions were: 'What are the most outsourced ICT business functions?' and 'To what extent are they outsourced?' The respondents were required to state how much they outsource a particular ICT business function using the list of percentage ranges previously described. The midpoint of each of these ranges was calculated and used so that an average level of outsourcing could be obtained.
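A tiny sketch of that recoding step: each band a respondent could tick is replaced by its midpoint before averaging. The band labels come from the questionnaire described earlier; the sample responses are invented.

```python
# Midpoints of the questionnaire's percentage bands
MIDPOINTS = {"0%": 0.0, "0-10%": 5.0, "10-25%": 17.5,
             "25-50%": 37.5, "50-75%": 62.5, "75-100%": 87.5}

# Invented responses for one function, e.g. Network Implementation
responses = ["50-75%", "75-100%", "25-50%", "50-75%"]
avg = sum(MIDPOINTS[r] for r in responses) / len(responses)
print(avg)  # 62.5 -> the average level of outsourcing for this function
```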
The results showed that the most outsourced ICT business function is Network Implementation (59.84), and the least outsourced function is Risk Management (19.44). Software Development (54.77), Hardware Implementation (50.48) and Systems Implementation (48.89) form the remainder of the four most outsourced functions. This is summarised in Table 3.
Factors influencing the outsourcing decision
The next two research questions asked were: 'What are the most influential factors when making a decision regarding outsourcing?' and 'To what extent do they influence the decision?' The respondents rated each factor from Unimportant to Very Important; from these ratings, averages were calculated and adjusted out of 100. Based on the research findings, the most influential factor is Cost (86.60) and the least influential factor is Legal Factors (65.20). The remainder of the top three comprises the need to concentrate on core business functions (84.00) and the availability of in-house expertise (82.20); the lack of in-house expertise may itself be what causes organisations to consider outsourcing. In Table 4, the weighted averages for the outsourced ICT functions are separated according to organisation size. Although medium-sized organisations contributed the least to this study (15%), they outsource all ICT business functions to the greatest extent, except with regard to Network and Hardware Implementation. Table 5 shows the extent to which each factor influences the various organisational sizes; the only noticeable differences concern Legal Factors and Risk. Small organisations consider legal factors and risk more than medium and large organisations do. This could be because legal and risk factors have greater implications for the wellbeing of a smaller organisation; risk is especially important, as it could lead to the dissolution of the organisation if not taken into account.
Forms of outsourcing
The results suggest that ASP is the most popular choice (42%) when choosing a form of outsourcing. Co-sourcing is the second most used form (24%), while the remaining three are used to a lesser extent: Inshoring (13%), Smartsourcing (11%) and Offshoring (10%).
Hypothesis testing
The following section analyses the results of the various hypothesis tests employed. It is divided into three distinct subsections, each dealing with a separate type of statistical test.
Spearman correlation
Relationship between organisational size and the ICT business functions being outsourced

Hypotheses H1 and H2 pertained to the relationship between the size of the organisation and which ICT business functions it outsources, as well as the factors influencing this decision. Since the size of the organisation is ordered (small to large), the Spearman correlation test was used.

Table 6 displays the p-values for all outsourced business functions against the size of the organisation. This test was conducted to evaluate whether there is any correlation between the size of the organisation and what it outsources (H1), and the extent to which each function is outsourced. For a significant interpretation, the p-value must be smaller than 0.05. The test reveals no significant correlation, as all p-values are above 0.05.

Although none of the outsourced business functions is significantly related to the size of the organisation, Project Management and Systems Consulting showed the most potential to correlate with organisational size. Again, the results may not be representative of the industry status due to the small sample size. As all the values in Table 6 are positive, the extent of outsourcing does increase with size, but not significantly. There is therefore no significant evidence to infer that H1 should be rejected: the size of the business does not affect the business functions being outsourced.

Relationship between organisational size and the factors that influence the outsourcing decision

H2 was tested to determine whether there is any relationship between the size of the organisation and the factors influencing decision-making regarding outsourcing. The Spearman correlation test results (see Table 7) reveal no correlation between the two variables. However, next to cost, risk shows the most potential to correlate with organisational size. This reinforces what was observed in Table 5: smaller organisations are influenced by risk to a greater extent than medium and large organisations. With a larger sample, this test might have shown a significant p-value for the correlation between the size of the organisation and the influencing factors. There is therefore no significant evidence to infer that H2 should be rejected, as the p-values of the Spearman correlation test are too high: the size of the business does not affect which factors influence the outsourcing decision.
Kruskal-Wallis
Relationship between the type of the organisation and the extent to which each ICT business function is outsourced

Hypotheses H3 and H4 pertained to the relationship between the type of the organisation and which ICT business functions it outsources, as well as the factors influencing this decision. As the type of the organisation is not ordered data and there are more than three independent groups of sampled data, the Kruskal-Wallis non-parametric test proved to be the most suitable.

Table 8 displays the relationship between the outsourced business functions and the organisational type (H3). The findings show no significance in any of the tests except one: the outsourcing of the various business functions has no relationship with the organisational type except for Network Implementation (p-value = 0.0411). There is therefore no significant evidence to infer that H3 should be rejected, as the p-values of the Kruskal-Wallis test are too high, for all outsourced functions except Network Implementation, for which there is strong evidence that the null hypothesis should be rejected. The type of the business affects the outsourcing of Network Implementation.

Relationship between the type of the organisation and the extent to which each factor influences the outsourcing decision

H4 was tested to determine whether there is any relationship between the organisational type and the factors influencing the decision regarding outsourcing. Table 9 shows the relationship between the influencing factors and the organisational type. None of these tests revealed any significance.

There is therefore no significant evidence to infer that H4 should be rejected, as the p-values of the Kruskal-Wallis test are too high. The type of business does not affect the extent to which each factor influences the outsourcing decision.
Chi-squared
For the fifth and final hypothesis, H5, the aim was to determine whether there is a relationship between the size of the organisation and the form of outsourcing used. As the aim of H5 was to determine whether the samples differ enough in some characteristic to show a pattern, the most suitable test is the chi-squared test (Connor-Linton, 2003). The results of the chi-squared test can be seen in Table 10.
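The sketch below illustrates H5's test of independence using scipy's chi-squared contingency routine. The counts are invented purely to show the shape of the computation; they are not the paper's Table 10.

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: small, medium, large organisations
# columns: ASP, co-sourcing, inshoring, smartsourcing, offshoring
observed = np.array([[11, 6, 3, 3, 2],
                     [ 4, 3, 1, 1, 1],
                     [15, 8, 5, 4, 4]])
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.3f}, dof={dof}, p={p:.4f}")  # no association if p >= 0.05
```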
The p-value of 0.8096 is very high, implying no significance. There is therefore no evidence to infer that H5 should be rejected: the size of the business does not affect which form of outsourcing is used.
Discussions and implications
This section serves as a link between the literature review and the research findings. The aim is to determine whether the research agrees or disagrees with the literature. Thereafter, possible reasons for this are identified, along with the researchers' understanding of the situation.
Outsourced functions
Dibbern et al. (2004) state that the most common ICT functions being outsourced include project management, systems and network implementation, and business and systems analysis. The results of the current study seemingly contradict several of these findings, although the prominence of network implementation is consistent with Dibbern et al. (2004).

Two other highly outsourced functions are software development and hardware implementation. This is in line with the generalisation that many organisations implement information systems to gain a competitive advantage but do not have the expertise to do it themselves. A system requires actual software programs and hardware to run on, and this could be the motivation behind these being among the most outsourced functions. The study by Scardino et al. (2005) is in agreement with the current study, revealing that network outsourcing is popular due to the optimisation of infrastructures, which allows organisations to focus on their core business functions and reduce costs.
Influencing factors
According to Costa (2001), Goo et al. (2000) and Meyers (2002), the factors influencing the outsourcing decision have evolved from being based on cost to more strategic considerations within the organisation. Benamati and Rajkumar (2002), as well as Fink and Shoeib (2003), have indicated that an organisation's wish to concentrate on its core activities could influence its outsourcing decision. The findings of this study reinforce those of the above researchers.

In this study, cost is the most influential factor in the outsourcing decision. Organisations are, however, also moving towards outsourcing smaller business functions so that they can concentrate on their core business activities; this was the second most influential factor in this study. The motivation is that, to remain competitive, organisations need to concentrate on their core functions and capitalise on their speciality. Organisations that follow this practice thus outsource the less important, smaller, more facilitative functions of the business.

This research has shown that in-house expertise is the third most influential factor. Costa (2001) explains that developing in-house skills, or attracting individuals with the necessary skills, is costly and time consuming. This would explain why it is so important to take this factor into consideration: it saves the organisation considerable cost to outsource when the skills are not available in-house.
Forms of outsourcing
In a survey conducted by Serafini et al. (2003), 44% of respondents used co-sourcing, as the benefits derived included specialised knowledge and technical skills. According to Cantara et al. (2005), Application Service Providers (ASPs) are becoming popular amongst organisations of all sizes, reflecting the great level of satisfaction among organisations using ASPs (Huntley, 2006).

The research findings show that co-sourcing contributes only 24% of the outsourcing, while ASPs account for 42%. This contradicts the co-sourcing dominance reported by Serafini et al. (2003), but is consistent with the literature of Cantara et al. (2005) and Huntley (2006): ASPs are the most used form of outsourcing, while co-sourcing is the second most used form.
Size vs outsourcing decisions
According to Coward (2003), outsourcing decisions within small-to-medium organisations are driven by factors similar to those in large organisations. They do, however, differ with regard to what they outsource; Coward (2003) makes specific reference to software development outsourcing.

The research findings agree with the first statement, about influencing factors, but disagree with the second, regarding outsourced business functions. According to this study there is no correlation between the size of the organisation and the factors influencing organisations, and no correlation with the outsourced functions was observed either. The literature states that software development outsourcing is affected by the size of the organisation; however, the statistical analysis of this study did not indicate any such correlation.

The lack of association between the size of the organisation and the outsourced ICT business functions or influencing factors indicates that organisations of all sizes are affected by outsourcing in the same way, and further suggests that their outsourcing decisions are the same.
Type vs outsourcing decisions
The type of the organisation plays only a small part when it comes to making outsourcing decisions. Organisations that outsource are usually those that have a low technology culture and lack the technological skills required (Costa, 2001).

This study indicates no correlation between the type of the organisation and the influencing factors or outsourced functions, with the single exception of Network Implementation.

The absence of correlation between the type of the organisation and the outsourced ICT business functions or influencing factors means that organisations of all types are affected by, and treat, their outsourcing decisions in the same way.
Size vs form of outsourcing
Coward (2003) claims that small-to-medium organisations typically outsource to vendors within their own country (inshoring), as they require close cooperation between themselves and the vendor.

The research findings reveal no relationship between the size of the organisation and the form of outsourcing used, which contrasts with the literature.

Since there is no association between the size of the organisation and the form of outsourcing used, organisations of all sizes generally use the same forms of outsourcing. This suggests that the current trend should continue: with ASPs leading the market on the strength of their service and reported benefits, they should remain the primary providers of outsourcing.
Conclusion
This empirical research was conducted to determine how South African organisations view their outsourcing: what they outsource and what influences their outsourcing decisions. The aim was then to determine whether there are any relationships between the size or type of the organisation and its outsourcing decisions.

Compared to the literature, the research findings revealed many unforeseen contradictions regarding the outsourced functions. The literature was taken from an international perspective, while the results were obtained from South African organisations; this could mean that South Africa is not in line with international outsourcing opinion.

Understandably, cost is the most influential factor when making a decision with regard to outsourcing. As with any acquisition, cost is considered the most important factor, which substantiates the finding that cost is the most important determinant in an outsourcing decision.

Outsourcing is not influenced by organisational size or type; the findings have shown that it is the same across all organisation sizes and types. Similarly, the form of outsourcing used is not affected by the size of the organisation.

This research has shown that outsourcing decisions are not affected or influenced by any of the variables tested. According to Benamati and Rajkumar (2002), many outsourcing decisions are based on prior outsourcing experiences. The findings revealed no correlation in any of the hypothesis tests; the size or type of the organisation does not affect the outsourcing decision. This may be because organisations base their outsourcing decisions on their prior experiences, which would explain why the size or type of the organisation did not influence those decisions. It can thus be deduced that each organisation finds the outsourcing strategy that best satisfies its needs based on its past experiences.
Recommendations for future research
The results obtained in this research might not be a true reflection or accurate assessment of the outsourcing industry in South Africa. Possible reasons, together with recommendations to improve this study, are identified below.
Increase sample size
The current sample consisted of seventy-two respondents. This small sample size could have caused the inconsistency with the existing literature. A recommendation is to increase the sample size, which would allow a more thorough assessment of the actual situation in South Africa with regard to outsourcing.
Equal distribution of organisational sizes
The majority of the replies were from large organisations, and there were very few from small organisations. A better distribution of organisation sizes would have reflected the results more accurately. It is suggested that the minimum sample size be at least 200 for each size of organisation.
Equal distribution of organisational types
There was a considerable difference between the numbers of replies from the different types of organisations. The distribution was uneven, and thus sufficient tests on the data could not be conducted. Future research should include a more even distribution of the various types of organisations to render the results more impartial.

Increasing the sample size, with more organisations of each size and type, should improve the integrity of the data and consequently the results.
Evaluate future trends in outsourcing
Despite India's popularity in the outsourcing market, organisations continue to seek other inshoring opportunities. This aspect was excluded from this study but could prove to be a worthy aspect for future outsourcing research.
Examine the outsourcing opportunities within South Africa
Currently positioned between developed and developing status, South Africa has not gained benefits from outsourcing similar to those of countries such as the US and India. South African outsourcing vendors have not made noticeable progress with regard to the supply of outsourcing services compared to countries like India. Future research should be conducted to determine why a transitional country like South Africa is not at the forefront of the outsourcing market.
Research questions and hypotheses

Another primary objective of this empirical research project was to determine what influences the outsourcing decisions: why are companies outsourcing, what are the influencing factors behind the decision, and how important are they when making the decision? The research questions were designed to ascertain the degree of influence that these factors have on the decision to outsource. The study also intended to answer research questions regarding the business functions and the significance of outsourcing for each function: What are the most outsourced ICT business functions? To what extent are they outsourced?

The null hypotheses tested were:
H1: The size of the business does not affect the extent to which each business function is outsourced.
H2: The size of the business does not affect the extent to which each factor influences the outsourcing decision.
H3: The type of the business does not affect the extent to which each business function is outsourced.
H4: The type of the business does not affect the extent to which each factor influences the outsourcing decision.
H5: The size of the business does not affect which form of outsourcing is used.

Table 4: The extent to which the different sizes of organisations outsource their ICT functions.
Table 5: The extent to which each factor influences the various organisational sizes.
"year": 2009,
"sha1": "3574dc9034fd528e5d8266b11420134f057a96c3",
"oa_license": "CCBY",
"oa_url": "https://sajbm.org/index.php/sajbm/article/download/549/477",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1bb71b10e833d4beccf586308b39f28ff4fbc955",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
198982109 | pes2o/s2orc | v3-fos-license | CLINICAL TRIAL HIGHLIGHTS – DYSKINESIA
Kevin McFarthing§, Parkinson's advocate, Innovation Fixer Ltd, Oxford, UK. §To whom correspondence should be addressed at kevin.mcfarthing@innovationfixer.com. Neha Prakash, Parkinson's Disease and Movement Disorders Center, Northwestern University Feinberg School of Medicine, Chicago, IL, USA. Tanya Simuni, Parkinson's Disease and Movement Disorders Center, Northwestern University Feinberg School of Medicine, Chicago, IL, USA.
INTRODUCTION
To paraphrase the old saying, every silver lining has a cloud. Levodopa remains the gold standard of symptomatic relief because it works, but as Parkinson's Disease (PD) progresses and the therapeutic window of levodopa narrows, many people with Parkinson's (PwP) develop involuntary movements. This phenomenon is termed levodopa-induced dyskinesia (LID), despite the fact that the mechanism of LID is much more complex.
At this point we would like to comment on the nomenclature from the perspective of PwPs. The term "levodopa-induced dyskinesia" suggests that dyskinesia appears purely as a result of taking levodopa. In reality, it is now widely accepted that the emergence of dyskinesia in the course of the disease reflects progressive neurodegeneration, not the duration of levodopa therapy. Unfortunately, the mistaken view that levodopa can only be taken for a certain length of time persists, leading to widespread "levodopa-phobia". Perhaps a small step towards dispelling this myth is to simply describe the symptoms as "dyskinesia".
The review below provides a summary of the prevalence and current understanding of the pathogenesis of dyskinesia, followed by a Phase 3 spotlight on amantadine ER, the only molecule currently approved for the management of dyskinesia in PD, and then a review of the novel therapeutic options in development.
PATHOGENESIS OF DYSKINESIA
Dyskinesia comprises involuntary hyperkinetic movements, presenting mostly in a choreic or choreoathetoid form, although rare ballistic, dystonic or stereotypical variants have been described as well. The various subtypes of dyskinesia and their body distribution have recently been summarized in a review by Espay et al. [1]. Briefly, dyskinesia can be classified as peak-dose or diphasic; the body distribution, timing and even treatment strategies for the two subtypes differ [1].
The risk of developing dyskinesia is approximately 25-40% after 4-6 years of levodopa therapy and increases thereafter. Dyskinesia impacts both the social and functional aspects of one's life. Even though surveys validate a significant negative impact of dyskinesia on social life, patients continue to prefer being ON with dyskinesia to being OFF [2]. Understanding the pathophysiology of dyskinesia aids in developing newer drugs, and in redirecting established ones, for adequate management.
For a long time, it was believed that levodopa therapy was a major cause of dyskinesia. However, preclinical data have demonstrated that levodopa therapy may sensitize the nigrostriatal system but does not induce dyskinesia in the setting of a preserved dopaminergic circuit [3]. Data from numerous clinical studies have established that delaying levodopa initiation does not prolong the latency to dyskinesia onset. Accordingly, dyskinesia is not a result of the duration of levodopa therapy but rather of a combination of various intrinsic and extrinsic factors.
With the loss of striatal dopaminergic innervation, the aromatic amino acid decarboxylase within serotonin neurons is used to convert exogenous levodopa to dopamine. Consequently, the resulting dysregulated dopamine delivery and maladaptive serotonergic transmission are linked to the expression of dyskinesia [9]. Preclinical data on modulating 5-HT receptors to control dyskinesia have been promising and serve as the rationale for targeting the serotonergic system.
Other major systems within the basal ganglia linked to dyskinesia include the cholinergic, opioid, adrenergic and cannabinoid systems [1]. The current clinical trials and available therapies focus on symptomatic management, but there is a need to ultimately direct our attention towards preventing the development of dyskinesia in the first place.
PHASE 3 IN FOCUS -ADAMAS PHARMA'S GOCOVRI
Background: The Phase 3 in focus for this edition of Clinical Trial Highlights continues the theme of symptomatic relief of dyskinesia in PD. We will review two Phase 3 trials already completed on amantadine ER (Gocovri), previously known as ADS-5102 during development, EASE LID [1] and EASE LID 3 [2].
Gocovri is a capsule containing 137mg extended-release amantadine, an uncompetitive antagonist at the N-methyl-D-aspartate receptor known to relieve the symptoms of dyskinesia, and currently the only available molecule for the management of dyskinesia. The rationale of extended release is to provide a therapeutic level of amantadine in the blood for a longer period of time, in this case enabling once-a-day dosing. Two capsules are administered at bedtime to give a slow increase during sleep, peak levels in the morning and a sustained concentration during the day.
Comments:
The primary outcome measure was the change in the Unified Dyskinesia Rating Scale (UDysRS), which has a range from 0 to 104. This is in common with most of the clinical trials for dyskinesia in PD.
The two trial plans are summarised in Table 1 below. The designs were very similar, with differences only in the additional extended timepoint of 24 weeks and some secondary outcomes, for example, the use of Clinician's Global Impression of Change (CGIC) in EASE LID.
Among the inclusion criteria for EASE LID were a score of at least 2 on question 4.2 of the Unified Parkinson's Disease Rating Scale (UPDRS); at least two episodes of half an hour of troublesome dyskinesia when ON; and at least 3 administrations of levodopa per day. Exclusion criteria included a history of dyskinesia that was exclusively diphasic, OFF state, myoclonic, dystonic, or akathetic without peak-dose dyskinesia. The inclusion and exclusion criteria for EASE LID 3 did not specify these restrictions, although the baseline data would indicate that these criteria would be met comfortably.
The results from both trials are summarized in Table 2 below: There were no significant differences in the UPDRS score (total or parts I, II or III) between Gocovri and placebo at either 12 or 24 weeks, suggesting Gocovri does not make other PD symptoms worse.
In the EASE LID study, adverse events (AEs) were recorded for 88.9% of Gocovri participants, compared to 60.0% in the placebo group. Most were mild to moderate, at 68.3% (Gocovri) and 53.8% (placebo). The most common AEs, occurring in at least 5% of the active arm, included visual hallucinations, peripheral edema, dizziness, dry mouth, and constipation. Other AEs occurring in less than 5% of participants in the Gocovri group included nausea (4.8%), confusion (3.2%), and orthostatic hypotension (1.6%) [1].
Visual hallucinations were reported by 15 participants (23.8%) in the Gocovri group and 1 participant in the placebo group. One report in the active group was classed as severe but did not meet the criteria for a serious AE. Thirteen participants (20.6%) in the Gocovri group discontinued the study drug because of AEs as did 4 participants (6.7%) in the placebo group [1].
In the EASE LID 3 trial, AEs were reported for 84% of Gocovri participants and 50% on placebo. Most reported AEs were classed as mild to moderate (70% with Gocovri and 45% with placebo). The most common AEs, with an incidence of at least 5% in the Gocovri group, were dry mouth, nausea, decreased appetite, insomnia, orthostatic hypotension, constipation, falls, and visual hallucinations. One participant reported 2 Gocovri-related serious AEs (constipation and urinary retention) [2].
A further participant on Gocovri experienced suicidal ideation (assessed by the investigator as related to the study drug), and a second participant attempted suicide (assessed by the investigator as not related to the study drug). Nineteen percent of the Gocovri group and 8% of placebo participants discontinued the study because of AEs [2].
Although Gocovri has not been compared to immediate release (IR) amantadine in a directly comparative efficacy study, there is a pharmacokinetic comparison showing Gocovri, administered once a day at bedtime, has a delayed time to maximum plasma concentration (12-16 hours), with a sustained level of amantadine throughout the day [3]. The steady state profile of Gocovri was significantly different to that of IR amantadine administered twice daily, such that the two formulations are not bioequivalent.
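To make the extended-release rationale concrete, here is a minimal one-compartment pharmacokinetic sketch comparing a once-daily bedtime ER dose with twice-daily IR dosing. All parameter values (absorption rates, the ~16 h elimination half-life, the doses) are illustrative assumptions, not the published pharmacokinetics of Gocovri; the point is only that a slow absorption rate pushes the peak into the morning.

```python
import numpy as np

# Illustrative one-compartment model with first-order absorption. Parameters
# are assumptions for demonstration only.
def conc(t, dose_times, dose, ka, ke, V=1.0):
    """Superposed Bateman curves for repeated dosing (arbitrary units)."""
    c = np.zeros_like(t)
    for td in dose_times:
        dt = t - td
        mask = dt > 0
        c[mask] += dose * ka / (V * (ka - ke)) * (
            np.exp(-ke * dt[mask]) - np.exp(-ka * dt[mask]))
    return c

t = np.linspace(0, 96, 961)                 # four days, in hours
ke = np.log(2) / 16                         # assumed ~16 h elimination half-life
# ER: two 137 mg capsules (274 mg) at 22:00 daily, with slow absorption.
er = conc(t, dose_times=np.arange(22, 96, 24), dose=274, ka=0.12, ke=ke)
# IR: assumed 100 mg at 08:00 and 20:00 daily, with fast absorption.
ir_times = np.concatenate([np.arange(8, 96, 24), np.arange(20, 96, 24)])
ir = conc(t, ir_times, dose=100, ka=1.0, ke=ke)
# A slow ka shifts the ER peak ~12-16 h after the bedtime dose, into the morning.
print(f"ER peak at t = {t[np.argmax(er)]:.1f} h")
```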
The results clearly show a statistically and clinically significant improvement in ON time without troublesome dyskinesia and a concomitant reduction in time with troublesome dyskinesia. Further analyses of the participant diaries have been published using pooled data from both trials [4].
Osmotica Pharmaceuticals has recently launched an extended-release amantadine preparation (Osmolex ER) in the US. The new drug is approved for the treatment of PD and for drug-induced extrapyramidal reactions in adults. Two Phase 3 trials were conducted for dyskinesia, ALLAY-LID-I (NCT02153645) and ALLAY-LID-II (NCT02153632). Despite this, the New Drug Application (NDA) was based on bioequivalence to amantadine, and Osmolex ER does not have the LID indication. It is not interchangeable with either amantadine or Gocovri.
Many PwP find dyskinesia one of the most distressing and embarrassing symptoms of PD, restricting social interaction and causing other knock-on effects such as weight loss. Therapies that prevent dyskinesia or replace the troublesome kind will be valuable tools in PD. While Gocovri is a valuable addition to the treatment armamentarium, it has a fairly high incidence of drug-induced adverse effects, and as such the development of novel therapeutics remains of value. These are reviewed further in this issue.
EXPERIMENTAL THERAPIES FOR DYSKINESIA IN THE CLINIC
There are eight therapies in clinical phase, summarised in Table 1 below. Seven of the programs are listed on www.clinicaltrials.gov; these will be described in more detail later in this article. There are around a million people in the US with PD, of whom an estimated 150,000 to 200,000 suffer from associated dyskinesia [2]. This is one reason why LID has been classified by the US FDA as an orphan disease.
When compared to clinical trials measuring the influence of a therapy on the progression of PD, studies measuring symptom relief require a much shorter assessment time. The duration of intervention for the dyskinesia therapies under review varies from seven days to twelve weeks, although the former (Oregon University) includes a two-week titration period and an assessment at six weeks post-treatment initiation.
All the projects have the Unified Dyskinesia Rating Scale (UDysRS) as the primary outcome, with only one study having additional primary outcomes. This focus on efficacy is complemented by secondary outcome measures that include the Unified Parkinson's Disease Rating Scale (UPDRS). Some studies have started to include digital technology as exploratory outcomes, hoping to collect more real-life data. All of the targets are alternative, non-dopaminergic neurotransmitter systems, aiming to reduce dyskinesia while ideally retaining the positive benefits of levodopa. One program is focused on the glutamate pathway, using negative allosteric modulation of metabotropic glutamate receptors.
Clevexel Pharma were developing CVXL-0107 (naftazone), a glutamate release inhibitor. This mode of action is thought to help relieve the symptoms of dyskinesia by reducing cortical input to the striatum, decreasing globus pallidus-mediated movement inhibition, and slowing neurodegeneration through inhibition of excitotoxicity. Preclinical data and then a multiple n-of-1 study [5] suggested that naftazone may have antiparkinsonian and antidyskinetic properties. A Phase 2a study (NCT02641054) was initiated to test the hypothesis but showed no difference between naftazone and placebo [6].
In addition, there are a number of molecules in earlier stages of development. Trevi Therapeutics are developing nalbuphine for dyskinesia. The program is in Phase 1 but has not yet been registered on www.clinicaltrials.gov. Four other projects are in the preclinical stage. Vistagen Therapeutics are developing AV-101, an NMDA receptor antagonist with preclinical data for dyskinesia in PD. While Vistagen's priority appears to be the current Phase 2 study for Major Depressive Disorder, they plan to move AV-101 into Phase 2 in 2020 [4]. Air Liquide Santé are developing inhaled xenon gas and Curemark have CM-PK, although very few details are available.
ADDEX THERAPEUTICS AND DIPRAGLURANT
Background: Addex Therapeutics have a technology platform aimed at discovering allosteric modulators of key drug targets. ADX48621, or dipraglurant, is a product of this platform and negatively modulates the metabotropic glutamate 5 receptor (mGluR5). It normalizes abnormal glutamate stimulation and mirrors the pharmacokinetic profile of levodopa, an advantage in the treatment of dyskinesia [1].

Outcome Measures: The primary outcome measure was the number of participants with abnormal safety and tolerability assessment parameters after 4 weeks.
Secondary outcome measures were the severity of dyskinesia as measured by the modified Abnormal Involuntary Movement Scale (mAIMS) after 4 weeks; change in PD severity as measured by participant diary at weeks 1, 2, 3 and 4, UPDRS part III at weeks 2 and 4, and UPDRS total score at week 4; and participant- and clinician-rated global impression of change in dyskinesia and PD at 4 weeks.
Comments:
The dipraglurant treatment group of 52 participants had a higher incidence of adverse events (AEs), 88.5%, than the placebo group of 24 (75%). While most participants completed the dose escalation, 2 participants in the active group discontinued due to AEs. No treatment effects were seen in safety monitoring variables.

ELTOPRAZINE

Background: Though eltoprazine also interacts with other receptors in the 5-HT system, it has strong affinity towards the 5-HT1A/B receptors thought to be primarily responsible for its action. Initially introduced in studies of pathological aggression in intellectually disabled patients, it has since been repurposed to study its effect in ADHD, dementia, and PD patients. Though clinical benefit for aggression is still inconclusive, its safety and tolerability have been demonstrated in human trials in both oral and intravenous forms [1].
In preclinical animal models, eltoprazine was shown to significantly reduce dyskinesia in levodopa-primed models in a dose-dependent fashion. When used in combination with levodopa in drug-naïve models, it demonstrated a protective effect. At lower doses, it was also shown to potentiate the anti-dyskinetic effect of amantadine. However, the benefit in dyskinesia came with mild loss of the anti-parkinsonian benefit of levodopa [2].
Based on the positive preclinical data, PsychoGenics, along with the Michael J Fox Foundation, funded a double-blind, placebo-controlled Phase 1/2a study exploring the safety profile and efficacy of eltoprazine for dyskinesia in PD participants. A total of 24 participants were recruited across two sites in Sweden. As a dose-finding study, this trial looked at the ability of eltoprazine to suppress dyskinesia in PD participants after single doses administered along with levodopa, while maintaining the benefits of levodopa. The three tested doses of eltoprazine, i.e. 2.5mg, 5mg or 7.5mg, were pre-selected on the basis of the safety profile from previous trials in non-PD participants. Compared to randomized placebo dosing, the 5mg single dose showed a statistically significant reduction in dyskinesia up to 3 hr post dosing. The 2.5mg and 7.5mg doses showed clinical improvement but failed to reach statistical significance. The dosing was safely tolerated without altering levodopa benefits [3]. Though the benefits were modest, the trial successfully paved the way for the Phase 2 studies.

Study Design: This is a double-blind, placebo-controlled, crossover, dose-range-finding interventional study designed to assess the safety, tolerability, and efficacy of eltoprazine on dyskinesia in PD participants. It explores 3 treatment doses and will assess their efficacy, compared to placebo, on the severity of dyskinesia, parkinsonian symptoms and participant function, along with safety and tolerability. The study uses standard scales as noted below along with motion sensors and electronic diaries.
The inclusion criteria require individuals between 30 and 85 years of age with a diagnosis of PD of at least 3 years' duration who have been on a stable dose of levodopa for 4 weeks prior to the screening visit. The dyskinesia is required to be (1) moderately to severely disabling, (2) present during 25% of the waking day on average, and (3) present for at least 3 months prior to study entry.
Standard exclusion criteria are applied. Participants with prior surgical treatment for PD, namely DBS, are not automatically excluded, but will be if the procedure was performed within 6 months of study inclusion or is planned during the study.
There are 4 study arms, as noted here, all with dosing for 3 weeks:
1. Eltoprazine HCl 2.5mg BID (5mg/day)
2. Eltoprazine HCl 5mg BID (10mg/day)
3. Eltoprazine 7.5mg BID (15mg/day)
4. Placebo capsules BID
Participants will be randomly assigned to each of the 4 arms. They will complete the 3-week treatment cycle before crossing over to the next study arm; a sketch of one way such a crossover assignment could be generated follows.
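The sketch below is hypothetical and not the trial's actual randomisation scheme; it simply illustrates assigning each participant a sequence of the four crossover periods described above.

```python
import itertools
import random

# Sketch (hypothetical): generate a random order of the four crossover
# periods for each participant.
arms = ["eltoprazine 2.5 mg BID", "eltoprazine 5 mg BID",
        "eltoprazine 7.5 mg BID", "placebo BID"]
sequences = list(itertools.permutations(arms))  # all 24 possible orders

random.seed(1)  # fixed seed so the example is reproducible
for pid in range(1, 4):
    print(f"participant {pid}: {' -> '.join(random.choice(sequences))}")
```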
The study is being conducted in the USA at the Parkinson's Disease and Movement Disorders Center in Boca Raton, FL.
Outcome:
The primary outcome measure is the change in the total UDysRS score. This will be assessed at the end of each treatment period on days 21, 42, 63 and 84.
Secondary outcome measures will include:
1. Effect on PD motor symptoms as assessed by the MDS-UPDRS, participant diaries and physiological measurement using the motion sensor system after 84 days.
2. Change in dyskinesia severity using the physiological motion sensor system after 84 days.
3. Participant function, using the questionnaires in the MDS-UPDRS and UDysRS to quantify dyskinesia and parkinsonian motor symptoms, also assessed after 84 days.
4. Lastly, safety and tolerability as assessed by adverse events, physical and neurological exams, safety laboratory values, vital signs and ECG, assessed after 94 days.
Current status
Though the study is listed as active and not recruiting, it is unknown whether the enrolment target has already been met. The ClinicalTrials.gov website has not been updated and no results have been posted yet.
Comments:
The molecule carries potential for meaningful benefit in dyskinesia. The design of the Phase 1/2a study limits any effective assessment of efficacy. In 2016, the US FDA granted the molecule orphan drug designation status for PD. Since 2017, eltoprazine's development has been handled by Elto Pharma, Inc., a joint venture between Amarantus and PsychoGenics. Elto Pharma recently entered into an agreement with Coeptis Pharmaceuticals, Inc. regarding further development.
Though the results from the Phase 2b study were expected by now, given the delay, we will have to wait to find out whether the molecule is truly efficacious for dyskinesia without compromising the levodopa benefits.
BUSPIRONE PROGRAMS
Background: Buspirone is an established anxiolytic that acts primarily on the serotonergic system. Though it also affects the 5-HT2 receptors and is an antagonist at the D2 receptor, its efficacy is thought to be primarily mediated through the 5-HT1A receptors. Given the evidence of serotonergic involvement in Parkinson's-associated dyskinesia, a number of studies are testing buspirone in PD. Previous human trials have established a safe profile of the drug, and it has a comparatively lower risk of serotonin syndrome [1].
Preclinical data suggest that buspirone is effective in reducing dyskinesia and physiologically reduces the firing rate of subthalamic neurons, but requires an intact nigrostriatal pathway to do so [2]. Buspirone has previously been studied in open-label trials exploring its effect on parkinsonism and dyskinesia. Studies that looked specifically into buspirone's role in parkinsonism demonstrated no benefit at lower doses (30mg/kg) but worsening of parkinsonism with anti-dyskinetic benefit at higher doses (~100mg/day) [3,4]. However, when explored specifically for dyskinesia in open-label studies, benefit was noted at low to moderate doses (15-60mg/day) with variable worsening of parkinsonism [5,6]. The anti-dyskinetic benefit was noted only in moderate to severe cases [6].
Recent clinical data from three PD patients with OFF-state dyskinesia after fetal neural grafts are of interest. Imaging studies showed increased serotonergic innervation of the striatum, and all three had significant suppression of dyskinesia after using buspirone [7,8]. This supports the serotonergic hypothesis and lays the groundwork for further studies to determine efficacy in dyskinesia.

Study Design: The study will randomly assign participants to two study arms. Arm 1 will receive buspirone orally in escalating doses: 10mg daily (morning dose) for the first two weeks, followed by 10mg twice a day for the next two weeks, finally building up to 10mg three times a day from week 5 to week 12. Arm 2 will receive placebo capsules administered in escalating doses to match arm 1. Assessments will be done every 2 weeks and at the end of the study.
ASSISTANCE PUBLIQUE -HOPITAUX DE PARIS
Outcome: The primary outcome evaluates change in the UDysRS between the placebo and treatment arms from baseline to week 12.
Secondary outcomes include:
1. Comparison of efficacy between the two arms as measured by MDS-UPDRS parts 3 and 4 at different time points within the 13-week treatment duration.
2. Comparison of quality of life between the two arms as measured by MDS-UPDRS parts 1 and 2 at different time points within the 13-week treatment duration.
3. Comparison between the two arms of the side-effect profile at different time points within the 13-week treatment duration.
4. The maximum dose tolerated by the participants at different time points within the 13-week treatment duration.
IRL-790 -INTEGRATIVE RESEARCH LABORATORIES
Background: IRL-790 is a dopamine D3 receptor antagonist with psychomotor stabilising properties. A previous Phase 1b study of IRL-790 in 15 participants (NCT03531060), using the UDysRS to assess symptoms, showed a median reduction of 11.5 points vs placebo and a mean reduction of 8.2 points vs placebo over four weeks. There was no effect on standard anti-parkinsonian medication. Inclusion criteria require PwP between the ages of 18 and 79 on a stable regimen of anti-parkinsonian medication. They must display waking-day dyskinesia of 25%, determined as a score of 2 on question 4.1 of the UPDRS part IV. One intriguing inclusion criterion is that participants must be willing and able to avoid direct exposure to sunlight from day 1 to day 28.

Comments: This is a Phase 2a study to further assess the efficacy of IRL-790 in the reduction of dyskinesia. The trial is still in the early stages, but it will be interesting to see if D3 antagonism can deliver anti-dyskinetic benefits without compromising motor control.
PRIDOPIDINE
Background: Pridopidine, developed by Arvid Carlsson Research Laboratories, is a potential neuroprotective and neurorestorative molecule shown to exert its effect via the sigma-1 receptors. It has mostly been explored for Huntington's Disease (HD) and was given orphan drug status by the FDA for HD. Teva Pharmaceuticals took over the development of the drug from NeuroSearch in 2012 but, given the lack of positive data from the HD trials, Teva is letting go of the molecule and Prilenia Therapeutics Development Ltd. has taken over its development.
In experimental PD animal studies, pridopidine has been shown to protect the nigral dopaminergic cell bodies and upregulate growth factors, leading to axonal sprouting and restoration of striatal dopaminergic fibre density.
The nigral neuroprotective effect has been associated with reduced microglial activation [1]. Preclinical data in PD models demonstrate a dose-dependent reduction in dyskinesia of up to 71% without jeopardizing the antiparkinsonian benefits of levodopa. There was also a notable reduction in ON time with disabling dyskinesia [2,3].
Most of the data for pridopidine come from HD trials. Though the trials failed to demonstrate consistent significant benefit in motor impairment in HD participants, all the studies established a safe and tolerable profile for the drug [4][5][6]. Since the safety profile is established, the molecule is being explored for dyskinesia in a Phase 2 trial as detailed below.

Study Design: This is a multicentre, double-blind, randomized, three-arm, parallel-group Phase 2 study evaluating the efficacy and safety of two doses of pridopidine vs placebo for dyskinesia in PD participants. The study will include participants with a clinical diagnosis of PD between the ages of 30 and 85 years. Mild to moderate dyskinesia is a prerequisite. Participants are required to be on a stable medication regimen (PD and non-PD) for at least 28 days prior to the study start date and to be able to maintain that throughout the study duration. Standard exclusion criteria apply. Participants with surgical interventions such as DBS are excluded.
The participants will be randomized to one of 3 parallel arms:
1. Arm 1: dose 1 in the form of oral capsules for 12 weeks following a 2-week titration period.
2. Arm 2: dose 2 in the form of oral capsules for 12 weeks following a 2-week titration period.
3. Arm 3: placebo in the form of oral capsules for 14 weeks.
The study is currently recruiting participants at two sites in the USA.
Outcome:
The primary outcome measure explores the change in dyskinesia from baseline to week 14. The score is calculated as the sum of parts 1, 3, and 4 of the UDysRS. No secondary outcomes have been posted.
Comments:
The pharmacology of the molecule and data from animal studies are promising. Given an established safety profile, it is one step ahead in the development for dyskinesia. Physiologically, its effect is similar to that of the growth factor GDNF in terms of neuronal dopamine protection and sprouting in the nigrostriatal axons. Though it failed to show efficacy for the HD population, its effect on dyskinesia is yet to be determined. | 2019-07-31T13:03:53.692Z | 2019-07-22T00:00:00.000 | {
"year": 2019,
"sha1": "4f7fcb4833a507523d9f197a10ad4deaf6198c77",
"oa_license": "CCBYNC",
"oa_url": "https://content.iospress.com/download/journal-of-parkinsons-disease/jpd199002?id=journal-of-parkinsons-disease/jpd199002",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4f7fcb4833a507523d9f197a10ad4deaf6198c77",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237865388 | pes2o/s2orc | v3-fos-license | The First Report on the Afternoon E-Region Plasma Density Irregularities in Middle Latitude
We report, for the first time, afternoon (i.e., from noon to sunset) observations of northern mid-latitude E-region field-aligned irregularities (FAIs) made by the very high frequency (VHF) coherent backscatter radar operated continuously since 29 December 2009 at Daejeon (36.18°N, 127.14°E, 26.7°N dip latitude) in South Korea. We present the statistical characteristics of the mid-latitude afternoon E-region FAIs based on the continuous radar observations. The echo signal-to-noise ratio (SNR) of the afternoon E-region FAIs is found to be as high as 35 dB, mostly occurring around 100-135 km altitudes. Most spectral widths of the afternoon echoes are close to zero, indicating that the irregularities during the afternoon are not related to turbulent plasma motions. The occurrence of afternoon E-region FAIs shows significant seasonal variation, with a maximum in summer and a minimum in winter. Furthermore, to investigate the relationship between afternoon E-region FAIs and sporadic E (Es), the FAIs have also been compared with Es parameters based on observations made from an ionosonde located at Icheon (37.14°N, 127.54°E, 27.7°N dip latitude), which is 100 km north of Daejeon. The virtual height of Es (h'Es) is mainly in the range of 105 km to 110 km, which is 5 km to 10 km greater than the bottom of the FAI. There is no relationship between the FAI SNR and the top frequencies (ftEs) or blanketing frequencies (fbEs). The SNR of FAIs, however, is found to be well related to (ftEs - fbEs).
INTRODUCTION
Extensive studies of E-region field-aligned irregularities (FAIs) have been made in the equatorial (e.g., Fejer & Kelley 1980), low-latitude (e.g., Patra & Rao 1999) and auroral regions (e.g., Haldoupis 1989) with radars and in situ measurements. In middle latitudes, investigation of E-region FAIs started with the observation of sporadic E (Es) at an altitude of ~105 km using a portable 50-MHz Doppler radar on the island of Guadeloupe in the French West Indies near Arecibo, Puerto Rico (Ecklund et al. 1981).
In the last three decades, intensive efforts, both observational and theoretical, have been devoted to achieving a better understanding of E-region FAIs in midlatitudes using the Middle and Upper atmosphere (MU) radar in Shigaraki (34.89°N, 136.10°E, 25.7°N dip latitude), Japan (e.g., Fukao et al. 1985a, b, 1991). Investigations using observations from the MU radar have revealed the essential features and characteristics of mid-latitude E-region FAIs. In MU radar observations, Yamamoto et al. (1991) first recognized two types of radar echoes in the middle-latitude E-region: "quasi-periodic (QP)" echoes appearing intermittently at altitudes above 100 km with periods of 5-20 min from post-sunset time to midnight, and "continuous" echoes appearing continuously at altitudes of 90-100 km mainly during the post-sunrise period. Several studies have been conducted to investigate the generation mechanism of the post-sunset QP echoes in the middle latitude. As a factor in the generation mechanism of QP echoes, Woodman et al. (1991) pointed out that atmospheric gravity waves could modulate Es layers to keep the plasma unstable, accounting for the quasi-periodicity. Later, Tsunoda et al. (1994) modified the theory of Woodman et al. (1991) to suggest that a polarization electric field resulting from spatial modulation of the Es layers by a gravity wave may play a role in generating the QP echoes. The SEEK (Sporadic-E Experiment over Kyushu) (e.g., Fukao et al. 1998; Tsunoda et al. 1998; Yamamoto et al. 1998) and SEEK-2 (Sporadic-E Experiment over Kyushu-2) (e.g., Saito et al. 2005; Yamamoto et al. 2005) campaigns were conducted in order to reveal the generation mechanism of QP echoes in the mid-latitude Es layers. From both campaigns, it was found that polarization electric fields were induced in the Es layer with QP echoes, mapped upward along the geomagnetic field, and played an essential role in determining the structure of the whole ionospheric E-region. Otsuka et al. (2007) observed E-region FAIs and medium-scale traveling ionospheric disturbances (MSTIDs) simultaneously using very high frequency (VHF) radar and 630-nm airglow images. They reported that the electric fields associated with the F-region MSTIDs could be closely coupled to those associated with QP echoes in the E-region.
MU radar observations, however, have never shown E-region irregularities occurring during the afternoon (i.e., from noon to sunset). No afternoon E-region FAIs in the middle latitude have been reported yet. On the other hand, a 40.8 MHz VHF radar operated continuously since 29 December 2009 at Daejeon (36.18°N, 127.14°E, 26.7°N dip latitude) in South Korea has often observed E-region FAIs in the afternoon. In this paper, therefore, for the first time, we report afternoon observations of the mid-latitude E-region FAIs made by the Daejeon radar. In section 2, we describe the Daejeon radar experiment in detail. Section 3 presents the statistical characteristics of the mid-latitude afternoon E-region FAIs based on the continuous radar observations. Moreover, to investigate the relationship between afternoon E-region FAIs and Es, in section 4 the FAIs are compared with Es parameters derived from an ionosonde located at Icheon (37.14°N, 127.54°E, 27.7°N dip latitude), which is 100 km north of Daejeon. Our main findings are summarized in section 5.
EXPERIMENT DESCRIPTION
A VHF coherent scattering radar was built at Daejeon (36.18°N, 127.14°E, 26.7°N dip latitude) in South Korea, aiming to continuously monitor middle-latitude FAIs in the Far East Asian sector. The Daejeon VHF radar operates at 40.8 MHz and has a maximum transmit power of 24 kW.
Table 1 summarizes the basic parameters and technical specifications of our radar. A total of 24 five-element Yagi antennas have been installed in a 12 × 2 phased array over an area of 85 m × 40 m. The radar beam is directed at a 48° zenith angle in the magnetic north direction to be perpendicular to the magnetic field lines at E- and F-region heights, so that backscattering from E- and F-region irregularities can be detected; the half-power full beamwidths in the horizontal and vertical directions are 10° and 22°, respectively. The radar has been operating routinely, sampling the E- and F-regions for one minute. The radar parameters used for E-region observation are given in Table 2. The inter-pulse period (IPP) for the E-region experiments is 2.5 ms and the pulse width is 6 μs, so the range resolution of the VHF radar measurements for the E-region is 900 m. The FAIs detected by the 40.8 MHz radar correspond to a scale size of 3.68 m (half the wavelength of the transmitted pulse). More detailed information on the radar experiment and data analysis can be found in Kwak et al. (2014) and Yang et al. (2015). During the radar observation periods, ionograms were obtained at 15-min intervals using a digital ionosonde with a sweep frequency range from 1 MHz to 20 MHz, operated routinely at Icheon (37.14°N, 127.54°E, 27.7°N dip latitude), which is 100 km north of Daejeon. We obtained three Es parameters: the virtual height of Es (h'Es); the top frequency (ftEs), the maximum frequency at which E-region echoes are observed; and the blanketing frequency (fbEs), the lowest frequency at which F-region echoes are observed.
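The derived quantities quoted above follow directly from the operating parameters; the short sketch below (ours, not the authors' processing code) reproduces them, together with a simple dipole-field estimate of the perpendicularity geometry.

```python
import math

# Sketch: checking the derived radar quantities quoted in the text,
# using c = 3e8 m/s.
c, f, tau = 3.0e8, 40.8e6, 6e-6   # speed of light, frequency (Hz), pulse width (s)

print(f"range resolution = {c * tau / 2:.0f} m")   # monostatic radar: c*tau/2 -> 900 m
print(f"Bragg scale size = {c / f / 2:.2f} m")     # half wavelength -> ~3.68 m

# For a beam pointing magnetic north, perpendicularity to B requires a zenith
# angle roughly equal to the local dip angle. With a dipole field and dip
# latitude 26.7 deg, tan(I) = 2*tan(26.7 deg); the dipole estimate (~45 deg)
# is close to the quoted 48 deg, the difference reflecting the real field
# geometry along the oblique path.
dip = math.degrees(math.atan(2 * math.tan(math.radians(26.7))))
print(f"dipole dip angle = {dip:.1f} deg (quoted zenith angle: 48 deg)")
```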
Fig. 1 illustrates the geometry of the Daejeon VHF radar and Icheon ionosonde experiments. The locations of the radar and ionosonde are marked with a black dot and a black square, respectively. The latitudinal curves with altitude information show the loci where the radar ray paths are perpendicular to the geomagnetic field at various E-region heights, assuming straight-line ray propagation paths.
OBSERVATIONS OF AFTERNOON E-REGION FAIS IN MIDDLE LATITUDE
Since its installation in December 2009, the VHF coherent scattering radar at Daejeon has been operating for E- and F-region ionospheric FAI research. Data were collected in spectral power form and stored for post-analysis; spectral characteristics were then parameterized in terms of signal-to-noise ratio (SNR), Doppler velocity, and spectral width. Post-sunrise continuous echoes are found at altitudes between 105 km and 110 km. In the post-sunset period just after sunset (20:25 LT), strong QP-type echoes (maximum SNR ~30 dB) are observed at about 120 km to 140 km altitude. While the characteristics of the post-sunrise continuous and post-sunset QP echoes in middle latitudes are already understood thanks to the MU radar observations (e.g., Yamamoto et al. 1991, 1992, 1994; Ogawa et al. 1995, 2002), little was known about the mid-latitude E-region echoes from noon to sunset. Indeed, although radar probing of mid-latitude E-region ionospheric electron density irregularities has been carried out for several decades, no afternoon E-region FAIs in the middle latitude have been reported as of yet. In Fig. 2(a), however, at Daejeon, very strong continuous-like afternoon echoes are seen at about 100 km to 135 km altitude with a thickness of 35 km, centered at about 115 km altitude, from 14:00 to 20:30 LT. The maximum SNR of the echoes is ~30 dB. This value is similar to that of the post-sunset QP echoes and is more intense than that of the post-sunrise continuous echoes. In Fig. 2(b), the Doppler velocities are almost all positive except between 16:00 and 17:30 LT, indicating that FAIs are moving away from the radar (or upward velocities). The absolute magnitude of the Doppler velocities of the afternoon echoes is mostly less than 30 m/s. This magnitude is smaller than that of the post-sunset QP echoes and similar to that of the post-sunrise continuous echoes. In Fig. 2(c), the spectral widths of the Doppler velocity of the afternoon echoes are mostly very low (maximum spectral width ~10 m/s). This magnitude is smaller than those of the post-sunset QP and post-sunrise continuous echoes. Fig. 3 shows the seasonal and local time distributions of the E-region FAI echoes observed at Daejeon, South Korea, from 2010-2017. An SNR larger than −10 dB was regarded as echoes from FAIs. The occurrence and non-occurrence of FAI echoes from the radar are represented by the red and blue sections, respectively. The white area in Fig. 3 indicates times at which there was no observation due to instrument problems, and the white dotted and dashed lines represent the noontime, sunset time, and sunrise time, respectively, against the day of the year. In this study, we observe that the occurrence of afternoon E-region irregularities between noon and sunset time is especially frequent.
From Fig. 3, it should be noted that we encountered significant data loss except during 2011 and 2012. For this reason, as we could not get meaningful information on the occurrence of the afternoon E-region FAIs during 2010 and 2013-2017 due to this data loss, we hereafter present echo occurrence statistics and characteristics for the afternoon E-region FAIs (i.e., irregularities between noon and sunset time) obtained from observations made during 2011-2012. The afternoon FAIs occur around altitudes of 100-135 km.
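For readers unfamiliar with how such occurrence statistics are built, the following sketch (with synthetic data, not the Daejeon archive) shows the −10 dB SNR thresholding step that turns a range-time SNR map into the occurrence/non-occurrence classification used in Fig. 3.

```python
import numpy as np

# Sketch (hypothetical data): classify each one-minute time bin as
# "FAI present" if any range gate exceeds the -10 dB SNR threshold.
rng = np.random.default_rng(0)
snr = rng.normal(-20, 8, size=(40, 1440))   # 40 range gates x 1440 minutes, dB

fai_present = (snr > -10).any(axis=0)        # one boolean flag per time bin
occurrence_rate = fai_present.mean() * 100
print(f"FAI echoes present in {occurrence_rate:.1f}% of time bins")
```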
RELATION BETWEEN AFTERNOON E-REGION FAIS AND E S LAYERS
The sporadic E (Es) layer within the E-region is a very thin layer of electrons that is extremely dense relative to its surroundings. The electron density in Es is usually 2 or 3 times greater, and sometimes reaches F-region electron densities. Wind shear theory (Whitehead 1989) with long-lived metallic ions is a well-known mechanism for Es in the middle latitude. In general, E-region FAIs in the middle latitude are related to the level of Es activity, and several studies (e.g., Yamamoto et al. 1992; Hussey et al. 1998; Ogawa et al. 2002; Maruyama et al. 2006) have sought to understand the relationship with Es. However, those studies reported results for time periods other than the afternoon. For this reason, we investigate the relationship between the afternoon E-region FAIs at middle latitude and variations of the Es layers. To accomplish this, we compare the variations in the afternoon Es parameters observed from the Icheon ionosonde with the variations in the afternoon E-region FAIs observed over Daejeon.
In Figs. 6(a) and 6(b), we present examples of the SNR of afternoon E-region FAIs from the Daejeon radar and the virtual height of Es (h'Es) from the Icheon ionosonde on 22 June and 28 July 2011. Color contours represent the range-time SNR maps of the afternoon E-region FAIs, and circle-solid lines in green represent h'Es. These figures clearly show that h'Es is located on average in the 105-110 km altitude range, 5-10 km above the FAI bottom side. The high virtual height values commonly observed during daytime can be attributed to the group delay effect on daytime HF frequencies due to underlying ionization. Lee et al. (2000) found that h'Es was in the range of 100-110 km altitude and considered these heights to be 5-10 km higher than the actual heights, depending on the underlying ionization.
According to recent studies of mid-latitude Es (Hussey et al. 1998; Ogawa et al. 2002; Maruyama et al. 2006; Patra et al. 2009; Phanikumar et al. 2008, 2009), the top frequency (ftEs) corresponds to the local maximum electron density of a non-uniform layer or the peak electron density of a spatially uniform layer, and the blanketing frequency (fbEs) is the minimum value among the peak electron densities of the layer. For a non-uniform Es layer, the difference between ftEs and fbEs was found to be related to the irregularities present in the Es layer, albeit on a larger scale than those observed by VHF radars. The SNR of the afternoon E-region FAIs is poorly correlated with both ftEs and fbEs and reasonably well correlated with (ftEs - fbEs). The almost complete lack of correlation between FAIs and fbEs indicates that commonly occurring blanketing Es is insufficient for the generation of afternoon E-region irregularities in the middle latitude. Instead, large values of (ftEs - fbEs) probably increase the SNR of FAIs, implying that patchy-type Es structures must be responsible for the excitation of irregularities.
Using the MU radar and an ionosonde at Shigaraki in Japan, Yamamoto et al. (1992) found that the QP radar echoes from the mid-latitude E-region in the nighttime were correlated with Es activity. In addition, based on simultaneous observations of FAIs and Es from middle latitude, Ogawa et al. (2002) and Maruyama et al. (2006) found good correlation between the QP radar echoes and enhanced values of (ftEs - fbEs). Yamamoto et al. (1992), however, found that radar echoes from the mid-latitude E-region were not detected in the summer afternoon, when the ionosonde observed maximum Es activity. Similar results have been reported from Chung-Li, Taiwan (Lee et al. 2000). On the other hand, E-region observations from the Daejeon VHF radar and Icheon ionosonde clearly show that significant afternoon FAIs are detected, especially in the summer season, and are correlated with Es (especially ftEs - fbEs). Based on these different observations in the mid-latitude E-region, the generation of FAIs is closely related to localized density gradients within the Es layer that provide favorable conditions for the growth of instability.
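The correlation analysis described above can be illustrated in a few lines of code; the numbers below are invented for demonstration and do not come from the Icheon or Daejeon datasets.

```python
import numpy as np

# Sketch with made-up numbers: does the peak FAI SNR track the Es
# "patchiness" proxy (ftEs - fbEs) better than ftEs or fbEs alone?
ft = np.array([6.1, 7.4, 5.2, 8.0, 6.8, 7.9, 5.5, 6.6])      # MHz, hypothetical
fb = np.array([3.0, 3.1, 3.4, 3.2, 3.3, 2.9, 3.5, 3.1])      # MHz, hypothetical
peak_snr = np.array([12., 22., 3., 28., 15., 26., 5., 16.])  # dB, hypothetical

for name, x in [("ftEs", ft), ("fbEs", fb), ("ftEs - fbEs", ft - fb)]:
    r = np.corrcoef(x, peak_snr)[0, 1]   # Pearson correlation coefficient
    print(f"corr(SNR, {name}) = {r:+.2f}")
```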
CONCLUSION
In this paper, for the first time, we present the characteristics and statistical morphology of the mid-latitude afternoon E-region FAIs based on continuous observations from the VHF coherent backscatter radar at Daejeon in South Korea. The main findings of this paper are as follows.
First, it is observed that the occurrence of the afternoon E-region FAIs in the middle latitude is maximum during the summer season and minimum during the winter season.
Second, the echo SNR of the afternoon E-region FAIs in middle latitude is found to be as high as 35 dB, mostly occurring around 100-135 km altitudes. Most spectral widths of the afternoon echoes are close to zero, indicating that the irregularities during the afternoon are not related to turbulent plasma motions. Third, the relationship between afternoon E-region FAIs and Es has been investigated based on Daejeon radar observations and Es parameters from an ionosonde located at Icheon, which is 100 km north of Daejeon. It is found that the virtual heights of Es (h'Es) are mainly in the height range of 105-110 km, and these heights are 5-10 km greater than the FAI bottom side. No relation is found between FAI SNR and the top frequency (ftEs) or blanketing frequency (fbEs). Instead, it is supposed that large values of (ftEs - fbEs) enhance the SNR of FAIs, suggesting that patchy-type Es structures must be responsible for the excitation of irregularities.
Fig. 1. Map showing the locations of the Daejeon coherent scatter radar and the Icheon ionosonde in South Korea. The horizontal curves are the loci where the radar ray paths are perpendicular to the geomagnetic field at E-region altitudes.
Fig. 2. Range-time variations of (a) signal-to-noise ratio (SNR), (b) Doppler velocity and (c) spectral width of the E-region irregularities observed on 22 June 2011. The vertical dotted line represents sunset time in the E-region.
Fig. 3. Seasonal and local time distributions of the E-region FAI echoes observed at Daejeon, South Korea, during 2010-2017. A signal-to-noise ratio larger than −10 dB was regarded as echoes from FAIs. Red represents the E-region FAI occurrence against local time and day of year for each year during 2010-2017. FAI, field-aligned irregularities.
Fig. 4. Seasonal percentage occurrence of the afternoon E-region FAIs as a function of local time observed at Daejeon, South Korea, during 2011-2012. FAI, field-aligned irregularities.
Fig. 6. Observations of the afternoon E-region FAIs and Es on (a) 22 June 2011 and (b) 28 July 2011. Color contours and circle-solid lines in green represent range-time SNR maps of the afternoon E-region FAIs and h'Es for each day, respectively. FAI, field-aligned irregularities; SNR, signal-to-noise ratio.
In this study, we used ftEs and fbEs to represent the maximum and minimum values of the peak electron density in the Es layer, respectively, and (ftEs - fbEs) as representative of irregularities. Fig. 7 shows the seasonal and local time distributions of the Es parameters observed from the Icheon ionosonde and the peak SNR of the afternoon E-region FAIs observed from the Daejeon radar in 2011. In these figures, we show ftEs in the top panels, fbEs in the second panels, (ftEs - fbEs) in the third panels, and the peak SNR of the afternoon E-region echoes in the bottom panels. The relationship of the afternoon FAIs with Es activity is evident from Figs. 7(a)-7(d).
Fig. 7. (a)-(c) Seasonal and local time distributions of the three Es parameters (ftEs, fbEs and ftEs - fbEs) observed from the Icheon ionosonde and (d) peak SNR of E-region FAIs observed from the Daejeon VHF radar in 2011. SNR, signal-to-noise ratio; FAI, field-aligned irregularities; VHF, very high frequency.
Table 1. Specifications of the VHF ionospheric radar at Daejeon
Table 2. Observational mode for E-region FAIs | 2021-09-28T01:16:07.628Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "93285836036cd86a31b098b3a0a78f044969744a",
"oa_license": "CCBYNC",
"oa_url": "http://www.janss.kr/download/download_pdf?pid=jass-38-2-135",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "afd977e749045b85177d344e517f5037318a1cd1",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
267766904 | pes2o/s2orc | v3-fos-license | Burden of non-serious infections during biological use for rheumatoid arthritis
Introduction Biologicals have become a cornerstone in rheumatoid arthritis (RA) treatment. The increased risk of serious infections associated with their use is well-established. Non-serious infections, however, occur more frequently and are associated with a high socioeconomic burden and impact on quality of life but have not received the same attention in the literature to date. The aim of this study was to gain insight into the various non-serious infections reported in RA patients using biologicals and their experienced burden. Materials and methods The Dutch Biologic Monitor was a prospective observational study that included adults with rheumatoid arthritis and biological use who answered bimonthly questionnaires on the adverse drug reactions (ADRs) they experienced from their biological and reported the associated impact score (ranging from 1, no impact, to 5, very high impact). ADRs were assigned a MedDRA code by pharmacovigilance experts and labeled as definite, probable, possible or no infection by infectious disease professionals. Descriptive statistics were performed using medians and interquartile ranges. Results A total of 586 patients were included in the final analysis. Eighty-five patients (14.5%) reported a total of 421 ADRs labeled as probable or definite infections by the experts. Patient-assigned burden was ADR-specific. Upper respiratory tract infections were most frequently reported and had a high rate of recurrence or persistence, with a median impact score of 3.0 (IQR 2.0–3.0) which remained stable over time. Discussion Non-serious infections significantly outnumbered serious infections in this real-life cohort of RA patients using biologicals (77.1 non-serious infections and 1.3 serious infections per 100 patient years, respectively). Infections in the upper respiratory tract were rated as having an average burden, which remained constant over a long period of time. Awareness of the impact of recurrent and chronic non-serious infections may enable healthcare professionals to timely treat and maybe even prevent them, which would lessen the associated personal and socioeconomic burden.
Introduction
Biologicals have become a cornerstone in rheumatoid arthritis (RA) treatment. The increased risk of serious infections associated with their use is well-established. Non-serious infections, however, occur more frequently and are associated with a high socioeconomic burden and impact on quality of life, but have not received the same attention in the literature to date. The aim of this study was to gain insight into the various non-serious infections reported in RA patients using biologicals and their experienced burden.
Materials and methods
The Dutch Biologic Monitor was a prospective observational study that included adults with rheumatoid arthritis using a biological, who answered bimonthly questionnaires on the adverse drug reactions (ADRs) they experienced from their biological and reported the associated impact score (ranging from 1, no impact, to 5, very high impact). ADRs were assigned a MedDRA code by pharmacovigilance experts and labeled as definite, probable, possible or no infection by infectious disease professionals. Descriptive statistics were performed using medians and interquartile ranges.
Results
A total of 586 patients were included in the final analysis. Eighty-five patients (14.5%) reported a total of 421 ADRs labeled as probable or definite infections by the experts. Patient-assigned burden was ADR-specific. Upper respiratory tract infections were most frequently reported and had a high rate of recurrence or persistence, with a median impact score of 3.0 (IQR 2.0-3.0) which remained stable over time.
Discussion
Non-serious infections significantly outnumbered serious infections in this real-life cohort of RA patients using biologicals (77.1 non-serious infections and 1.3 serious infections per 100 patient years, respectively).
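The "per 100 patient years" figures follow from the standard incidence-rate formula, events divided by total follow-up time; the sketch below back-calculates an illustrative denominator, since the exact patient-year total is not given in this excerpt.

```python
# Sketch of the incidence-rate arithmetic behind "per 100 patient years".
def rate_per_100py(events: int, patient_years: float) -> float:
    """Incidence rate expressed per 100 patient-years of follow-up."""
    return events / patient_years * 100

# 421 non-serious infections over an assumed ~546 patient-years of follow-up
# (back-calculated from the quoted 77.1 per 100 patient-years; the paper's
# exact denominator is not given here).
print(f"{rate_per_100py(421, 546):.1f} per 100 patient-years")  # ~77.1
```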
Introduction
Rheumatoid arthritis (RA) is an auto-inflammatory disease that primarily affects the joints. Its prevalence varies geographically and has been estimated to reach up to 1.5% of the population in some regions, with women being affected two to three times as often as men [1]. Its impact is significant, with reduced quality of life and increased morbidity and mortality among its patients [2][3][4][5]. Treatment of RA is often complex and may require the use of multiple immunomodulatory drugs to achieve disease remission [6]. The advent of biological therapies has marked a significant turning point in the treatment of RA. Biologicals target specific components of the immune system involved in the pathogenesis of RA. For the therapy of RA, a number of biological classes are available; each has a unique mode of action, safety profile, and efficacy. Tumor necrosis factor alpha (TNF-alpha) inhibitors are the oldest and most commonly used biological class in RA. Drugs such as adalimumab, etanercept, certolizumab pegol, golimumab, and infliximab are examples of TNF-alpha inhibitors. They specifically target the pro-inflammatory cytokine of the same name. Sarilumab and tocilizumab are examples of interleukin-6 (IL-6) inhibitors, which specifically target the interleukin-6 molecule. Rituximab causes a depletion of B-cells by binding to the CD-20 molecule on their surface. With abatacept, T-cell costimulation is suppressed. Biologicals are highly effective, have an acceptable benefit-to-risk profile, are well-tolerated in the long term, and are therefore increasingly used in RA treatment. However, accessibility and cost lead to an unequal distribution of their use across the world [7], with use correlating positively with a country's socioeconomic status. According to Grellmann et al., biologicals are currently used by 29% of German RA patients [8].
The use of biologicals is associated with an increased risk of serious infections, the occurrence of which has been extensively studied since they came to market in the late nineties. Moreover, published randomized controlled trials (RCTs) reveal the frequency of non-serious infections to be as much as 10 times higher than that of serious infections [9][10][11][12][13], and infections are among the most frequently reported adverse reactions in RA patients using biologicals. However, unlike serious infections, non-serious infections have not been given the same attention in the scientific literature [9,[14][15][16]. Because patients generally do not seek the help of healthcare professionals for non-serious infections, such as the common cold, healthcare professionals may underestimate their incidence and importance.
This raises the question of what the occurrence and impact of non-serious infections in RA patients using biologicals really is. The most frequently reported non-serious infections during trials and observational studies of biological use are respiratory tract and urinary tract infections [9,14]. Prior research shows that such infections carry a high socioeconomic burden [17][18][19][20][21]. Unfortunately, there is currently no standardized definition of non-serious infections, making research on this topic challenging [9].
Self-reporting of adverse drug reactions (ADRs, defined as harm caused by the correct use of the drug in question) by patients is an important component of pharmacovigilance and can take place as part of a trial or an observational study, or can be based on spontaneous reporting. In spontaneous reporting, patients tend to report more (both known and unknown) adverse events (AEs, defined as any harm that occurred during correct or incorrect use of the drug, not necessarily reflecting a causal relationship) than their treating physicians, and do so more quickly and in more detailed terms [22,23]. This may therefore contribute to earlier ADR detection. Patients report more on the impact of the ADR on their life and well-being than health care professionals (HCPs) do [24]. However, symptoms reported by patients may be of lower medical quality than the reports of HCPs [22,23].
In self-reporting as part of a trial or cohort study, patients also report more ADRs than HCPs, and agreement between HCPs and patients on ADRs is varied and dependent on the specific ADR [25,26]. Patients report more general system disorders such as fatigue and malaise. However, they report fewer infections than HCPs. As in spontaneous reporting, self-reporting during trials and observational studies is more reflective of the impact on patients' quality of life [26,27].
Patient-generated data provide an important addition to standard HCP-based ADR monitoring [26,28,29], particularly where quality of life is concerned. As self-reporting is the only means by which ADR burden can be estimated, registration of ADR impact on daily life and well-being is an essential addition to current pharmacovigilance strategies. Having more detailed information on this aspect of ADRs may enable HCPs and patients alike to construct better treatment strategies that consider the experienced burden.
The aim of this study was to gain more insight into the various serious as well as non-serious infections reported by RA patients using biologicals, and their burden as perceived by the patients themselves. To achieve this, we used self-reported ADRs in web-based ADR questionnaires as part of the Dutch Biologic Monitor [29]. In the questionnaires, patients were asked whether adverse drug reactions had occurred. Of course, this does not necessarily imply the existence of a true causal relationship; for this reason, we consider the reported events as "potential ADRs".
Data collection
We used data from all RA patients included in the Dutch Biologic Monitor, which collects patients' experiences using web-based questionnaires addressing the use of biologicals and potential ADRs attributed to these drugs [29]. The Medical Ethics Committee Brabant judged that the Dutch Biologic Monitor does not require specific ethical approval (METC Brabant NW2016-66), since it collects data by means of questionnaires and existing data sources only. The monitor was approved by the scientific committees and boards of directors of the participating hospitals. Patients were consecutively recruited by HCPs in nine Dutch hospitals during outpatient visits and through letters sent by their pharmacy. Patients were eligible when they were ≥18 years of age, had an established RA diagnosis, were treated with a biological and were proficient in Dutch. All participants provided a digital informed consent form prior to enrolment. Enrolled patients were asked to complete an online questionnaire once every two months to register potential ADRs. Patients were asked to share any experiences they had with the medications under study. These events will henceforward be referred to as "potential ADRs" because the patient suggested a causal relationship between the recorded events and the drug, but this relationship could not be verified. A causality assessment between the reported potential ADRs and the medication could not be performed. The following patient characteristics were registered in the first questionnaire: age, weight, length, comorbidities, smoking status, biological (generic and brand name and its start date) and RA-related co-medication use (conventional synthetic disease-modifying anti-rheumatic drugs (csDMARDs), prednisone). There was no predefined maximum number of questionnaires that could be completed by the patients over time. The study was conducted from January 1, 2017, until December 31, 2020. Patients stopped receiving questionnaires when informed consent was withdrawn or if a previous questionnaire was left unanswered for 21 days.
Patient-reported ADRs
In every questionnaire, patients were asked to report whether they had experienced potential ADRs since the last questionnaire. If potential ADRs were reported, the following information was requested: the patient's description of the potential ADR, its start date, and its burden on a five-point Likert scale [30], with 1 being no burden and 5 a very high burden. When a potential ADR was mentioned, patients were requested to report its status in each subsequent questionnaire until a stop date for the potential ADR was provided. In this way, the development of the potential ADR (getting better, getting worse, staying the same, or resolved, and if so, a stop date) was recorded, along with whether the patient had contacted HCP(s) because of the ADR and which type of HCP was contacted, whether treatment of the potential ADR was provided (if applicable), whether the patient was hospitalized, and what actions were taken by the patient. A potential ADR was considered serious when treatment involved hospitalization, was life-threatening, or resulted in death or a significant disability [31].
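A minimal sketch of the descriptive statistics described in the Methods, computing the median and interquartile range of the 1-5 Likert burden scores per ADR; the scores shown are hypothetical, not the study data.

```python
import numpy as np

# Sketch (hypothetical scores): summarise 5-point Likert burden ratings
# per ADR with median and interquartile range.
impact = {
    "Nasopharyngitis": [3, 2, 3, 2, 3, 4, 3],
    "Urinary tract infection": [2, 3, 2, 2, 4],
}

for adr, scores in impact.items():
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    print(f"{adr}: median {med:.1f} (IQR {q1:.1f}-{q3:.1f})")
```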
Patient-reported descriptions of potential ADRs, provided as free text, were interpreted by trained assessors at the Dutch Pharmacovigilance Centre Lareb and coded using the Medical Dictionary for Regulatory Activities Terminology (MedDRA version 23.1) [32]. The MedDRA system encompasses a hierarchical structure in which individual potential ADRs can be classified according to, first, the System Organ Class (SOC), then the High-Level Group Terms (HLGT), then the High-Level Terms (HLT), then the Preferred Terms (PT) and, finally, the Lower Level Terms (LLT).
Assigning infection probability and recoding by medical professionals
Since it was not always clear to what extent a reported potential ADR could be considered an infection, all reported potential ADRs were rated on the probability of being an infection by physicians specialized in infectious diseases (BB, JLM, EdV). As potential ADRs were patient-reported, generally little to no information about additional diagnostics was provided (for example, when herpes zoster was reported as a potential ADR, there was no information on whether a polymerase chain reaction (PCR) was positive for varicella zoster virus). Currently, no standardized method of assigning infection probability exists; therefore, we made use of clinical judgement. Predefined options were "definite", when infection was considered certain (for example, "herpes zoster"); "probable", when infection was considered likely, but the description did not allow for a definitive confirmation (for example, "infection susceptibility increased"); "possible", when infection was considered unlikely but could not be ruled out (for example, "malaise"); and "non-infectious", when the potential ADR was considered definitely not an infection (for example, "hematoma"). This physician's label did not imply any sort of causal relationship between the potential ADR and the drug. Reported potential ADRs were independently reviewed by BB, EdV and JLM (physicians specialized in infectious diseases), with BB reviewing all potential ADRs and EdV and JLM each reviewing half, so that every potential ADR was reviewed twice. Discrepancies in rating decisions were resolved by discussion between BB, EdV, JLM and EvP (physician specialized in pharmacovigilance and general practice) until consensus was reached. Because the initial MedDRA coding, as interpreted by trained MedDRA assessors at pharmacovigilance centre Lareb, in some cases showed discrepancies between the assigned coding and the expert opinion of the physicians, potential ADRs deemed definite, probable or possible infections were recoded to a more appropriate MedDRA term by the authors when needed. For example, if the original MedDRA coding specified "inflammation of wound" under PT, the PT was recoded to "wound infection" to align with the judgment of the potential ADR being a "definite" infection. When recoding, we decided to code only from the SOC level to the PT level, since LLTs tend to be too detailed or are synonyms of the overarching PT. The MedDRA system lacks specific information on the organ system in which infections occur. Although each PT is formally linked to a single MedDRA primary System Organ Class, it could not always be ruled out that the infection may have occurred in another organ system, and insufficient information was available to adequately reflect where in the body infections took place. To solve this, each PT was assigned by BB to a "system organ category", predefined by the authors, in which the potential ADR had most likely occurred. Definite, probable and possible infections were categorized into the following predefined categories: bone and joint, ear, eye, gastro-intestinal, genital, respiratory tract (divided into upper respiratory tract and lower respiratory tract), lymphatic tissue, neurological, oral, skin/soft tissue, systemic, urinary tract, other, or unknown. For a list of PTs corresponding to each category, see Table 1 in S1 Appendix. Definite infections were assigned a most likely pathogen type (bacterial, viral, fungal or unknown; see Table 2 in S1 Appendix) by BB, EdV, JLM and EvP through independent review and consensus discussion where needed.
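To make the coding and recoding step concrete, the sketch below represents a fragment of a PT-level hierarchy and the recoding rule with the example given above; the dictionary entries are illustrative stand-ins, not the licensed MedDRA terminology.

```python
# Illustrative SOC > HLGT > HLT > PT path for one term (stand-in values,
# not the licensed MedDRA dictionary).
example_path = {
    "SOC": "Infections and infestations",
    "HLGT": "Infections - pathogen unspecified",
    "HLT": "Wound and site infections",
    "PT": "Wound infection",
}

# Recoding applied when the original PT conflicted with the physicians'
# infection label, e.g., "Inflammation of wound" for a definite infection.
RECODES = {"Inflammation of wound": "Wound infection"}

def recode_pt(original_pt: str, infection_label: str) -> str:
    """Recode only potential ADRs judged definite, probable or possible infections."""
    if infection_label in ("definite", "probable", "possible"):
        return RECODES.get(original_pt, original_pt)
    return original_pt

print(recode_pt("Inflammation of wound", "definite"))        # Wound infection
print(recode_pt("Inflammation of wound", "non-infectious"))  # unchanged
```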
Database cleaning
Start and stop dates and age, weight and length were entered into date or numerical fields. However, no automated validation of these fields was carried out upon entering the data in the system. Consequently, we identified several inconsistencies in age, weight, length and start and stop dates of both biologicals and potential ADRs, which we dealt with based on discussion among all authors until consensus was reached. See Table 3 in S1 Appendix.
Medians and interquartile ranges (IQRs) were calculated when data were not normally distributed. Outcomes were calculated for the total number of potential ADRs labeled as definite or probable infections by our team (whenever a potential ADR was reported in multiple questionnaires, it was counted multiple times). We used bar charts to visualize the distribution of probable and definite infections across various organ systems, and box plots to visualize the median impact score and percentile ranges of probable and definite infectious events in subsequent questionnaires. To achieve this, we plotted the median impact score for every questionnaire in which a probable or definite potential infectious ADR was sequentially reported (the first questionnaire in which a potential ADR was reported being "1", the second one being "2", etc.). This was irrespective of the potential ADR reporting timeline: if a potential ADR was reported by a patient for the first time in, for example, the 15th questionnaire, it was plotted as questionnaire number 1 in this visualization.
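The descriptive analysis is straightforward to reproduce. The sketch below, on invented data, computes a median with IQR and renumbers each patient's sequential reports in an organ system as 1, 2, 3, and so on, irrespective of when in the study they occurred, mirroring the x-axis of the box plots described above.

```python
import statistics

def median_iqr(values):
    """Median with interquartile range, for non-normally distributed data."""
    q1, median, q3 = statistics.quantiles(values, n=4)
    return median, (q1, q3)

# (patient_id, questionnaire_number_in_study, impact_score) for one organ system
reports = [("A", 15, 3), ("A", 16, 3), ("B", 2, 4), ("B", 3, 2), ("B", 4, 2)]

# Group impact scores by sequential report index per patient (1-based):
# patient A's 15th study questionnaire becomes that patient's report "1".
by_index: dict[int, list[int]] = {}
per_patient_count: dict[str, int] = {}
for pid, _, score in sorted(reports, key=lambda r: (r[0], r[1])):
    per_patient_count[pid] = per_patient_count.get(pid, 0) + 1
    by_index.setdefault(per_patient_count[pid], []).append(score)

print(by_index)                     # {1: [3, 4], 2: [3, 2], 3: [2]}
print(median_iqr([3, 3, 4, 2, 2]))  # median with (Q1, Q3)
```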
Results
A total of 586 RA patients were included in the cohort, who together completed 5,388 questionnaires. See Table 1 for baseline patient characteristics. Of all patients, 30 (5.1%) also suffered from another autoimmune disease and 353 (60.2%) reported one or more other comorbidities. TNF-alpha inhibitors were the most frequently used biologicals (495 patients, 84.5%), followed by interleukin-6 antagonists (41 patients, 7.0%) and T-cell costimulation modulators. Participants answered a median of five questionnaires (range 1 to 22). One fifth of participants (n = 121, 20.6%) only completed one questionnaire, and more than half (n = 326, 55.6%) stopped participating after the fifth questionnaire (see Fig 1 in S1 Appendix). There were no significant differences in patient characteristics between patients completing only the first questionnaire and patients completing more questionnaires (see Table 4 in S1 Appendix). Together all questionnaires encompass a follow-up duration of 546 patient years, with a median duration of 8.0 months per patient (IQR 2.0-20.0). More than half of the respondents (n = 311, 53.1%) had used their biological for less than three years at inclusion, the median number of months between start of the biological and start of the questionnaires being 30.0 (IQR 13.0-81.0). See Fig 2 in S1 Appendix. Fig 3 in S1 Appendix shows that questionnaires were distributed and filled out around the same time every two months; a cyclical pattern can therefore be observed in the total number of completed questionnaires per calendar week throughout the year.
More than half of the participants (n = 305, 52.0%) reported one or more potential ADRs, with a total of 2,817 potential ADRs reported across 4,015 questionnaires. Of these, 1,950 (69%) were considered non-infectious in nature, and 867 (31%) potential ADRs were considered either possible (446, 15.8%), probable (63, 2.2%) or definite (358, 12.7%) infections. 156 (26.7%) participants reported one or more possible, probable or definite potential infectious ADRs. Overall, most potential ADRs were classified by our team as related to the upper respiratory tract, skin and soft tissue, and systemic symptoms. See Fig 1. More than half of all potential infectious ADRs were labeled as "possible infection" by our team, meaning an infection was unlikely but could not be excluded. For example, nearly all (n = 101, 75.4%) possible skin- and soft tissue-related infections were coded as SOC "General disorders and administration site conditions", making injection site reaction the most likely explanation. We therefore describe probable and definite infections in the main text and possible infections separately in Figs 4 and 5 in S1 Appendix and Table 5 in S1 Appendix.
Overall, 85 (14.5%) patients reported a total of 421 probable and definite infection ADRs, corresponding to 136 unique probable and definite potential infectious ADRs per patient. The median time between start of the biological and a reported probable or definite infection was 18.0 months (IQR 2.0-72.0). The median reported duration of probable and definite infections was 31.0 days (IQR 13.0-64.0). Overall, these potential ADRs were assigned a median impact score of 3.0 (IQR 2.0-4.0) by participants. See Table 2. Table 6 in S1 Appendix shows all potential ADRs subdivided according to organ system and experts' infection label. See Fig 6 in S1 Appendix for an analysis of probable and definite upper respiratory tract infections throughout different seasons. There was high variability in the duration of potential ADRs: most patients reported relatively short durations; however, exceedingly long durations were incidentally reported. Overall, we classified most definite potential infectious ADRs as being bacterial in nature (Fig 7 in S1 Appendix).
Fig 2 shows the potential ADRs' impact scores per questionnaire for probable and definite infections across various organ systems. Infections in skin and soft tissue and the urinary tract had comparatively high impact scores. Impact scores and the number of questionnaires in which infections were sequentially reported varied substantially depending on the affected organ system. Upper respiratory tract infections, in particular, had a high rate of recurrence or persistence: more than half (n = 15) of the patients who reported an upper respiratory tract infection at least once reported it in up to three subsequent questionnaires (six months). When separating probable and definite upper respiratory tract infections into individual potential ADR-PTs (see Fig 8 in S1 Appendix), nasopharyngitis accounts for the majority of potential ADRs, followed by sinusitis. Contrary to upper respiratory tract infections, urinary tract infections were not reported for more than three consecutive questionnaires, indicating that these infections may not recur as frequently. Patients with skin and soft tissue infections, lower respiratory tract infections and urinary tract infections reported a relatively high initial burden that subsided relatively quickly over time, as opposed to upper respiratory tract infections (Fig 2). Patients using biologicals are specifically instructed to contact their HCP in case of an infection. Out of all probable and definite potential infectious ADRs, 207 (49.3%) were followed by contact with an HCP. The rate of consultation was higher for lower respiratory tract infections (79.4%), eye infections (72.2%) and urinary tract infections (70.4%). Conversely, an HCP was consulted only 32.6% of the time for upper respiratory tract infections, and 41.8% of skin- and soft tissue infections were followed by contact with an HCP. When subdividing probable and definite skin- and soft tissue infections into individual potential infectious ADRs, 29 (36.7%) were coded as "skin infection", not otherwise specified. At the same time, this potential ADR had a low rate of HCP consultation (see Fig 9 in S1 Appendix). Overall, the general practitioner was the most frequently consulted HCP, with a total of 122 visits. A medical specialist was consulted 99 times. For details, see Table 7 in S1 Appendix.
There were eight hospitalizations in seven patients due to definite or probable infections, corresponding to an incidence of 1.3 hospitalized patients per 100 patient years of follow-up. Three patients reported hospitalization for pneumonia, one for a fungal lower respiratory tract infection, one for an upper respiratory tract infection (which was, however, most likely a hospitalization for a tonsillectomy to resolve recurrent throat infections), one for erysipelas, and one for oral herpes. Potential ADRs that led to hospitalization had a high median impact score of 5.0 (IQR 4.50-5.00).
Discussion
In this study, we evaluated the occurrence and self-reported burden of non-serious infections as reported by RA patients using biologicals. Because it was sometimes uncertain to what extent reported potential ADRs were infections, each potential ADR was rated by four physicians specialized in infectious diseases or in pharmacovigilance and general practice as a definite, probable or possible infection or as a potential non-infectious ADR. We found that 14.5% of patients reported some kind of likely (i.e., definite or probable) infection at least once during the course of their treatment; only 1.2% of patients reported the occurrence of a serious infection. Non-serious infections had a median impact score of 3.0 (IQR 2.0-4.0), which remained stable over time for upper respiratory tract infections during several months of follow-up.
In our cohort of 586 patients, only seven (1.2%) patients experienced an infection leading to hospitalization, corresponding to 1.3 serious infections per 100 patient years of follow-up. However, 85 (14.5%) patients reported a potential ADR that was considered a probable or definite infection by our team (logically, most of them being non-serious), corresponding to an incidence rate of 15.6 infected patients per 100 patient years and an event rate of 77.1 events per 100 patient years of follow-up. Non-serious infections have not been the focus of most literature on biological safety to date. Estimates of the incidence of non-serious infections in RA patients treated with biologicals have been highly variable (ranging anywhere between 13 and 147 infectious events per 100 patient years of follow-up) [37,38]. This high variability is likely the effect of heterogeneity, bias and incomplete reporting that is inherent to harm-reporting in RCTs [39,40], aggravated by the absence of a standardized definition of non-serious infection [37-42]. The incidences we found in this study are similar to those in RCTs and registry studies, which have a similar follow-up frequency (once every two or three months). The literature on serious infections, on the other hand, has been more consistent, reporting event rates of 2 to 10 events per 100 patient years of follow-up [43,44]. As opposed to non-serious infections, there is a standardized definition of a serious infection, and any occurrence is consistently registered and reported. It should be noted that the event rate for serious infections we found using patient-generated data falls within the previously reported range.
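As a quick check, these rates follow directly from the reported counts and the 546 patient years of follow-up; the short calculation below reproduces them (small differences are rounding).

```python
follow_up_py = 546           # total follow-up, in patient years

infected_patients = 85       # patients with >= 1 probable/definite infection
infection_events = 421       # probable/definite infection events
hospitalized_patients = 7    # patients hospitalized for an infection

print(round(100 * infected_patients / follow_up_py, 1))      # 15.6 per 100 patient years
print(round(100 * infection_events / follow_up_py, 1))       # 77.1 per 100 patient years
print(round(100 * hospitalized_patients / follow_up_py, 1))  # 1.3 per 100 patient years
```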
Infections have previously been described as especially burdensome by patients receiving biological therapies and are a frequent cause of treatment discontinuations [45,46]. In line with previous literature, the majority of these infections are non-serious in nature and are mostly related to the upper respiratory tract [9,14,47]. As for the non-serious infections in our dataset, upper respiratory tract infections were reported by the highest number of patients and in the highest number of sequential questionnaires, indicating a high rate of either recurrence or persistence of such infections. Upper respiratory tract infections were attributed an initial median impact score of 3 (IQR 2.0-3.0) by patients, which remained relatively stable over several months of follow-up. The high incidence of upper respiratory tract infections is in line with existing literature [9,14]. Stjärne et al. have previously established that acute rhinosinusitis, though usually self-limiting and with a very low rate of complications, in general reduces a patient's quality of life [48]. Patients with chronic rhinosinusitis with nasal polyps also experience this as a high burden [49]. Rhinosinusitis and other upper respiratory tract infections come at considerable socioeconomic cost and work absenteeism [19,21,50-53], lead to an increase in (often unnecessary) antibiotic use and may therefore stimulate bacterial resistance [54,55]. In our dataset, only 32% of participants sought help from an HCP for upper respiratory tract infections, meaning a large proportion of these infections likely go unnoticed in regular practice, and they may be underreported in the literature to date. Considering the high burden that is attributed to these infections, and their high rate of recurrence or persistence, it is of paramount importance to create more awareness surrounding this phenomenon, as the information that is currently available is likely only the tip of the iceberg.
Lower respiratory tract infections, skin and soft tissue infections and urinary tract infections were rated with higher impact scores by RA patients than upper respiratory tract infections, but, interestingly, impact scores dropped after the first questionnaire in which they were reported. A possible explanation may be that these infections are taken more seriously and therefore more readily treated. These infections also had a markedly shorter reported duration than upper respiratory tract infections. Prior literature shows that recurrent urinary tract infections have a negative effect on patients' experienced quality of life and also lead to significant direct and indirect economic costs [56-59]. Not surprisingly, infections leading to hospitalization (four of which were lower respiratory tract infections) were experienced as very burdensome (median impact score 5.0, IQR 4.5-5.0).
This study has several limitations. Firstly, the frequency of the questionnaires (bimonthly) is not ideal for the purpose of evaluating potential infectious ADRs, as most infections have a shorter duration and can be forgotten in the course of two months, leading to recall bias. As the data were provided by a third party, no changes could be made to the methodology of this study. Participants self-reported exceedingly long durations of infections, which is most likely also the result of recall bias or a different interpretation of recurrence versus persistence of infection. As illustrated in Table 2, some patients may interpret recurrent infections as a single infection that is "always present", while others may view them as multiple infections of shorter duration. Alternatively, upper respiratory tract infections may be followed by an exacerbation of chronic obstructive pulmonary disease (COPD) or asthma (13.1% of participants having reported a comorbid pulmonary condition), the symptoms of which may have been mistakenly identified as an upper respiratory tract infection. Due to the absence of detailed information and the previously mentioned factors, it was impossible to discern whether an infection was recurrent or chronic. Lastly, data were obtained using patient-reported outcomes. While patient-reported outcomes may provide valuable additional information on ADRs, they are prone to bias and underestimation, as patients often fail to recognize the causal relationship with their therapy or report (multiple) symptoms instead of an infectious disease diagnosis [26]. The same applies to the potential infectious ADRs in this database. This is illustrated by participants in this study reporting skin infections that were not otherwise specified but did not require any medical care, meaning patients may not be able to correctly identify a skin infection. Working with patient-reported data also meant having to exclude several patient-provided start and stop dates of potential ADRs that were improbable or impossible (see Table 3 in S1 Appendix). Because of the high attrition rate, stop dates of ADRs were often missing, as patients left the study while the potential ADR persisted or before a stop date was registered. Nevertheless, patient-reported potential ADRs are of great additional value to pharmacovigilance, as patients generally report different aspects of ADRs than HCPs would [24]. ADR burden can only be reported by patients themselves. Limited data have been obtained so far regarding the impact of infections on a patient's quality of life during biological therapy, which is where patient-reported outcomes using an impact score provide high additional value. Furthermore, patient-reported outcomes provide a unique point of view: that of the patient, and not the HCP. This information is of added value for general practitioners and medical specialists alike, as a large proportion of patients will contact their HCP when they have an infection.
This study provides an overview of the patient-experienced burden of non-serious infections during biological treatment in RA. Though not life-threatening, it is clear that a significant proportion of patients (14.5%) suffers from an infection at some point during their treatment, and our data show that even such non-serious infections have a significant impact on patients' quality of life, with a median impact score of 3.0 (on a five-point scale). Creating more awareness of the affected organ systems and the burden of non-serious infectious ADRs may enable HCPs to treat and perhaps even prevent them in a timely manner, lessening both the associated personal and socioeconomic burden. Future research should focus more on the occurrence and burden of non-serious infections and include multiple methods of assessing impact on quality of life.
Fig 1. Affected organ system classes. NOS = not otherwise specified. Distribution of all infection-related potential ADRs across the various organ systems. Infection labels are shown for each organ system. One patient could contribute multiple potential ADRs.
Fig 2. Impact scores per questionnaire for probable and definite infections in various organ systems. LRT = lower respiratory tract; URT = upper respiratory tract. Data are presented as patients reporting a probable or definite infection once or multiple times (x-axis) and the patient-assigned impact score (y-axis). The numbers in red represent the number of patients that reported an infection. The questionnaire in which a potential ADR was reported for the first time is indicated as "1" on the x-axis, the questionnaire in which a potential ADR in the same organ system was reported for the second time by that patient is indicated as "2" on the x-axis, and so on. In some organ systems, e.g., systemic infections, a single patient continues to report a potential ADR in that organ system for an extended period of time.
Table 1. (Continued) a An individual patient could have no, one, or multiple comorbidities and/or comedications.
"year": 2024,
"sha1": "64650822809db14bff2d0b7ddc608134b71b6f90",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "64650822809db14bff2d0b7ddc608134b71b6f90",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
An Anatomical Study of the Nutrient Foramina of the Human Humeral Diaphysis
Background: Understanding the nutrient foramina is critical to clinical practice. An insult to the nutrient foramina, caused by trauma and/or surgical dissection, can lead to devascularization and poor outcomes. Few studies have looked at the humerus, and none have related the humeral nutrient foramina to anatomical structures that can be located by palpable landmarks. In this study, we analyzed the anatomical features of the nutrient foramina of the diaphyseal humerus and discuss their clinical relevance. Material/Methods: We dissected 19 cadavers and analyzed the relative positions of the foramina and surrounding muscles, as well as the number, direction, diameter, and location of the nutrient foramina. A foramina index and a new landmark index were used to describe the location. We compared the data from both sides, and the relationships between transverse and longitudinal locations, diameter and total length, and the foramina and landmark indices were also analyzed. Results: The humeri had one or two main nutrient foramina located in a small area between the coracobrachialis and brachial muscles and oriented toward the elbow. The mean diameter was 1.11±0.32 mm. The mean index and landmark index were 43.76±4.94% and 42.26±5.35%, respectively. There were no differences between sides in terms of diameter, length, or nutrient foramina index. There were no significant correlations between transverse and longitudinal locations or diameter and total length. The foramina index and landmark index showed a strong positive correlation (r=0.994, p<0.0001). Conclusions: Our study provides details about the nutrient foramina that will benefit clinicians who treat injuries and diseases of the humerus. Surgeons should be mindful of soft tissue in the foraminal area during surgical procedures.
In the humerus, 90% of the blood supply to the diaphyseal cortical bone is supplied from the nutrient artery [8]. Menck et al. reported that the humerus is usually supplied by a single nutrient artery entering the nutrient foramen just below its midpoint [12]. Unfortunately, a significant proportion of humeral fractures are located in this area and will likely destroy the main nutrient artery [13,14]. Clinicians should be aware that fractures passing through the foraminal area are likely to heal slowly or not at all [4,14,15].
Fractures of the humeral shaft account for approximately 3% of all fractures [16,17]. With advancements in bone fixation techniques and increasing pressure from patients, humeral shaft fractures are increasingly being treated surgically, which is associated with high costs and risks of complications [14,17]. Inappropriate therapy or poor surgical technique can impair the foraminal area and nutrient artery, and therefore interfere with fracture union [6,10]. Nonunion occurs 15-30% of the time, depending on the treatment [18][19][20], leading to substantial additional cost [14,21]. If surgeons were able to avoid the bone area containing the nutrient foramen during surgeries, improved management and outcomes would likely be realized [15].
Many scholars have studied the nutrient foramina of long bones [1,2,4,5,13,15,[22][23][24][25][26][27]. Most of these studies were performed many years ago, and mainly focused on the number, location, and direction of the nutrient foramina. Few studies were specific to the humerus, and study findings were limited to anatomical descriptions and often differed from one another [1,15]. In addition, while anatomical structures can be identified by palpable landmarks in clinical practice, a palpable landmark for the nutrient foramina has not been described in the literature.
In this study, we systematically observed the anatomical features of nutrient foramina in humeral diaphysis. Based on these findings, we also provide a conclusive descriptive interpretation of previously published studies, which indicate that each humerus has one or two main nutrient arteries and several accessory arteries. Our study also provides novel data, including the diameter and symmetry analysis of the nutrient foramen. We also provide observations regarding the relative positions of the nutrient foramen and the surrounding muscles. Most importantly, we introduce a novel landmark index that will help clinicians to locate the nutrient foramen by palpation.
Material and Methods
The Ethics Committee of Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University approved the study protocol. Nineteen adult Chinese cadavers (10 males and 9 females) were separately collected. The cadaver donors were free of any history of upper limb trauma or vascular or hemorrhagic diseases.
Our study was guided by findings from previous studies that showed the majority of the foramina were observed in the anteromedial portion of the mid-distal diaphysis [1,2,4,5,13,15,22,23]. The foramina were first exposed by careful dissection to determine the relationship between the foramina and the surrounding muscles (Figure 1).
Next, the soft tissues and periosteum were removed. As Laing [1] previously observed, the accessory nutrient arteries entered the posterior surface in the spiral groove, and these vessels were all small, with no nutrient foramina visible on the bone surface. Additionally, because our study aimed to benefit surgical outcomes, only macroscopic foramina of the diaphysis were included. All bone surfaces were systematically examined macroscopically so that small foramina would not be overlooked.
The nutrient foramina were identified by the presence of distal grooves and the canals, which were raised above the surface of the bone (Figure 1). In ambiguous cases, we passed a fine wire through the foramen to confirm that it did indeed enter the medullary cavity. For bones with more than one foramen, all foramina in that bone were recorded.
For each limb, the number, direction, diameter, and location of the nutrient foramina were recorded. The anatomic surface bearing the foramen was also noted. Foramina within 1 mm of the anterior or medial border were considered to be on that border. The diameters of the nutrient foramina were measured using a sliding caliper that was accurate to 0.01 mm (Figure 2).
The transverse distribution of the foramina was recorded relative to the medial border. The longitudinal location of a foramen was determined by measuring its distance from both fixed points and apices at the proximal and distal ends of the bone; these measurements were then expressed as percentages of the palpable and maximal lengths. Measurements were made using a divider that was read on a scale graduated in millimeters.
While it is impossible to make perfectly precise measurements, all measurements were performed by one author using a standardized process to avoid inter-observer variability to the greatest extent possible.
Hughes introduced a formula to calculate an index (I) describing the position of the nutrient foramina relative to the proximal end of the bone [28]. To provide more practical information for clinical use in surgery, we modified the formula to create a landmark index (I') in this study. In clinical practice, especially in surgery, many anatomical structures can be located by palpation of landmarks on the body surface. Furthermore, because most of the foramina were observed in the anteromedial diaphyseal humerus, we selected the medial epicondyle and the greater tuberosity as two fixed points from which to calculate the landmark index; of these, the epicondyle is more easily palpable than the greater tuberosity. We calculated both indices from the distal end.
The formulas are expressed as I = DF/TL × 100, where I is the foramina index, DF is the distance from the distal end of the bone to the nutrient foramen, and TL is the total length from apex to apex; and I' = CF/LL × 100, where I' is the landmark index, CF is the distance between the medial epicondyle and the nutrient foramen, and LL is the distance between the medial epicondyle and the greater tuberosity (Figure 3).
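The two indices translate directly into code; the sketch below computes both from measured distances. The example values are invented for illustration and merely chosen to be consistent with the reported means.

```python
def foramina_index(df_mm: float, tl_mm: float) -> float:
    """I = DF/TL x 100: distance from the distal end of the bone to the
    nutrient foramen (DF) as a percentage of the total length (TL)."""
    return df_mm / tl_mm * 100

def landmark_index(cf_mm: float, ll_mm: float) -> float:
    """I' = CF/LL x 100: distance from the medial epicondyle to the foramen (CF)
    as a percentage of the epicondyle-to-greater-tuberosity distance (LL)."""
    return cf_mm / ll_mm * 100

# Hypothetical measurements (mm), chosen to match the reported mean indices
print(round(foramina_index(133.5, 305.0), 1))  # 43.8 (mean I reported: 43.76%)
print(round(landmark_index(118.0, 279.0), 1))  # 42.3 (mean I' reported: 42.26%)
```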
The bones were photographed with a digital camera. Data were analyzed with Pearson's correlation coefficient and the paired t-test using SPSS software (Statistical Package for the Social Sciences-SPSS Inc. v20, Chicago, IL, USA).
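Although the analysis was performed in SPSS, the same tests are available elsewhere; as an illustration only, the sketch below shows equivalent SciPy calls on made-up paired measurements.

```python
from scipy import stats

# Invented paired foramen diameters (mm) for left and right humeri of the same cadavers
left = [1.05, 0.98, 1.22, 1.31, 0.87, 1.15]
right = [1.10, 1.02, 1.18, 1.25, 0.90, 1.20]

r, p_corr = stats.pearsonr(left, right)      # Pearson's correlation coefficient
t, p_paired = stats.ttest_rel(left, right)   # paired t-test for side differences

print(f"r = {r:.3f} (p = {p_corr:.3f}); paired t = {t:.3f} (p = {p_paired:.3f})")
```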
Results
In all limbs but one, the nutrient foramina were consistently found between the insertion of the coracobrachial muscle and the origin of the brachial muscle anterior and inferior to the coracobrachialis (Figure 1).
The data are displayed in Table 1. A total of 42 nutrient foramina were found in 38 humeri. Thirty-two (84.21%) humeri had a single nutrient foramen. Double foramina were observed in five (13.16%) humeri, while the foramen was absent in one (2.63%) humerus (Figures 4, 5). All nutrient foramina entered the diaphysis obliquely and were oriented distally in the direction of the elbow (Figure 1). The mean foramen diameter was 1.11±0.32 mm (range 0.42-1.78 mm). All foramina were found on the surface from the medial to the anterior border. To illustrate the transverse distribution, we calculated the ratio of the distance from the medial border to the nutrient foramen to the distance from the foramen to the anterior border; as shown in Figure 6, the foramina showed a highly significant tendency in their transverse distribution.
Figure 3 legend: TL, total length of the bone; DF, distance from the distal end of the bone to the nutrient foramen; CF, distance between the medial epicondyle and the nutrient foramen; LL, distance between the medial epicondyle and the greater tuberosity; D-C, distance from the distal end of the bone to the medial epicondyle; C-F, distance from the epicondyle to the nutrient foramen; F-T, distance from the nutrient foramen to the greater tuberosity.
Correlations between the transverse and longitudinal distributions, diameter and total length, and the foramina and landmark indices were analyzed using Pearson's correlation coefficient. There was no significant correlation between the transverse and longitudinal distribution (r=-0.38, p=0.809) (Figure 8).
Similarly, there was no correlation between the foramina diameter and the total humerus length (r=0.094, p=0.552) (Figure 9).
In contrast, a strong correlation was observed between the two indices (r=0.994, p<0.0001) (Figure 10).
The availability of full cadavers allowed comparison of data between both sides of the body. The statistical data for the left and right sides are presented in Table 2. Paired t-tests were performed for diameter, length, and nutrient foramina index. Specimens with absent or two foramina were excluded. No significant differences were observed between the left and right sides for diameter, length, and nutrient foramina index (p values: 0.713, 0.431, and 0.278, respectively).
Discussion
The arrangement of the diaphyseal nutrient foramina in the long bones usually follows a defined pattern in which the foramina are located on the flexor surface of the bones (anterior in the upper limbs and posterior in the lower) [15,23]. Dissection revealed that the main blood supply to the shaft of the humerus enters through a restricted surface area on the anteromedial aspect of the distal half of the shaft. This finding was consistent with most previously reported studies [1,2,4,5,15,23].
Among these studies, only Carroll and Forriol investigated the relationship between nutrient foramina and the surrounding muscles. Carroll measured the distances from the foramen to the apex of the deltoid insertion [15]. Forriol found that the location of the nutrient foramina was below the insertion of the coracobrachialis muscles [4]. Because the main nutrient arteries enter the humerus medially, it is appropriate to observe the relative locations between the nutrient foramina and the medial muscles. Our findings were consistent with those of Forriol. We believe this information will assist surgeons in locating the nutrient foramina during surgery, thereby preserving the circulation in the region. Kizilkanat suggested a direct relationship between the position of the nutrient foramina and a continuous blood supply because the foramina were always located near major muscle attachments [2]. This may also explain the location of the nutrient foramina in the diaphyseal humeri.
The observation that the majority of the humeri had a single nutrient foramen is consistent with most studies, including those conducted with different races [1,2,4,5,13,15,23]. As we observed, some authors also reported a small number of humeri with no foramina [5,[22][23][24]. Nutrient arteries divide into ascending and descending arteries after entering the cortex of the bone [10]. In the humerus, this division may take place outside the cortex, with each branch having its own canal and nutrient foramen [1]. This could explain the humeri with two foramina that were observed by our team and by other researchers. In Mysorekar's study, 42% of the specimens (from Hindu patients) had more than one nutrient foramen, and 19% of the foramina were found in the spiral groove [22]. Because the other two authors from India reported conclusions similar to those of most studies, we rejected the idea that the differences observed could be attributed to race; instead, we surmised that Mysorekar might have noted the foramina of both the main and accessory nutrient arteries on the basis of Laing's definition [1]. Laing and Forriol reported that the main nutrient foramina were always found on the anteromedial surface of the bone [1,4]. Laing also stated that one or several accessory arteries of the humerus arise from the profunda brachii and enter the posterior surface in the spiral groove [1]. This can explain the humeri that were observed to have more than two foramina or foramina on the posterior surface. The accessory nutrient arteries varied in number, and their foramina were too small to identify with the naked eye [1,4]. Therefore, the main nutrient foramina are more clinically meaningful during surgery.
Previous studies have focused largely on the direction and orientation of the nutrient foramina. Some authors have proposed theories to account for the generally consistent direction of the nutrient foramina as well as the anomalously directed ones. Among these, the "vascular theory" proposed by Hughes and favored by most authors offers the best explanation for both the normal nutrient foramina and anomalies [11,23,24,27,28]. Hughes stated that the foramina were directed away from the growing end, which was the proximal end in the case of the humerus, and anomalous foramina are frequently observed in the femur but rarely occur in the radius and other bones. In his article, Hughes also noted that anomalous foramina were extremely rare in the human femur but were common in other species [28]. In the present study, we observed that the foramina were consistently directed toward the elbow. Previous authors have demonstrated that the obliquity and location of the nutrient foramina are not significantly correlated with the known bone age [22,24], which supports the vascular theory.
The diameter of the nutrient foramina in human long bones has been reported in only a few papers. Because there have been no reference data on the humerus to date, the results reported here are novel data. In some studies, when a bone had more than one foramen, the larger was considered the main foramen [15,22]. Mysorekar reported reciprocity between foraminal sizes in humeri with two foramina [22]. In the studies of Kizilkanat and Longia, on the other hand, some humeri were found to have two nutrient foramina, neither of which was dominant and with no reciprocity observed in their size [2,23]. In our series, we observed one humerus that had two foramina with the same diameters (Specimen 10). We also observed no relationship between the foraminal size and their proximal or distal location. Some authors discussed the concept of acquired disposition [15,25]. Carroll observed a significantly greater proportion of large foramina on the right side and attributed this to the increased function of the right arm, which is usually dominant [15]. Sendemir proposed that the difficult living conditions experienced by warriors might play a role in the differences observed between ancient and modern humans after studying the lower limb long bones of 305 unearthed ancient skeletons [25]. We analyzed the data from our sample and found no significant differences between left and right sides (p=0.713). Because all of our specimens were Chinese, this observation may not necessarily be extrapolated to other populations.
According to Patake, the number of foramina is not significantly related to the length of the bone [27]. In our series, the mean total bone length was 305.12±16.29 mm. We analyzed the relationship between foramen size and humerus length and found no correlation (r=0.094, p=0.552). This suggests that clinicians cannot estimate the size of nutrient arteries by their patients' body size.
There is no currently available method for comparing data from different studies other than the foramina index [25,28]. Because this is a theoretical parameter that cannot be applied to clinical practice, we introduced the landmark index. The epicondyles are more prominent than the proximal landmarks, and the medial epicondyle is on the same border as the nutrient foramina; therefore, we modified the indices by calculating them from the distal end. One method based on specific landmarks has been applied by Carroll, who measured the distance between the foramen and the medial epicondyle. However, he reported these in the form of an absolute distance, which could be easily affected by differences in the total length of the humerus [15]. In our study, the foramina index was similar to those reported in previous studies after conversion. We also found a strong correlation between the two indices, with a correlation coefficient of 0.994 (p<0.0001).
"year": 2016,
"sha1": "14facb87f51088391b214795ce3cefde51cce032",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc4917311?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "14facb87f51088391b214795ce3cefde51cce032",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
History and progress of hypotheses and clinical trials for Alzheimer's disease
Alzheimer's disease (AD) is a neurodegenerative disease characterized by progressive memory loss along with neuropsychiatric symptoms and a decline in activities of daily living. Its main pathological features are cerebral atrophy, amyloid plaques, and neurofibrillary tangles in the brains of patients. There are various descriptive hypotheses regarding the causes of AD, including the cholinergic hypothesis, amyloid hypothesis, tau propagation hypothesis, mitochondrial cascade hypothesis, calcium homeostasis hypothesis, neurovascular hypothesis, inflammatory hypothesis, metal ion hypothesis, and lymphatic system hypothesis. However, the ultimate etiology of AD remains obscure. In this review, we discuss the main hypotheses of AD and related clinical trials. A wealth of puzzles and lessons has made it possible to develop explanatory theories and identify potential strategies for therapeutic interventions for AD. The combination of hypometabolism and autophagy deficiency is likely to be a causative factor for AD. We further propose that fluoxetine, a selective serotonin reuptake inhibitor, has the potential to treat AD.
INTRODUCTION
Alzheimer's disease (AD) is an irreversible progressive neurological disorder that is characterized by memory loss, the retardation of thinking and reasoning, and changes in personality and behaviors. 1,2 AD seriously endangers the physical and mental health of the elderly. Aging is the biggest risk factor for the disease, the incidence of which doubles every 5 years after the age of 65. 3 Approximately 40 million people over the age of 60 worldwide suffer from AD, and the number of patients is increasing, doubling every 20 years. [4][5][6][7] In 1906, Alois Alzheimer presented his first signature case and the pathological features of the disease at the 37th convention of Southwestern German Psychiatrists. Later, in 1910, his coworker Emil Kraepelin named the disease in honor of his achievements. In the following years (from 1910 to 1963), researchers and physicians did not pay much attention to the disease until Robert Terry and Michael Kidd revived interest by performing electron microscopy of neuropathological lesions in 1963. Electron microscopy analysis showed that neurofibrillary tangles (NFTs) were present in brain biopsies from two patients with advanced AD. 8,9 Since then, studies on the pathological features and mechanisms of AD and drug treatments for the disease have been conducted for more than half a century (from 1963 to present). 10 Clinically, AD is divided into sporadic AD (SAD) and familial AD (FD). FD accounts for 1-5% of all AD cases. [11][12][13][14][15] In the early 1990s, linkage analyses of early-onset FD determined that mutations in three genes, namely, amyloid-beta A4 precursor protein (APP), presenilin 1 (PSEN1), and presenilin 2 (PSEN2), are involved in FD. PSEN1 mutations account for ~81% of FD cases, APP accounts for 14%, and PSEN2 accounts for ~6%. 11 In addition to these three genes (APP, PSEN1, and PSEN2), more than 20 genetic risk loci for AD have been identified. 16,17 The strongest genetic risk factor for AD is the ε4 allele of apolipoprotein E (APOE). [18][19][20][21] APOE is a class of proteins involved in lipid metabolism and is immunochemically colocalized to senile plaques, vascular amyloid deposits, and NFTs in AD. The APOE gene is located on chromosome 19q13.2 and is associated with late-onset FD. The APOE gene has three alleles, namely, ε2, ε3, and ε4, with frequencies of 8.4%, 77.9%, and 13.7%, respectively. The differences in APOE2 (Cys112, Cys158), APOE3 (Cys112, Arg158), and APOE4 (Arg112, Arg158) are limited to amino acid residues 112 and 158. [22][23][24][25] Analyses of the frequencies of these APOE alleles among human populations have revealed that there is a significant association between APOE4 and late-onset FD (with an ε4 allele frequency of ~40% in AD), suggesting that APOE4 may be an important susceptibility factor for the etiopathology of AD. [25][26][27] Moreover, APOE4 can increase the neurotoxicity of β-amyloid (Aβ) and promote filament formation. 28 The APOE4 genotype influences the timing and amount of amyloid deposition in the human brain. 29 Reelin signaling protects synapses against toxic Aβ through APOE receptors, which suggests that APOE is a potential target for AD therapy. 30 The incidence of SAD accounts for more than 95% of all AD cases. Therefore, in this review, we focus our attention on recent SAD research and clinical trials.
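As a back-of-the-envelope illustration of the strength of this association, an allele-level odds ratio can be computed from the frequencies quoted above (13.7% ε4 in the general population versus ~40% in AD); this calculation is ours, for illustration only, and is not taken from the cited studies.

```python
# Allele-level odds ratio for APOE epsilon-4, using the frequencies in the text
freq_ad = 0.40        # approximate epsilon-4 allele frequency reported in AD
freq_general = 0.137  # epsilon-4 allele frequency in the general population

odds_ad = freq_ad / (1 - freq_ad)
odds_general = freq_general / (1 - freq_general)
print(round(odds_ad / odds_general, 1))  # ~4.2-fold higher odds of carrying epsilon-4
```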
There are various descriptive hypotheses regarding the causes of SAD, including the cholinergic hypothesis, 31 amyloid hypothesis, 32,33 tau propagation hypothesis, 34 mitochondrial cascade hypothesis, 35 calcium homeostasis hypothesis, 36 inflammatory hypothesis, 37 neurovascular hypothesis, 38 metal ion hypothesis, 39 and lymphatic system hypothesis. 40 In addition, there are many other factors that increase the risk for SAD, including family history, 41 midlife hypertension, 42 sleep disorders, 43 midlife obesity, 44 and oxidative stress. 45,46 Interestingly, according to the latest evaluation of single-nucleotide polymorphisms (SNPs), Mukherjee et al. found 33 SNPs associated with AD and assigned people to six cognitively defined subgroups. 47
A mutant (APP751) transgenic model supported the amyloid hypothesis and further contributed to shifting it from a descriptive to a mechanistic hypothesis. 68,69 Positron emission tomography (PET) imaging studies have suggested that ~30% of clinically normal older individuals have signs of Aβ accumulation. [70][71][72][73] Aβ was first isolated by Glenner and Wong in 1984. 74 Aβ may provide a strategy for diagnostic testing for AD and for understanding its pathogenesis. 74 APP was first cloned and sequenced in 1987; APP consists of 695 amino acid residues and is a glycosylated receptor located on the cell surface. 75,76 Aβ is composed of 39-43 residues derived from multiple proteolytic cleavages of APP. APP is cleaved in two ways (Fig. 2). The first method is through the α pathway: APP is hydrolyzed by α-secretase and then by γ-secretase, and this process does not produce insoluble Aβ. The second method is through the β pathway: APP is hydrolyzed by β-secretase (BACE1) and then by γ-secretase to produce insoluble Aβ. Under normal conditions, the Aβ protein is not produced, since APP hydrolysis mainly proceeds via the α pathway. A small amount of APP is hydrolyzed via the second method, and the Aβ produced is eliminated by the immune system. However, when certain mutations, such as the Lys670Asn/Met671Leu (Swedish) and Ala673Val mutations near the BACE1 cleavage site, are present, 77,78 APP is prone to hydrolysis via the β pathway, resulting in an excessive accumulation of insoluble Aβ and eventually the development of AD. 79,80 However, the Ala673Thr mutation has been suggested to be protective. 81 High concentrations of Aβ protein are neurotoxic to mature neurons because they cause dendritic and axonal atrophy followed by neuronal death. 82 The levels of insoluble Aβ are correlated with the decline of cognition. 83 In addition, Aβ inhibits hippocampal long-term potentiation (LTP) in vivo. 84 Neurofibrillary degeneration is enhanced in tau and APP mutant transgenic mice. 85 Transgenic mice that highly express human APP in the brain exhibit spontaneous seizures, which may be due to enhanced synaptic GABAergic inhibition and deficits in synaptic plasticity. 86 Individuals with Aβ are prone to cognitive decline [87][88][89] and symptomatic AD phenotypes. 90,91 The current strategies for AD treatment based on the Aβ hypothesis are mainly divided into the following categories: β- and γ-secretase inhibitors, which are used to inhibit Aβ production; antiaggregation drugs (including metal chelators), which are used to inhibit Aβ aggregation; protease activity-regulating drugs, which are used to clear Aβ; and immunotherapy. 92 We will discuss recent progress regarding immunotherapy and BACE1 inhibitors.
Aβ-targeting monoclonal antibodies (mAbs) are the major passive immunotherapy treatments for AD. For example, solanezumab (Eli Lilly), which can bind monomeric and soluble Aβ, failed to show curative effects in AD patients in phase III, although solanezumab effectively reduced free plasma Aβ concentrations by more than 90%. 93 Gantenerumab (Roche/Genentech) is a mAb that binds oligomeric and fibrillar Aβ and can activate the microglia-mediated phagocytic clearance of plaques. However, it also failed in phase III. 94 Crenezumab (Roche/Genentech/AC Immune) is a mAb that can bind to various Aβ, including monomers, oligomers, and fibrils. On January 30, 2019, Roche announced the termination of two phase III trials of crenezumab in AD patients. Aducanumab (Biogen Idec) is a mAb that targets aggregated forms of Aβ. Although aducanumab can significantly reduce Aβ deposition, Biogen and Eisai announced the discontinuation of trials of aducanumab on March 21, 2019. Together, the failure of these trials strongly suggests that it is better to treat Aβ deposits as a pathological feature rather than as part of a major mechanistic hypothesis.
BACE1 inhibitors aim to reduce Aβ and have been tested for years. However, no BACE1 inhibitors have passed clinical trials. Verubecestat (MK-8931, Merck & Co.) reduced Aβ levels by up to 90% in the cerebrospinal fluid (CSF) in AD. However, Merck no longer listed verubecestat in its research pipeline, since verubecestat did not improve cognitive decline in AD patients and was associated with unfavorable side effects. 95 Lanabecestat can lower CSF Aβ levels by up to 75%. However, on June 12, 2018, phase II/III trials of lanabecestat were discontinued due to a lack of efficacy. The BACE1 inhibitor atabecestat (JNJ-54861911, Janssen) induced a robust reduction in Aβ levels by up to 95% in a phase I trial. However, Janssen announced the discontinuation of this program on May 17, 2018. The latest news regarding the BACE inhibitor umibecestat (Novartis/Amgen) was released on July 11, 2019; it was announced that the evaluation of umibecestat was discontinued in phase II/III trials since an assessment demonstrated a worsening of cognitive function. Elenbecestat (E2609, Eisai) is another BACE1 inhibitor that can reduce CSF Aβ levels by up to 80% 96,97 and is now in phase III trials (shown in Table 2). Although all BACE1 inhibitors seem to reduce CSF Aβ levels, the failure of trials of solanezumab, which can reduce free plasma Aβ concentrations by more than 90%, 93 may be sufficient to lead us to pessimistic expectations, especially considering that the treatment worsened cognition and induced side effects.
Tau propagation hypothesis
Intracellular tau-containing NFTs are an important pathological feature of AD. 98,99 NFTs are mainly formed by the aggregation of paired helical filaments (Fig. 2). Pathological NFTs are mainly composed of tau proteins, which are hyperphosphorylated. [100][101][102][103] Tau proteins belong to a family of microtubule-binding proteins and are heterogeneous in molecular weight. A main function of tau is to stabilize microtubules, which is particularly important for neurons, since microtubules serve as highways for transporting cargo in dendrites and axons. 34,104 Tau cDNA, which encodes a protein of 352 residues, was cloned and sequenced in 1988. RNA blot analysis has identified two major transcripts that are 6 and 2 kilobases long and are widely distributed in the brain. 105,106 The alternative splicing of exons 2, 3, and 10 of the tau gene produces six tau isoforms in humans; the differential splicing of exon 10 leads to tau species that contain various microtubule-binding carboxyl terminals with repeats of three arginines (3R) or four arginines (4R). 107,108 An equimolar ratio of 3R and 4R may be important for preventing tau from forming aggregates. 109 The tau propagation hypothesis was introduced in 2009. 34 The pathology of tau usually first appears in discrete and specific areas and later spreads to more regions of the brain. Aggregates of fibrillar and misfolded tau may propagate in a prion-like way through cells, eventually spreading through the brains of AD patients (Fig. 2). Clavaguera et al. demonstrated that tau can act as an endopathogen in vivo and in culture studies in vitro with a tau fragment. 104 In their study, brain extracts isolated from P301S tau transgenic mice 110 were injected into the brains (the hippocampus and cortical areas) of young ALZ17 mice, a tau transgenic mouse line that only develops late tau pathology. 111 After the injection, the ALZ17 mice developed tau pathology quickly, whereas brain extracts from wild-type mice or immunodepleted P301S mice, which were used as controls, had no effect. The causes of tau aggregation in sporadic tauopathies are not fully understood. Tau can be phosphorylated at multiple serine and threonine residues (Fig. 2). 112,113 The gain- and loss-of-function of tau phosphorylation may be due to alterations in the activities of kinases or phosphatases that target tau, and thus the toxicity of tau can be augmented as a result. Other posttranslational modifications can decrease tau phosphorylation or enhance the harmful states of tau. For example, serine-threonine modifications by O-glycosylation can reduce the extent of tau phosphorylation.
Fig. 2 legend. Upper: Under normal processing, APP is hydrolyzed by α-secretase and then by γ-secretase, which does not produce insoluble Aβ; under abnormal processing, APP is hydrolyzed by β-secretase (BACE1) and then by γ-secretase, which produces insoluble Aβ. Phase III clinical trials of solanezumab (Eli Lilly), crenezumab (Roche/Genentech/AC Immune), aducanumab (Biogen Idec), and umibecestat (Novartis/Amgen), which target the amyloid hypothesis, have all been terminated thus far. Lower: The tau protein can be hyperphosphorylated at amino acid residues Ser202, Thr205, Ser396, and Ser404 (which are responsible for tubulin binding), thereby leading to the release of tau from microtubules and the destabilization of microtubules. Hyperphosphorylated tau monomers aggregate to form complex oligomers and eventually neurofibrillary tangles, which may cause cell death.
114,115 Thus, tau hyperphosphorylation may partially result from a decrease in tau O-glycosylation. In addition, tau can also be phosphorylated at tyrosine residues, 116 sumoylated and nitrated, 117 but the exact roles of these tau modifications remain elusive. According to the tau propagation hypothesis, abnormally phosphorylated tau proteins depolymerize microtubules and affect signal transmission within and between neurons. 101,103,118 In addition, mutant forms of human tau cause enhanced neurotoxicity in Drosophila melanogaster. 119 There may be crosstalk between the tau propagation hypothesis and the amyloid hypothesis. As mentioned earlier, among the risk loci for AD, APOE is the most robust factor for AD pathogenesis. 120 Unlike other isoforms, APOE4 may increase Aβ by decreasing its clearance [121][122][123] and enhancing tau hyperphosphorylation. [124][125][126] GSK3 is one of the upstream factors that jointly regulate Aβ and tau. Increased GSK3 activity leads to the hyperphosphorylation of the tau protein. 126 GSK3 overactivity may also affect the enzymatic processing of APP and thus increase the Aβ level. 127,128 In addition, tau is essential for Aβ-induced neurotoxicity, and dendritic tau can mediate Aβ-induced synaptic toxicity and circuit abnormalities. 129 Moreover, APP and tau act together to regulate iron homeostasis. APP can interact with ferroportin-1 to regulate the efflux of ferrous ions. 130,131 As an intracellular microtubule-associated protein, tau can increase iron output by enhancing the transport of APP to the cell surface. 132 Decreased APP trafficking to the cell surface accounts for iron accumulation in tau knockout neurons. 133,134 As one of the most important hypotheses of AD, the tau propagation hypothesis has a wide range of impacts. Drugs that target the tau protein are divided into the following categories: tau assembly inhibitors, tau kinase inhibitors, O-GlcNAcase inhibitors, microtubule stabilizers, and immunotherapy drugs. 92 Only a few agents have undergone proof-of-principle tests as tau kinase inhibitors, microtubule-stabilizing agents, and inhibitors of heat shock protein 90 (Hsp90), which stabilizes GSK3β. 135,136 In addition, some inhibitors of tau aggregation, such as TRx0237 (TauRx Therapeutics), are in clinical trials. The results of TRx 237-005 phase III clinical trials showed that the agent may be effective as a monotherapy, since the brain atrophy rate of AD patients declined after 9 months of treatment. 137 ACI-35 (AC Immune/Janssen) and AADvac1 (Axon Neuroscience SE) are vaccines that target the hyperphosphorylated tau protein, and the vaccines are still being evaluated in clinical trials 138 (Table 2). Tau-directed therapies will inevitably face challenges similar to those presently encountered in Aβ-targeted trials. Overall, the effectiveness of tau-directed therapies remains to be tested in the future.
Mitochondrial cascade hypothesis and related hypotheses (Fig. 3)
In 2004, Swerdlow and Khan first introduced the mitochondrial cascade hypothesis 35 and stated that mitochondrial function may affect the expression and processing of APP and the accumulation of Aβ in SAD. The hypothesis includes three main parts. First, an individual's baseline mitochondrial function is defined by genetic inheritance. Second, the rate of age-associated mitochondrial changes is determined by inherited and environmental factors; moreover, a decline in mitochondrial function or efficiency drives aging phenotypes. [139][140][141] Third, the rate of change of mitochondrial function in individuals influences AD chronology.
Oxidative stress is defined as "an imbalance in pro-oxidants and antioxidants with associated disruption of redox circuitry and macromolecular damage." 142 Oxidative stress is mainly caused by increased levels of reactive oxygen species (ROS) and/or reactive nitrogen species, including superoxide radical anions (O2−), hydrogen peroxide (H2O2), hydroxyl radicals (HO−), nitric oxide (NO), and peroxynitrite (ONOO−). In intact cells, ROS can be produced from multiple sources, including mitochondria, ER, peroxisomes, NADPH oxidases, and monoamine oxidases. 143,144 In AD, neurons exhibit significantly increased oxidative damage and a reduced number of mitochondria, 145 which are the main contributors to ROS generation among these ROS sources. 146,147 The overproduction of ROS and/or an insufficient antioxidant defense can lead to oxidative stress. 148 Before the onset of the clinical symptoms of AD and the appearance of Aβ pathology, there is evidence that the production of ROS increases due to mitochondrial damage. 148 Both mtDNA and cytochrome oxidase levels increase in AD, and the number of intact mitochondria is significantly reduced in AD. 145 Several key enzymes involved in oxidative metabolism, including the dehydrogenase complexes for α-ketoglutarate (α-KG) and pyruvate, and cytochrome oxidase also show reduced expression or activity in AD. [149][150][151][152][153][154] In addition, there is evidence in vitro and in vivo for a direct relationship between oxidative stress and neuronal dysfunction in AD. 155,156 Aβ-dependent endocytosis is involved in reducing the number of NMDA receptors on the cell surface and synaptic plasticity in neurons and brain tissue in AD mice. 157

Fig. 3 (caption). Mitochondria are the main contributors to ROS production, which is significantly increased in AD. The metabolites of the mitochondrial TCA cycle, such as pyruvate, fumarate, malate, OAA, and α-KG, not only directly regulate energy production but also play an important role in the epigenetic regulation of neurons and longevity. 164,173,[187][188][189] For example, SAM provides methyl groups for histone and DNA methyltransferases (HMTs and DNMTs). 165,166 α-KG is a necessary cofactor for TET DNA methylases, histone demethylases (HDMs), and the lysine demethylases KDMs/JMJDs. 167,168 Mitochondria also regulate the levels and redox state of FAD, a cofactor of the histone demethylase LSD1. 175 Dysfunctional mitochondria can be removed by mitophagy, which is also very important in the progression of AD. BNIP3L interacts with LC3 or GABARAP and regulates the recruitment of damaged mitochondria to phagophores. In addition, Beclin 1 is released from its interaction with Bcl-2 to activate autophagy after BNIP3L competes with it. PINK1 promotes autophagy by recruiting the E3 ligase PARK2. Then, VDAC1 is ubiquitinated and binds to SQSTM1. SQSTM1 can interact with LC3 and target this complex to the autophagosome. 445 L. monocytogenes can promote the aggregation of NLRX1 and the binding of LC3, thus activating mitophagy. 446 The MARCH5-FUNDC1 axis mediates hypoxia-induced mitophagy. 447 The mitochondrial proteins NIPSNAP1 and NIPSNAP2 can recruit autophagy receptors and bind to autophagy-related proteins. 448 ROS: reactive oxygen species; TCA: tricarboxylic acid cycle; OAA: oxaloacetate; α-KG: α-ketoglutarate; SAM: S-adenosyl methionine; TET: ten-eleven translocation methylcytosine dioxygenase; FAD: flavin adenine dinucleotide.
Excessive Aβ may also trigger excitotoxicity and stress-related signaling pathways by increasing Ca2+ influx, increasing oxidative stress, and impairing energy metabolism. 158 Although the majority of efforts have focused on genetic variations and their roles in disease etiology, it has been postulated that epigenetic dysfunction may also be involved in AD. 159,160 Indeed, there is growing evidence that epigenetic dysregulation is linked to AD. [161][162][163] Mitochondrial metabolites are required for epigenetic modifications, such as the methylation of DNA and the methylation and acetylation of histones. 164 AD brains exhibit a global reduction in DNA modifications, including 5-methylcytosine and 5-hydroxymethylcytosine. [165][166][167][168] S-adenosyl methionine (SAM) provides a methyl group for histone and DNA methyltransferases in the nucleus. SAM is generated and maintained by coupling one-carbon metabolism and mitochondrial energy metabolism. 169,170 α-KG, which is generated by the tricarboxylic acid (TCA) cycle in mitochondria and the cytosol, is a cofactor of the ten-eleven translocation methylcytosine dioxygenase (TET) DNA methylases, histone demethylases (HDMs), and the lysine demethylases KDMs/JMJDs. 171,172 However, the activities of KDMs/JMJDs and TETs can be inhibited by fumarate, succinate, and 2-hydroxyglutarate. 173 Mutations that affect the succinate dehydrogenase complex and fumarate hydratase can induce the accumulation of succinate and fumarate, respectively. 174 Oxidized flavin adenine dinucleotide (FAD) is an essential cofactor of the HDM LSD1, a member of the KDM family. 175 In addition, acetyl-CoA, the source of the acetyl groups that are consumed by histone acetyltransferases, is generated by ATP citrate lyase and pyruvate dehydrogenase in the cytosol and mitochondria, respectively. 176 In addition, oxidized nicotinamide adenine dinucleotide (NAD+) is a cofactor for sirtuins (SIRTs), a family of deacetylases that includes the nuclear-localized SIRT1, SIRT6, and SIRT7, the cytosolic SIRT2, and three mitochondrial SIRTs (SIRT3, SIRT4, and SIRT5) (Fig. 3). Therefore, the activities of SIRTs are sensitive to and regulated by cellular NAD+ pools. 177 As summarized by Fang, NAD+ replenishment can enhance autophagy/mitophagy mainly through SIRT1 or SIRT3; meanwhile, SIRT6 and SIRT7 induce autophagy through the inhibition of mTOR; NAD+ may also inhibit autophagy/mitophagy through SIRT2, SIRT4, SIRT5, and poly(ADP-ribose) polymerases. 178 In short, mitochondrial dysfunction can partially explain the epigenetic dysregulation in aging and AD.
Dysfunctional mitochondria can be removed by mitophagy, a term that was first coined by Dr Lemasters in 2005. 179 Since then, mitophagy has been linked to various diseases, including neurodegenerative disorders such as PD 180 and Huntington's disease (HD), 181 as well as normal physiological aging. 182 Mitophagosomes can effectively degrade their internalized cargo by fusing with lysosomes during axonal retrotransport. 183 Fang et al. demonstrated that neuronal mitophagy is impaired in AD. 184 Mitophagy stimulation can reverse memory impairment, diminish insoluble Aβ 1-42 and Aβ 1-40 through the microglial phagocytosis of extracellular Aβ plaques, and abolish AD-related tau hyperphosphorylation. 184 Therefore, deficiencies in mitophagy may have a pivotal role in AD etiology and may be a potential therapeutic target. 178,[184][185][186] The metabolites of mitochondrial TCA, such as pyruvate, fumarate, malate, oxaloacetate (OAA), and α-KG, have been demonstrated to extend lifespan when fed to C. elegans. 173,187-189 Wilkins et al. found that OAA enhances the energy metabolism of neuronal cells. 190 Moreover, OAA can also activate mitochondrial biogenesis in the brain, reduce inflammation, and stimulate neurogenesis. 191 The application of OAA in AD was also investigated by Swerdlow et al., and the results showed that 100-mg OAA capsules did not result in an elevation of OAA in the blood 192 ; higher doses up to 2 g per day were also evaluated in clinical studies, but no results have been posted or published yet.
Clinical trials related to the mitochondrial cascade hypothesis and related hypotheses account for 17.0% of all clinical trials (Fig. 1). Based on the above, the mitochondrial cascade hypothesis and related hypotheses (Fig. 3) may link other hypotheses, including the cholinergic hypothesis, amyloid hypothesis, and tau propagation hypothesis.
Calcium homeostasis and NMDA hypotheses
The calcium homeostasis hypothesis was proposed in 1992 by Mattson et al. They found that Aβ can elevate intracellular calcium levels and render neurons more vulnerable to environmental stimuli. 36 The involvement of calcium in AD was first suggested long ago by Khachaturian, 193 and since then, there have been many efforts to clarify this hypothesis. [194][195][196] Calcineurin can trigger reactive/inflammatory processes in astrocytes, which are upregulated in AD models. 197 In addition, calcium homeostasis is closely related to learning and memory. Rapid autopsies of the postmortem human brain have suggested that calcineurin/nuclear factor of activated T-cells signaling is selectively altered in AD and is involved in driving Aβ-mediated cognitive decline. 198 The evidence indicates that calcium homeostasis may be associated with the development of AD. 193,199 Memantine, a noncompetitive antagonist of NMDA glutamate receptors in the brain, was approved for marketing in Europe in 2002 and received US FDA approval in 2003. 200,201 Memantine is not an AChEI. The functional mechanism of memantine likely involves blocking current flow (especially calcium currents) through NMDA receptors and reducing the excitotoxic effects of glutamate. 202 Memantine is also an antagonist of type 3 serotonergic (5-HT3) receptors and nicotinic acetylcholine receptors, but it does not bind other receptors, such as adrenergic, dopamine, and GABA receptors. The inhibition of NMDA receptors can also reduce the inhibition of α-secretase and thus inhibit the production of Aβ. 203 However, the French Pharmacoeconomic Committee downgraded its rating of the medical benefit provided by memantine in AD from "major" to "low," 67 which was also supported by a recent meta-analysis. 64

Neurovascular hypothesis
The homeostasis of the microenvironment and metabolism in the brain relies on substrate delivery and the drainage of waste through the blood; neurons, astrocytes, and vascular cells form a delicate functional unit that supports the integrity of brain structure and function. [204][205][206] Vascular dysregulation leads to brain dysfunction and disease. Alterations in cerebrovascular function are features of both cerebrovascular pathologies and neurodegenerative diseases, including AD. 38 In 1994, it was demonstrated that the cerebral microvasculature is damaged in AD. 207 Aβ can induce the constriction of the cerebral arteries. 208 In an AD mouse model, neocortical microcirculation is impaired before Aβ accumulation. 209,210 Neuroimaging studies in AD patients have demonstrated that neurovascular dysfunction is found before the onset of neurodegeneration. [211][212][213][214] In addition to aberrant angiogenesis and the senescence of the cerebrovascular system, the faulty clearance of Aβ across the blood-brain barrier (BBB) can initiate neurovascular uncoupling and vessel regression and consequently cause brain hypoperfusion, brain hypoxia, and neurovascular inflammation. Eventually, BBB compromise and a chemical imbalance in the neuronal environment lead to neuronal dysfunction and loss. 215 In mice that overexpress APP, impairment of the neocortical microcirculation is observed. The cerebrovascular effects of Aβ in dementia may involve alterations in cerebral blood flow and neuronal dysfunction. 209 Moreover, neurovascular dysfunction may also play a role in the etiology of AD.
Many factors can lead to changes in the neurovasculature, which in turn affect the occurrence and progression of AD. Of these factors, hyperlipidemia is one of the most important. During the last two decades, growing evidence has shown that a high cholesterol level may increase the risk of AD. In one study, higher levels of low-density lipoprotein (LDL) or total cholesterol were correlated with lower scores on the Mini-Mental State Examination (MMSE) in nondemented patients. High total cholesterol levels in midlife increase the risk of AD nearly threefold: the odds ratio (OR) is 2.8 (95% confidence interval, CI: 1.2-6.7). 216 Midlife obesity is also a risk factor for AD, 217 and midlife adiposity may predict an earlier onset of dementia and Aβ accumulation. 218 In obesity, adipose tissue secretes inflammatory factors, such as tumor necrosis factor (TNF-α), interleukin-1 (IL-1), and interleukin-6, 219 and these factors may induce insulin resistance, produce Aβ deposits, and stimulate excessive tau phosphorylation. 220 A hyperglycemic state is another risk factor. Patients with type 2 diabetes (T2D) have an increased risk of dementia, 221 both vascular dementia (VD) and AD. In the largest and latest meta-analysis of T2D and dementia risk, data from 6184 individuals with diabetes and 38,350 without diabetes were pooled and analyzed. 222 The relative risk (RR) for dementia was 1.51 (95% CI: 1.31-1.74). Analyses of the two common subtypes of dementia, AD and VD, further suggested that T2D conferred an RR of 2.48 (95% CI: 2.08-2.96) for VD and 1.46 (95% CI: 1.20-1.77) for AD. 222 Insulin resistance is a common feature of T2D and SAD. Accumulating evidence supports the involvement of impaired insulin signaling in AD progression. Insulin levels and insulin receptor expression are reduced in AD brains. 223 However, plasma insulin and Aβ levels are both increased in AD patients, suggesting that a decrease in insulin clearance may increase plasma Aβ levels. Blocking insulin signaling in the brain through the intracerebroventricular administration of STZ (the diabetogenic drug streptozotocin) results in various pathological features that resemble those found in human SAD, while the administration of insulin and glucose enhances learning and memory in AD patients. 224,225 Many institutions have conducted clinical trials of statins, drugs that are used to lower blood cholesterol, for the treatment of AD. However, in a phase IV clinical trial, simvastatin failed to reduce Aβ-42 and tau levels in the CSF. These results suggest that the use of statins for the treatment of AD requires more evidence. 226 To test the hyperglycemic hypothesis, rosiglitazone (RSG), a drug used for the treatment of type II diabetes mellitus, was evaluated; RSG XR had no effect in a phase III trial. 227 In addition, hypertension has been linked to worse cognition and hypometabolism in AD. AD patients with hypertension exhibit worse cognitive function (on the AD assessment scale-cognitive subscale, P = 0.038) and a higher burden of neuropsychiatric symptoms (on the neuropsychiatric inventory questionnaire, P = 0.016) than those without hypertension. 228 As an antihypertensive medication, ramipril is a specific angiotensin-converting enzyme inhibitor; however, ramipril was tested and failed in a pilot clinical trial.
229 Therefore, trial failures of treatments related to the neurovascular hypothesis and related hypotheses suggest that these hypotheses alone may not be sufficient to explain the etiology of AD.
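The relative risks and odds ratios quoted above are reported with 95% confidence intervals computed on the log scale. As a worked illustration of that arithmetic, here is a minimal Python sketch that derives an RR and its CI from 2×2 counts; the counts below are hypothetical placeholders, not the data underlying the cited meta-analyses.

```python
import math

def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Relative risk with a 95% CI via the log-RR normal approximation."""
    risk_exp = events_exposed / n_exposed
    risk_unexp = events_unexposed / n_unexposed
    rr = risk_exp / risk_unexp
    # Standard error of log(RR) for a 2x2 table.
    se_log_rr = math.sqrt(
        1 / events_exposed - 1 / n_exposed
        + 1 / events_unexposed - 1 / n_unexposed
    )
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lo, hi)

# Hypothetical counts: 620 dementia cases among 6184 individuals with
# diabetes vs 2570 cases among 38,350 without diabetes.
rr, ci = relative_risk(620, 6184, 2570, 38350)
print(f"RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```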
Inflammatory hypothesis
The inflammatory responses of microglia and astrocytes in the central nervous system (CNS) also play important roles in the development of AD. [230][231][232] Microglial cells are brain-specific macrophages in the CNS, and they make up 10-15% of all brain cells. 233 Microglial cells exhibit higher activity in AD patients than in controls. 234 The concentration of aggregated microglial cells near senile plaques and neurons with NFTs in AD patients is usually 2-5 times higher than that in normal individuals. Inflammatory factors that are expressed by microglia and histocompatibility complexes also cause inflammation. 235 In vitro studies have linked Aβ pathology in AD to neuroinflammation. It has been shown that Aβ exerts a synergistic effect on the cytokine-induced activation of microglia. 236 Two studies have confirmed that Aβ can induce glial activation in vivo. 237,238 The fibrillar conformation of Aβ seems to be crucial for such activation. 239 In AD patients, Aβ can bind to microglial cells through the CD36-TLR4-TLR6 receptor complex and the NLRP3 inflammasome complex, destroy cells, release inflammation-inducing factors, such as TNF-α, and cause immune responses. In addition to increased levels of TNF-α, increased levels of the inflammatory cytokines IL-1β, TGF-β, IL-12, and IL-18 in the CNS are also correlated with AD progression and increase damage in the brains of AD patients. 240 Interestingly, CD22 is a B-cell receptor that functions as a negative regulator of phagocytosis. The functional decline of aged microglia may result from the upregulation of CD22; thus, the inhibition of CD22 can enhance the clearance of debris and fibrils, including Aβ oligomers, in vivo, and this process may be potentially beneficial for the treatment of AD. 241 Considerable evidence suggests that the use of anti-inflammatory drugs may be linked with a reduced occurrence of AD. The ability of naproxen and celecoxib to delay or prevent the onset of AD and cognitive decline was evaluated in phase III clinical trials. However, therapeutic efficacy analysis indicated that naproxen and celecoxib do not exert a greater benefit than placebo. In addition, the naproxen and celecoxib groups experienced more adverse events, including hypertension and gastrointestinal, vascular, or cardiac problems, so these phase III clinical trials were discontinued. 242 A clinical trial of lornoxicam in AD patients was also terminated due to a lack of efficacy. These failures suggest that the clinical application of anti-inflammatory drugs for AD treatment needs to be further validated (Table 2).
Metal ion hypothesis
Metal ions that play functional roles in organisms are classified as biometals, while other metal ions are inert or toxic. 243,244 The dyshomeostasis of any metal ion in the body usually leads to disease. In the CNS, biometals, such as copper, zinc, and iron, are required as cofactors for enzymatic activity, mitochondrial function, and neuronal function. 245,246 In healthy brains, free metal ions are stringently regulated and kept at very low levels. 247 Biometal ions are involved in Aβ aggregation and toxicity. In the first study to evaluate biometals and Aβ, published by Bush et al. in 1994, zinc was linked to Aβ. The potential link between biometals and AD has since been intensively studied. 39,[248][249][250] There is evidence of the dyshomeostasis of biometals in AD brains. Biometals, especially zinc and copper, are directly coordinated by Aβ, and biometals such as iron can reach a high concentration (~1 mM) in plaques. 251,252 In the serum of AD patients, the levels of copper that is not associated with ceruloplasmin are elevated. Moreover, a higher copper content in the serum is associated with lower MMSE scores. 253,254 In the serum of AD patients, the levels of Zn2+ ions are decreased compared with those in age-matched controls, whereas the concentration of Zn2+ is elevated in the CSF. 255 The important role of biometals in Aβ formation has been reported in various animal models. For example, the role of Cu2+ in Aβ formation was demonstrated in a cholesterol-fed rabbit model of AD. 256 Administering trace amounts of Cu2+ in drinking water was sufficient to induce Aβ accumulation, the consequent formation of plaques, and deficits in learning. 256 On the other hand, Cu2+ also plays a beneficial role. For example, transgenic mice that overexpress mutant human APP and are treated with Cu2+ show a reduction in Aβ and do not exhibit a lethal phenotype. 257 In contrast, in Drosophila that specifically express human Aβ in the eye, dietary zinc and copper increase Aβ-associated damage, while different chelators of biometals demonstrate favorable effects. 258 During normal aging, the gradual accumulation of iron is observed in some brain areas, such as the substantia nigra, putamen, globus pallidus, and caudate nucleus. [259][260][261][262][263] An increase in the level of iron in AD brains was first demonstrated in 1953. 264 More recently, through the use of magnetic resonance imaging (MRI), iron accumulation was found in AD and was shown to be mainly localized to certain brain areas, such as the parietal cortex, motor cortex, and hippocampus. [265][266][267][268][269][270][271][272] Studies of gene mutations that affect the metabolism of iron have suggested that the dyshomeostasis of iron plays a role in neuronal death, such as the neuronal death that occurs in neurodegenerative disorders like AD. [273][274][275][276][277] Iron overload accelerates neuronal Aβ production and consequently worsens cognitive decline in transgenic AD mice. 278 There is evidence that the levels of labile iron can directly affect APP production via an iron-regulatory element. 279 As a potent source of highly toxic hydroxyl radicals, redox-active iron is actively associated with senile plaques and NFTs. 280 As the most common nutrient deficiency in the world, iron deficiency is also frequently observed and reported in AD. 281 Iron is present in polynuclear iron-sulfur (Fe/S) centers and hemoproteins.
Mitochondrial complexes I-III require Fe/S clusters, and complexes II-IV need hemoproteins for electron transfer and the oxidative phosphorylation of the respiratory chain. 282 Thus, iron deficiency may partially account for hypometabolism in AD, since women with iron deficiency anemia have a higher prevalence of dementia. 283 Interestingly, iron deficiency and iron accumulation in AD seem paradoxical. One potential explanation is that tau differentially regulates the motor proteins dynein and kinesin; specifically, tau may preferentially inhibit kinesin, which transports cargo toward the cell periphery. 284 Tau is distributed in a proximal-to-distal gradient with a low concentration in the cell body. [284][285][286][287] When tau is hyperphosphorylated, it is released from the distal microtubules into the neuronal axon and soma, and thus inhibits kinesin activity and prevents the transport of iron-containing cargo and other cargo (including mitochondria) to the neuronal periphery; this may result in the accumulation of mtDNA and iron in the soma of neurons in AD 145,280 and deficiencies in mitochondria and iron homeostasis in the white matter of the brain. Iron-targeted therapies were recently updated and reviewed. 288 Similar to the amyloid hypothesis, the conjecture that the therapeutic chelation of iron ions is an effective approach for treating AD remains widespread despite a lack of evidence of any clinical benefits. 288 Aluminum (Al), the most abundant metal in the earth's crust, is a nonessential metal ion in organisms. The role of Al in AD needs to be further elucidated. Exley et al. hypothesized that Al is associated with Aβ in AD brains, and Al can precipitate Aβ in vitro into fibrillar structures; in addition, Al is known to increase the Aβ burden in the brains of treated animals, which may be due to a direct or indirect effect on Aβ anabolism and catabolism. 289,290 Biometals may play various roles in AD and may influence its pathogenesis directly or indirectly. For example, biometals indirectly influence energy metabolism and APP processing, 249 while cellular iron levels can directly regulate APP through IREs identified in the 5′-UTR of its mRNA. 291,292

Lymphatic system hypothesis
The lymphatic network and the blood vasculature are essential for fluid balance in the body. 293,294 Below the human skull, the meninges, a three-layer membrane that envelops the brain, contain a network of lymphatic vessels. This meningeal lymphatic system was first discovered in 1787, and interest in this system has been revived recently. [295][296][297] Proteins, metabolites, and waste produced by the brain flow through the interstitial fluid (ISF) and reach the CSF, which circulates through the ventricles and brain meninges. 298 In the classical form of transvascular removal, metabolic waste and other molecules in these fluids are drained from the brain, transported across capillary walls, and cross the BBB. 298,299 Thrane et al. found that, in addition to transvascular removal, perivascular removal occurs, in which the blood vasculature allows the CSF to flow into or exit the brain along the para-arterial space or via paravenous routes, and that aquaporin-4 water channels expressed in astrocytes are essential for CSF-ISF exchange along the perivascular pathway. 300,301 This perivascular route is called the glymphatic system. 302,303 During aging, impairments in the transvascular/perivascular removal of waste may result in Aβ accumulation in the brain.
40,304 Animals that lack aquaporin-4 channels show a 70% decrease in the ability to remove large solutes, such as Aβ. 305,306 Da Mesquita et al. investigated the importance of meningeal lymphatics for Aβ pathology in AD mouse models. They found that ablating meningeal lymphatics leads to Aβ accumulation in the meninges, accelerates Aβ deposition, and induces cognitive deficits. These findings are consistent with the Aβ accumulation observed in the meninges of AD patients. Strategies for promoting the growth of meningeal lymphatic vessels may have the potential to enhance the clearance of Aβ and lessen its deposition, 307,308 but this remains to be further validated.
Other hypotheses
In addition to the above hypotheses, there are many other factors that can affect the occurrence of AD. For a long time (at least 60 years), investigators have suspected that microbes may be involved in the onset and progression of AD; this was first hypothesized by Sjogren et al. in 1952. 309 In addition to McLachlan et al.'s proposal in 1980, 310 several investigators have proposed that AD may be caused by herpes simplex virus. [311][312][313][314] There have been intensive reports suggesting that AD may be associated with various bacterial and viral pathogens, 315-317 especially herpesviridae (including HSV-1, 318,319 EBV, HCMV, HHV-6A, and HHV-7 314,320 ). However, these studies did not determine the underlying mechanisms or identify a robust association with a specific viral species. Recent reports have suggested that Aβ aggregation and deposition may be stimulated by different classes of microbes as part of the innate immune response. Microbes trigger amyloidosis, and newly generated Aβ acts as an antimicrobial peptide, coating microbial particles to fight the infection. [321][322][323] Valaciclovir, an antiviral drug that is used for the management of herpes simplex and herpes zoster, is now in a phase II trial for AD (Table 2).
MicroRNAs (miRNAs) are involved in posttranscriptional gene regulation. [324][325][326][327] The decreased expression of miRNA-107 (miR-107) in AD may accelerate disease progression by regulating the expression of BACE1. 328 In SAD patients, the expression of miR-29a/b-1 is inversely correlated with BACE1 expression. 329 Only one clinical trial related to miRNAs is underway. Gregory Jicha launched a phase I trial to assess the safety and efficacy of gemfibrozil in modulating miR-107 levels for the prevention of AD in subjects with cognitive impairment (Table 2).
Mannose oligosaccharide diacid (GV-971) was developed by researchers at the Shanghai Institute of Medicine, the Chinese Academy of Sciences, the Ocean University of China, and the Shanghai Green Valley Pharmaceutical Co., Ltd. GV-971 is an oceanic oligosaccharide molecule extracted from seaweed. GV-971 may capture multiple fragments of Aβ in multiple sites and multiple states, inhibit the formation of Aβ filaments, and depolymerize filaments into nontoxic monomers 330,331 ; however, an understanding of the exact mechanism is still lacking. GV-971 has been reported to improve learning and memory in Aβ-treated mice. 332 In phase II trials, GV-971 improved cognition in AD patients. 333 In addition, a phase III clinical trial of GV-971 finished with positive results, and it is on its way to the market in China (Table 2).
Interestingly, a pilot clinical trial that included 120 nondemented elderly Chinese individuals (ages 60-79) living in Shanghai compared the effects of interventions (such as walking, Tai Chi, and social interaction) on cognition and whole brain volume, as determined by a neuropsychological battery and MRI scans. 334 The results showed that Tai Chi and social interaction were beneficial, but walking had no effect. Therefore, in addition to promising drugs, a healthy lifestyle can delay the progression of AD.
Opinions
The whole-brain atrophy rate is −0.67 to −0.8% per year in adulthood. 335 Freeman et al.'s results demonstrated that, although the frontal and temporal regions of the cortex undergo thinning, the total number of neurons remains relatively constant from age 56 to age 103. However, there is a reduction in the number of hippocampal neurons in AD but not in normal aging. The loss of neuronal structural complexity may contribute to the thinning that occurs with aging. 336 The integrity of neurons and dendritic structures is the basis for maintaining the normal function of neurons. [337][338][339] Brain atrophy affects the function of neurons, which in turn impairs signal transmission and causes movement disorders, cognitive disorders, etc. [340][341][342][343] Brain atrophy has been shown to be a key pathological change in AD. [344][345][346][347] In particular, the annual atrophy rate of the hippocampus in AD patients (−3.98 ± 1.92%) is two to four times that in healthy individuals (−1.55 ± 1.38%). At the same time, the annual increase in the temporal lobe volume of the lateral ventricle in AD patients (14.16 ± 8.47%) is significantly greater than that in healthy individuals (6.15 ± 7.69%). 348 The ratio of the volume of the lateral ventricle to the volume of the hippocampus may be a reliable measurement for evaluating AD, since the ratio can minimize variances and fluctuations in clinical data and may be a more objective and sensitive method for diagnosing and evaluating AD. In 1975, brain atrophy and a reduction in perfusion were detected in AD patients. 349 In 1980, atrophy of hippocampal neurons and abnormal brain metabolism were first discovered in AD patients with PET. 350 Brain volume reduction in patients with AD is significantly associated with dementia severity and cognitive disturbances as well as neuropsychiatric symptoms. 351 The development of broad-spectrum drugs that target brain atrophy, a common feature of neurodegenerative diseases, is still ongoing. In our previous work, RAS-RAF-MEK signaling was demonstrated to protect hippocampal neurons from atrophy caused by dynein dysfunction and mitochondrial hypometabolism (tetramethylrhodamine ethyl ester-mediated mitochondrial inhibition), suggesting the feasibility of interventions for neuronal atrophy. 352 The MAPK pathway protects neurons against dendritic atrophy and relies on MEK-dependent autophagy. 352 Autophagy is the principal cellular pathway by which degraded proteins and organelles are recycled, and it plays an essential role in cell fate in response to stress. [353][354][355][356][357] Aged organelles and protein aggregates are cleared by the autophagosome-lysosome pathway, which is particularly important in neurons. [358][359][360] Growing evidence has implicated defective autophagy in neurodegenerative diseases, including AD, PD, amyotrophic lateral sclerosis, and HD. 358,[361][362][363][364] Recent work using live-cell imaging determined that autophagosomes preferentially form at the axon tip and undergo retrograde transport to the cell body. 365 As a key protein in autophagy, Beclin 1 is decreased in the early stage of AD. 357,366,367 Moreover, a decrease in autophagy induced by the genetic ablation of Beclin 1 increases intracellular Aβ accumulation, extracellular Aβ deposition, and neurodegeneration. 368 Autophagy decline also causes microglial impairments and neuronal ultrastructural abnormalities.
368 On the other hand, transcriptome evidence has revealed enhanced autophagy-lysosome function in centenarians. 369 PPARA-mediated autophagy can reduce AD-like pathology and cognitive decline. 370 These results suggest that autophagy is a potential therapeutic target for AD. MEK-dependent autophagy is protective in neuronal cells. 352 The activation of the MEK-ERK signaling pathway can reduce the production of toxic amyloid Aβ by inhibiting γ-secretase activity. [371][372][373][374][375] Thus, MEK-dependent autophagy may provide a potential way to enhance Aβ and NFT clearance and may also be a new potential target for AD therapy (Fig. 4).
Hypometabolism is sufficient to cause neuronal atrophy in vitro and in vivo. 352,376,377 Hypometabolism may be a potential therapeutic target for AD. 378 Regional hypometabolism is another characteristic of AD brains (Fig. 5). The human brain makes up 2% of the body weight but consumes up to ~20% of the oxygen supply; the brain is energy demanding and relies on the efficiency of the mitochondrial TCA cycle and oxidative phosphorylation for ATP generation. [379][380][381][382] However, glucose metabolism in the brain in AD and mild cognitive impairment is significantly impaired compared with that in normal aging, and the decline in cerebral glucose metabolism occurs before pathology and symptoms manifest and gradually worsens as symptoms progress. [383][384][385] In 1983, de Leon et al. examined aged patients with senile dementia and found a 17-24% decline in the cerebral glucose metabolic rate. 386 Inefficient glucose utilization, impaired ATP production, and oxidative damage are closely correlated, and these deficiencies have profound consequences in AD. 387,388 For example, ATP deficiency causes the loss of the neuronal membrane potential, since Na+/K+ ATPase fails to maintain proper intracellular and extracellular gradients of Na+ and K+ ions. In addition, the propagation of action potentials and neurotransmission are hindered by energy insufficiency. Moreover, after membrane depolarization (mainly due to the dissipation of the Na+ and K+ ion gradients), Ca2+ flows down the steep gradient (~1.2 mM extracellular Ca2+ to ~0.1 μM intracellular Ca2+) into the cell, raising intracellular Ca2+ levels and stimulating the activities of various Ca2+-dependent enzymes (including endonucleases, phospholipases, and proteinases), eventually contributing to neuronal dysfunction and death. 158 Mitochondria are the most energetically and metabolically active organelles in the cell. 389,390 Mitochondria are also dynamic organelles; they experience changes in their functional capacities, morphologies, and positions [391][392][393] so that they can be transported, and they respond to physiological signals to meet the energy and metabolic demands of cellular activities. [394][395][396] In addition to causing neuronal atrophy, mitochondrial dysfunction leads to hypometabolism, which in turn contributes to the progression of AD. [397][398][399] Indeed, there is evidence that hypometabolism and neuronal atrophy coexist in patients with amyloid-negative AD. 400 In addition to mitochondrial dysfunction, hypoperfusion and hypoxia in vascular diseases may also cause hypometabolism in the brain and thus contribute to the progression of AD (Fig. 5). Meanwhile, as the synthesis of acetylcholine requires acetyl-CoA and ATP, hypometabolism leads to a decrease in acetylcholine synthesis in neurons, which suggests that hypometabolism may be an underlying explanation for the acetylcholine hypothesis (Fig. 5).
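To make the "steep gradient" quantitative, the Nernst equation (not written out in the text above) converts the quoted Ca2+ concentrations into an equilibrium potential; the following is a minimal sketch, assuming body temperature and the concentrations given above.

```python
import math

R = 8.314      # J / (mol K), gas constant
F = 96485.0    # C / mol, Faraday constant
T = 310.0      # K, approximate body temperature (assumed)
z = 2          # valence of Ca2+

ca_out = 1.2e-3   # M, extracellular Ca2+ (~1.2 mM, from the text)
ca_in = 0.1e-6    # M, intracellular Ca2+ (~0.1 uM, from the text)

# Nernst equilibrium potential: E = (RT / zF) * ln([out] / [in])
e_ca = (R * T) / (z * F) * math.log(ca_out / ca_in)
print(f"E_Ca = {e_ca * 1000:.0f} mV")  # roughly +125 mV, a strong inward driving force
```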
The relationship between hypometabolism and autophagy in neurons is still unknown, 352 but calorie restriction (CR) is known to enhance autophagy. CR-induced autophagy can recycle intracellular degraded components and aggregates to maintain mitochondrial function. 401 Hypometabolism and a simultaneous decrease in autophagy can worsen the situation and lead to the dysfunction and atrophy of neurons. Hypometabolism and a simultaneous decrease in autophagy may be causative factors of brain atrophy and AD (Fig. 6).
Perspective
With the aging of the population, AD has increasingly become a medical and social concern. There are currently four clinically used drugs (a total of five therapies, the fifth of which is a combination of two drugs) that have been approved by the FDA, but they only treat the symptoms and have no significant effect on the progression of AD. Based on this retrospective review of AD and the lessons learned, we propose that fluoxetine, 402 a selective serotonin reuptake inhibitor (SSRI), may have strong potential for the treatment of AD (Fig. 7).
Based on functional brain imaging with PET, there is evidence that serotonin plays an important role in aging, late-life depression, and AD. 403 Short-term treatment with the antidepressant fluoxetine can trigger pyramidal dendritic spine synapse formation in the rat hippocampus. 404 In an MRI study of fluoxetine for the treatment of major depression, Vakili et al. found that female responders had a statistically significantly higher mean right hippocampal volume than that of nonresponders. 405 Long-term treatment with fluoxetine can promote the neurogenesis and proliferation of hippocampal neurons in mice through the 5-HT1A receptor, and this can relieve anxiety phenotypes in mice 406 and enhance mitochondrial motility. 407 5-HT4A receptors that are expressed by mature neurons in the hippocampal dentate gyrus are also important for promoting neurogenesis and dematuration. [408][409][410] Fluoxetine can promote neurogenesis not only in the hippocampus but also in the anterior cortex and hypothalamus. 411 This action depends on BDNF, as fluoxetine can enhance the phosphorylation of methyl-CpG binding protein 2 (MeCP2) at serine 421 to relieve its transcriptional inhibition and thereby promote the expression of BDNF. 412,413 In addition to promoting neurite outgrowth and neurogenesis, enhanced BDNF signaling can rearrange the subcellular distribution of α-secretase, which increases its binding to APP peptides; in addition, the activity of β-secretase is inhibited after BDNF treatment. 414 Moreover, the serotonylation of glutamine (at position 5) in histone H3 in a transglutaminase 2-mediated manner is a sign of permissive gene expression. 415 Furthermore, fluoxetine has been reported to bind and inhibit NMDA receptors directly in the CNS, 416 and this can reduce the inhibition of α-secretase and thus prevent the production of Aβ. 203,417

Fig. 4 (caption). Schematic representation of autophagy. Yellow box: mTOR-dependent autophagy pathways. Growth factors can inhibit autophagy via activating the PI3K/Akt/mTORC1 pathway; under nutrient-rich conditions, mTORC1 is activated, whereas under starvation and oxidative stress, mTORC1 is inhibited. AMPK-dependent autophagy activation can be induced by starvation and hypoxia. 449 Ras can also activate autophagy via activating PI3K, 352 while p300 can inhibit autophagy. 450 p38 promotes autophagy by phosphorylating and inactivating Rheb and then inhibiting mTOR under stress. 451 Green boxes: mTOR-independent autophagy pathways. The PI3KCIII complex (also called the Beclin 1-Vps34-Vps15 complex) is essential for the induction of autophagy and is regulated by interacting proteins, such as the negative regulators Rubicon, Mcl-1, and Bcl-XL/Bcl-2, while proteins including UVRAG, Atg14, Bif-1, VMP-1, and Ambra-1 induce autophagy by binding Beclin 1 and Vps34 and promoting the activity of the PI3KCIII complex. 357 In addition, various kinases also regulate autophagy. ERK and JNK-1 can phosphorylate Bcl-2, release its inhibition, and consequently induce autophagy; the phosphorylation of Beclin 1 by Akt inhibits autophagy, whereas the phosphorylation of Beclin 1 by DAPK promotes autophagy. 452 Autophagy can be inhibited by the action of PKA and PKC on LC3. Finally, Atg4, Atg3, Atg7, and Atg10 are autophagy-related proteins that mediate the formation of the Atg12-Atg5-Atg16L1 complex and LC3-II. 453 RAS and p300 can also regulate autophagy via the mTOR-independent pathway. 454
Fluoxetine also inhibits γ-secretase activity and reduces the production of toxic amyloid Aβ by activating MEK-ERK signaling. 371,372 In addition, fluoxetine can bind to the endoplasmic reticulum protein sigma-1 receptor. 418 Sigma-1 receptor ligands can enhance acetylcholine secretion. 419,420 The sigma-1 receptor activator Anavex 2-73 has entered a phase III clinical trial after being granted fast-track status by the FDA because of promising results in phase II. The sigma-1 receptor is located in the mitochondrion-associated ER membrane, so activation of the sigma-1 receptor can prolong Ca2+ signaling in mitochondria. 421 Consequently, the local and specific elevation of [Ca2+] in the mitochondrial matrix can enhance ATP synthesis, 422,423 which ameliorates hypometabolism.
Fig. 5 (caption). In addition to mitochondrial dysfunction, hypometabolism may underlie the cholinergic hypothesis, metal ion hypothesis, and neurovascular hypothesis. a Glucose is enzymatically catalyzed to produce pyruvate. Pyruvate is converted to acetyl-CoA and then enters the TCA cycle or is used in the cytoplasm to synthesize acetylcholine. However, in AD patients, because of hypometabolism, the production of acetyl-CoA and ATP is insufficient, which leads to a reduction in acetylcholine synthesis. b Mitochondrial complexes I-III require Fe/S clusters, and complexes II-IV need hemoproteins for electron transfer and the oxidative phosphorylation of the respiratory chain. When iron deficiency occurs, the production of Fe/S clusters and hemoproteins decreases, thereby affecting mitochondrial function and resulting in hypometabolism. In addition, copper is essential for the function of complex IV. Clearly, Cu-Zn superoxide dismutase (SOD1) requires copper and zinc. 455,456 c Hypoperfusion and hypoxia in vascular diseases lead to an insufficient oxygen supply, which in turn leads to insufficient ATP synthesis, resulting in hypometabolism in AD patients. TCA: tricarboxylic acid cycle; SOD1: superoxide dismutase 1.

In addition, our group examined the effect of SSRIs on cognitive function in AD by conducting a meta-analysis of randomized controlled studies. Of the 854 articles identified, 14 articles that involved 1091 participants were eligible for inclusion. We compared changes in MMSE scores between SSRI treatment groups and the placebo group, and we found that SSRIs may contribute to improved cognitive function, with a mean difference (MD) of 0.84 (95% CI: 0.32-1.37, P = 0.002) compared with the control. Further subgroup analysis exploring the effect of fluoxetine and other SSRIs revealed a beneficial effect of fluoxetine (MD = 1.16, 95% CI: 0.41-1.90, P = 0.002) but no benefit of other SSRIs (MD = 0.58, 95% CI: −0.17-1.33, P = 0.13) on cognitive function. 424 Consequently, all of the above evidence indicates that fluoxetine has strong potential for the treatment of AD. In addition, because of the wealth of supporting evidence above and the weak role of other SSRIs, such as escitalopram, in promoting BDNF release, 425 fluoxetine was singled out as a potential therapy for the treatment of AD, not just as a complementary treatment. 426 As summarized and illustrated in Fig. 7, the exact mechanisms of the effects of fluoxetine remain to be further clarified.
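For readers unfamiliar with how the pooled mean difference above is obtained, the following minimal sketch implements fixed-effect inverse-variance pooling; the per-study values are hypothetical placeholders, not the 14 trials analyzed above.

```python
import math

# Hypothetical per-study mean differences in MMSE and their standard errors.
studies = [(1.2, 0.6), (0.5, 0.4), (1.0, 0.8), (0.7, 0.5)]

# Fixed-effect inverse-variance pooling: weight each study by 1 / SE^2.
weights = [1 / se**2 for _, se in studies]
pooled_md = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
lo, hi = pooled_md - 1.96 * pooled_se, pooled_md + 1.96 * pooled_se
print(f"MD = {pooled_md:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```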
Finally, to summarize this review of the history and progress of hypotheses and clinical trials for AD, the most perplexing question regards the amyloid hypothesis and its failed clinical trials, which account for 22.3% of all clinical trials (Fig. 1). Although mutations in APP, PSEN1, or PSEN2 only account for ~0.5% of all AD cases, 11 mutations in PSEN1, which is the most common known genetic cause of FAD and functions as the catalytic subunit of γ-secretase, 427,428 may cast light upon Aβ and its paradox. In 2017, Sun et al. analyzed the effect of 138 pathogenic mutations in PSEN1 on the production of Aβ-42 and Aβ-40 peptides by γ-secretase in vitro; they found that 90% of these mutations led to a decrease in the production of Aβ-42 and Aβ-40 and that 10% of these mutations resulted in decreased Aβ-42/Aβ-40 ratios. 429 This comprehensive assessment of the impact of FAD mutations on γ-secretase activity and Aβ production does not support the amyloid hypothesis and suggests an alternative therapeutic strategy aimed at restoring γ-secretase activity 430 ; this is also supported by the fact that the functional loss of both PSEN1 and PSEN2 in the mouse postnatal forebrain causes memory impairment in an age-dependent manner. 431 Considering that the activation of Notch signaling by γ-secretase cleavage 432 is not involved in age-related neurodegeneration, 433 other signaling pathways mediated by Aβ and/or other products of γ-secretase substrates, such as ErbB4, 434 E-cadherin, 435 N-cadherin, 436 ephrin-B2, 437 CD44, 438 and LDL receptor-related protein, 439 may play active roles in neuronal survival in the adult brain.
The most interesting and challenging phenomenon regarding fluoxetine is that fluoxetine is clinically more effective in women than in men 440 and that the prevalence of AD and other dementias is higher in women than in men 441 ; meanwhile, women live significantly longer than men. 442 These phenomena suggest that there are interplays or trade-offs between AD and longevity. In particular, APOE is the strongest genetic risk factor for AD [18][19][20][21] and is the only gene associated with longevity that achieves genome-wide significance (P < 5 × 10−8). 443 APOE4 is associated with a risk of AD that declines after the age of 70; the OR for APOE4 heterozygotes remains above unity at almost all ages; surprisingly, however, the OR for APOE4 homozygotes dips below unity after the age of 89. 444 There may be genetic and nongenetic factors that interact with APOE4, lead to shorter survival in more aggressive forms of AD, or promote longevity in an age-dependent manner. 11 Uncovering the puzzle of APOE4 and the mystery of longevity may provide insights for AD prevention.

Fig. 7 (caption). The potential mechanisms of fluoxetine in the remission of AD. As a selective 5-HT reuptake inhibitor, fluoxetine can increase the extraneuronal concentration of 5-HT. 5-HT binds to the 5-HT4A receptor to promote neuronal dematuration through a Gs-mediated pathway. 5-HT binds to the 5-HT1A receptor, which is involved in BDNF-dependent neurogenesis through the Gi-mediated signaling pathway. After 5-HT stimulation, MeCP2 is phosphorylated at Ser421 through CaMKII-dependent signaling, and this promotes the dissociation of CREB from HDAC and then increases the expression of BDNF. BDNF activates downstream signaling pathways, including the MEK-ERK pathway, which might promote the activity of α-secretase, inhibit γ-secretase, and reduce the production of toxic amyloid Aβ. Moreover, the serotonylation of histone H3 at glutamine 5 (Q5) enhances the binding of H3K4me3 and TFIID and allows gene expression. Fluoxetine has been reported to bind and inhibit NMDA receptors directly, which can reduce the inhibition of α-secretase and thus prevent the production of Aβ. In addition, fluoxetine can bind to the endoplasmic reticulum protein sigma-1 receptor, which induces the dissociation of Bip from the sigma-1 receptor and promotes neuroprotection. 5-HT: serotonin; ER: endoplasmic reticulum.
"year": 2019,
"sha1": "c441a6d89eef52919613d66561403947a66c7cf5",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41392-019-0063-8.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0c3ee3d3fdf0d846d24cb3fbc3264add12320954",
"s2fieldsofstudy": [
"History",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
RNAsolo: a repository of cleaned PDB-derived RNA 3D structures
Abstract
Motivation: The development of algorithms dedicated to RNA three-dimensional (3D) structures contributes to the demand for training, testing and benchmarking data. A reliable source of such data derived from computational prediction is the RNA-Puzzles repository. In contrast, the largest resource with experimentally determined structures is the Protein Data Bank. However, files in this archive often contain other molecular data in addition to the RNA structure itself, which—to be used by RNA processing algorithms—should be removed.
Results: RNAsolo is a self-updating database dedicated to RNA bioinformatics. It systematically collects experimentally determined RNA 3D structures stored in the PDB, cleans them of non-RNA chains, and groups them into equivalence classes. It allows users to download various subsets of data—clustered by resolution, source, data format, etc.—for further processing and analysis with a single click.
Availability and implementation: The repository is publicly available at https://rnasolo.cs.put.poznan.pl.
Introduction
RNA molecules constitute a rich, heterogeneous universe, both at the structural and the functional level. They are a fascinating object of basic and applied research in many scientific disciplines. A significant fraction of this research focuses on the tertiary [three-dimensional (3D)] structure. Scientists look for its relationship to intermolecular interactions and, in the longer term, to the molecule's role in the organism at the molecular and cellular levels. They also try to predict the 3D structure and design molecules with predefined properties.
Computer algorithms specialized for processing structural data are a great help in such studies. Their correctness and precision depend highly on reliable training and test sets (Carrascoza et al., 2022; Popenda et al., 2021). Well-structured datasets also help to perform comparative analyses of different computational methods. Such data collections should be sufficiently numerous and contain non-redundant, representative information selected appropriately for the problem solved. The primary source of reliable structural data is the Protein Data Bank (Berman et al., 2000), which collects molecular structures determined by various experimental methods. Here, researchers interested in ribonucleic acid molecules can find naked (solo) RNAs, protein-RNA complexes and DNA-RNA hybrids. In turn, the repository created by the RNA-Puzzles initiative makes available RNA 3D structures predicted by various state-of-the-art computational methods (Magnus et al., 2020). Searching one of these archives often starts the process of creating a training set (to train an ML algorithm), a test set or a benchmark set (to verify the quality and accuracy of a new algorithm or compare it with the state-of-the-art ones). The retrieved data are usually clustered, stripped of redundancy, cleaned of metadata and non-RNA data, and then supplemented according to the purpose of the collection. Such processing is relatively easy for in silico models from the RNA-Puzzles repository, since they are standardized and grouped by challenge and computational method. In contrast, organizing a set of PDB structures requires additional operations and resources, for example, searching the BGSU RNA site (Leontis and Zirbel, 2012), which provides a list of non-redundant RNA 3D structures. Examples of archives created using a similar procedure include RNABase, no longer maintained (Murthy and Rose, 2003), or the recently published RNANet, which collects sequences and structures of RNA homologs (Becquey et al., 2021).
In this work, we respond to the need for fast, easy and automatic creation of sets of RNA 3D structures to train, test and benchmark bioinformatics algorithms. We present RNAsolo, designed to systematically collect experimentally determined RNA 3D structures stored in the Protein Data Bank (Berman et al., 2000), clean them of non-RNA data, annotate them and assign them to equivalence classes according to Leontis and Zirbel (2012). Its primary advantage is the ability to select RNA structures of interest and download them as a dataset ready for further processing—all with a single click. The RNAsolo database has been freely accessible online since July 2021. It automatically updates every Thursday. As part of each update, 192 benchmark sets are prepared as ZIP archives, so the users have them ready at a glance.
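The figure of 192 archives follows from the four grouping axes described in Materials and methods below: 3 data formats × 4 molecule classes × 2 redundancy levels × 8 resolution cutoffs. A minimal enumeration sketch (the file-naming scheme used here is illustrative, not RNAsolo's actual one):

```python
from itertools import product

# The four grouping axes described in Materials and methods:
formats = ["cif", "pdb", "fasta"]                                   # 3 data formats
molecules = ["solo", "prot-rna", "dna-rna", "all"]                  # 4 molecule classes
redundancy = ["representative", "all-member"]                       # 2 redundancy levels
resolutions = ["1.5", "2.0", "2.5", "3.0", "3.5", "4.0", "20.0", "all"]  # 8 cutoffs

archives = [f"{f}_{m}_{r}_{res}.zip"
            for f, m, r, res in product(formats, molecules, redundancy, resolutions)]
print(len(archives))  # 3 * 4 * 2 * 8 = 192
```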
Materials and methods
Data processing in the RNAsolo system consists of six steps: primary data collection, non-RNA information removal, structure data completion and database population, data visualization, statistics compilation and ZIP archive creation (Fig. 1). First, RNAsolo connects to the BGSU RNA site (Leontis and Zirbel, 2012). This webpage publishes, once a week, a list of equivalence classes of PDB-deposited RNA structures. These classes aim to support the building of benchmark sets of RNA structures by filtering out redundancies that could bias the results while retaining sequence variation.
The classes are defined based on a pairwise analysis of structural redundancy. In general, two RNAs are non-redundant if they come from different species. If associated with the same organism, they may or may not be redundant—this is decided based on sequence comparison and structural superposition focused on geometry-based similarities and differences. Every class has a representative selected following three criteria: the number of FR3D-annotated base pairs per nucleotide (Sarver et al., 2008), experimental resolution and release date. High-resolution structures with a more recent publication time are preferred (Leontis and Zirbel, 2012). From the BGSU RNA site, we retrieve information about changes in equivalence classes resulting from the most recent update of the Protein Data Bank (Berman et al., 2000). Classes are analyzed separately for each resolution cutoff x, where x ∈ {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 20.0} Å. Additionally, a list of all equivalence classes independent of resolution is retrieved. Based on this information, the RNAsolo procedure downloads from the PDB archive new and modified RNA structures (in mmCIF and PDB format, if available) and dereferences entries deleted from the PDB. Assignments to equivalence classes in the RNAsolo system are automatically updated. Then, the cleaning procedure removes non-RNA chains from all downloaded files (a minimal sketch of this idea is shown below). In the next step, the system prepares a set of structural data (sequence, 3D structure, number of residues, model and chain identifiers—in the latter case, for consistency between mmCIF and PDB formats, author asym_id is preferred over label asym_id) and metadata (source structure and title) and populates the database. Wherever required, the RNAsolo procedure adds symmetry operators. They are extracted from the source mmCIF data, transformed into the PDB format, and saved in the RNAsolo system in the mmCIF and PDB files. If a PDB file does not exist for some RNA in the Protein Data Bank, we produce one for the cleaned tertiary structure if possible (i.e., all restraints of the PDB format are met) and add it to the RNAsolo resources. Note, however, that not all structures can be saved in the PDB format, and therefore the set of PDB files in the RNAsolo database may be less numerous than the corresponding mmCIF set. For each RNA molecule, the system creates a PyMOL-based, static 3D structure image and visualizes it in the Mol* viewer (Sehnal et al., 2021). Basic statistics about equivalence classes and structures in different subgroups are collected and visualized. Finally, RNAsolo creates 192 ZIP archives. They contain different subsets of RNA data grouped by data format (3D structure in mmCIF or PDB, sequence in FASTA), molecule classification (solo RNA molecules, RNAs from protein-RNA complexes, RNAs from DNA-RNA hybrids, all molecules), redundancy (representatives or all members of equivalence classes) and resolution (≤1.5, ≤2.0, ≤2.5, ≤3.0, ≤3.5, ≤4.0, ≤20.0 Å, or all). The resolution cutoff values have been adopted from the BGSU representative sets.
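The cleaning scripts themselves are not reproduced in this note; the following is a minimal Biopython sketch of the general idea, keeping only chains that contain standard ribonucleotides. A production pipeline would also have to handle modified residues, symmetry operators and PDB-format restraints, as described above.

```python
from Bio.PDB import MMCIFParser, PDBIO, Select

RNA_RESIDUES = {"A", "U", "G", "C"}  # standard ribonucleotides only

class RNAOnly(Select):
    """Keep only chains that contain at least one standard RNA residue."""
    def accept_chain(self, chain):
        return any(res.get_resname().strip() in RNA_RESIDUES for res in chain)

# Parse a downloaded mmCIF file (the PDB id and file names are examples).
parser = MMCIFParser(QUIET=True)
structure = parser.get_structure("1ehz", "1ehz.cif")

# Write back only the RNA chains, here in PDB format.
io = PDBIO()
io.set_structure(structure)
io.save("1ehz_rna_only.pdb", select=RNAOnly())
```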
RNAsolo has a multi-layer architecture. The front-end, implemented in TypeScript with the React.js and Ant Design libraries, is served by the Nginx web server. Its responsive UI supports all modern web browsers and platforms, including mobile devices. The back-end layer, written in Python3, uses the Django framework. It integrates a relational database provided by PostgreSQL and Celery, a queueing system for reliable execution of cyclic operations such as database updates. Our Python and BioPython scripts are applied to download data files, preprocess RNA structures and filter non-RNA data.
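As a rough sketch of how such cyclic updates might be scheduled with Celery, consider the following; the broker URL, module name and task body are assumptions for illustration rather than RNAsolo's actual configuration.

```python
# Sketch: scheduling a weekly database refresh with Celery beat, in the
# spirit of RNAsolo's cyclic updates. Broker URL, module layout and task
# body are illustrative assumptions.
from celery import Celery
from celery.schedules import crontab

app = Celery("rnasolo_tasks", broker="redis://localhost:6379/0")

@app.task
def weekly_update():
    # 1) fetch equivalence classes, 2) download new/changed structures,
    # 3) clean non-RNA chains, 4) repopulate the DB, 5) rebuild ZIP archives
    ...

app.conf.beat_schedule = {
    "weekly-db-update": {
        "task": "rnasolo_tasks.weekly_update",
        # The live database is updated every Thursday (see Results).
        "schedule": crontab(hour=3, minute=0, day_of_week="thursday"),
    },
}
```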
Results
The RNAsolo database is updated every Thursday. All changes are recorded in the Update log accessible from the main menu. The distribution of data is illustrated by graphs and tables on the Database statistics page. The database currently contains 12 914 RNA tertiary structures, including 2101 solo RNAs, 10 694 RNAs from protein-RNA complexes and 119 from DNA-RNA hybrids, determined in different experiments (Table 1). As we rely on representative datasets from the BGSU RNA site (Leontis and Zirbel, 2012), we adopt its understanding of a structure. Thus, the RNA 3D structure files in RNAsolo store individual chains or, sometimes, multiple chains kept together by the BGSU site. On the other hand, a single PDB entry may contain more than one such structure. As a result, the structure counter in RNAsolo indicates a higher value than the entry count in the PDB. Structures in RNAsolo are clustered into 3271 equivalence classes.
Conclusions
RNA applications in biomedicine and biotechnology have raised the need to learn the structure of this molecule and explore its properties. The Protein Data Bank (Berman et al., 2000) currently stores 1586 solo RNAs, 4070 protein-RNA complexes and 96 DNA-RNA hybrids ready to explore (data as of January 5, 2022). Including the two latter subsets, one gets quite a satisfying amount of structural data: 5752 PDB structures. However, to process them with a focus on RNA, it is necessary to clean the structures of complexes and hybrids of non-RNA chains. Until now, no online tool existed that could automatically extract naked RNAs from multi-molecule PDB files and make them available to users. When needed, we used in-house scripts for data cleaning to enable processing by the RNApolis tools (Szachniuk, 2019). Based on this experience, we have developed RNAsolo, a system that collects cleaned RNA structures, clusters them into equivalence classes, makes them searchable and allows users to create diverse datasets for further study. Although its current functionality is quite broad, extensions are possible. They include expanding the database schema to allow storage of the secondary structure, adding new filtering criteria to the RNAsolo search engine and developing additional functions to collect data statistics. We also plan to broaden the scope of web services to facilitate the work of users automatically processing a wide variety of structural data.
Association of orexin receptor polymorphisms with antipsychotic-induced weight gain.
OBJECTIVES
Antipsychotic-induced weight gain (AIWG) is a common side effect of treatment with antipsychotics such as clozapine and olanzapine. The orexin gene and its receptors are expressed in the hypothalamus and have been associated with maintenance of energy homeostasis. In this study, we have analysed tagging single nucleotide polymorphisms (SNPs) in orexin receptors 1 and 2 (HCRTR1 and HCRTR2) for association with AIWG.
METHODS
Schizophrenia or schizoaffective disorder subjects (n = 218), treated mostly with clozapine and olanzapine for up to 14 weeks, were included. Replication was conducted in a subset of CATIE samples (n = 122) treated with either olanzapine or risperidone for up to 190 days. Association between SNPs and AIWG was assessed using analysis of covariance (ANCOVA) with baseline weight and duration of treatment as covariates.
RESULTS
Several SNPs in HCRTR2 were nominally associated with AIWG in patients of European ancestry treated with either clozapine or olanzapine (P < 0.05). In the replication analysis, two SNPs, rs3134701 (P = 0.043) and rs12662510 (P = 0.012), were nominally associated with AIWG. None of the SNPs in HCRTR1 were associated with AIWG.
CONCLUSION
This study provides preliminary evidence supporting the role of HCRTR2 in AIWG. However, these results need to be confirmed in large study samples.
Introduction
Development of severe weight gain and metabolic syndrome continues to be a major hindrance in the use of second-generation antipsychotics (SGA) such as clozapine and olanzapine. Weight gain and obesity have a detrimental effect on the physical and psychological health of patients and contribute to non-adherence to antipsychotic medication (Crisp et al. 2000; Lieberman et al. 2005a). Concordance of weight gain in monozygotic twins and sibling pairs exposed to antipsychotics suggests a role of genetic factors in AIWG (Wehmeier et al. 2005; Gebhardt et al. 2010). Genetic association studies from our laboratory and others have shown an important influence of genetic variation in genes involved in the maintenance of energy homeostasis. Two of the most important and best replicated findings to date are the associations of genetic variation in the melanocortin 4 receptor (rs489693) and the serotonin 5HT2c receptor (rs3813929) genes with AIWG (Muller and Kennedy 2006; Lett et al. 2012). In this study, we investigate the impact of genetic variation in the important but less studied orexin/hypocretin system genes on AIWG.
The orexin system includes the orexin gene coding for pre-pro-orexin, which is cleaved into two polypeptides, orexin A (OXA, hypocretin 1, 33 amino acids) and orexin B (OXB, hypocretin 2, 28 amino acids). The biological action of the orexin peptides is mediated through two G-protein coupled receptors: orexin receptor 1 (OX1R or HCRTR1) and orexin receptor 2 (OX2R or HCRTR2; Sakurai and Mieda 2011; Kukkonen 2013; Perez-Leighton et al. 2013). Orexin receptors are expressed in several regions of the brain. OX1R, compared with OX2R, is predominant in the locus coeruleus, paraventricular thalamic nucleus and bed nucleus of the stria terminalis. OX2R is mainly expressed in the arcuate nucleus (ARC), paraventricular nucleus and lateral hypothalamic area (Marcus et al. 2001; Funato et al. 2009). OX2R has been shown to play a major role in preventing high-fat diet-induced obesity and insulin insensitivity in mice (Funato et al. 2009). Continuous infusion of an OX2R-selective agonist into the lateral ventricles of wild-type mice on a high-fat diet suppresses food intake and leads to significantly less fat mass and greater energy expenditure. In the same study, mice with OX1R deletion showed improved glucose tolerance and insulin sensitivity on a high-fat diet, suggesting that OX1R may also have a role in mediating the effect of a high-fat diet on glucose metabolism (Funato et al. 2009). Overall, OX2R appears to play a major role in adverse dietary conditions, with OX1R making a minor contribution. The orexin gene and its receptors have also been associated with narcolepsy in mice, dogs and humans (Kukkonen 2013). Interestingly, individuals with narcolepsy have decreased caloric intake but a higher body mass index and increased incidence of metabolic syndrome (Schuld et al. 2000; Nishino 2007). However, the orexin receptors have not been investigated for association with obesity in the general population using focussed comprehensive candidate gene studies.
The orexin system is modulated by leptin (Funato et al. 2009) and sends excitatory signals to neuropeptide Y (NPY)-expressing neurons in the ARC, increasing food intake (Muroya et al. 2004). In addition, it has been shown that the orexin system also interacts with endocannabinoids, as injection of the cannabinoid receptor type 1 (CB1 or CNR1) antagonist rimonabant abolishes feeding induced by intracerebroventricular OXA injection (Crespo et al. 2008). Recently, Cristino et al. (2013) reported that in murine models of obesity (leptin deficient), increased endocannabinoid synthesis causes activation of CB1 receptors. This reduces inhibition of orexinergic neurons and enhances OXA release, leading to hyperphagia and increased body weight gain. Thus, the orexin system interacts with both NPY- and CB1-expressing neurons. We have previously shown that SNPs in CNR1 (rs806378) and NPY (rs16147) were associated with AIWG and significantly interact with each other to increase the risk for AIWG (Tiwari et al. 2010, 2013). More importantly, the potential role of orexin system genes in AIWG is underlined by the observations that antipsychotics associated with weight gain increase neuronal activity in orexin neurons compared with antipsychotics with no weight gain liability (Fadel et al. 2002). In addition, antipsychotics associated with a higher risk of weight gain (e.g., clozapine and olanzapine) activate orexin neurons significantly more than antipsychotics with relatively lower AIWG risk (e.g., risperidone; Fadel et al. 2002). Similarly, in female Sprague-Dawley rats injected with olanzapine, 50% of the neurons activated in the perifornical region of the lateral hypothalamus were OXA positive (Stefanidis et al. 2009). This suggests that antipsychotics with weight gain risk modulate orexin neurons. However, the impact of genetic variation in the orexin system on AIWG has not been investigated to date.
Based on the important role of the orexin system in energy homeostasis and on the influence of antipsychotics with high risk for AIWG on orexin neurons, we analysed tagSNPs in HCRTR1 and HCRTR2 genes for association with AIWG. We hypothesised that the SNPs in these two genes are likely to be associated with AIWG caused by high weight gain risk medications such as clozapine and olanzapine.
Discovery subjects
Patients were diagnosed with schizophrenia or schizoaffective disorder according to DSM-III-R or DSM-IV criteria (n = 218). Written informed consent was obtained from all participants. Detailed demographic and clinical characteristics are provided in Table I and have been published previously (Tiwari et al. 2010, 2013). Patients included in this study were 18-60 years old and were recruited from Charité University Medicine, Berlin, Germany (Sample A, n = 88); Case Western Reserve University, Cleveland, OH, USA (Sample B, n = 74); and Hillside Hospital in Glen Oaks, NY, USA (Sample C, n = 56). Patients from sample A were given mixed antipsychotic medication and assessed for up to 6 weeks. In sample B, patients were either treatment refractory or intolerant to treatment with typical antipsychotics, with no prior exposure to atypical antipsychotics. Patients received clozapine for 6 weeks (dosages titrated up to 600 mg/day) following a 7-14-day washout period (Masellis et al. 1998). In sample C, patients who had a suboptimal response to previous treatment with typical antipsychotic drugs, defined by persistent positive symptoms and a poor level of functioning over the past 2 years, were included. Patients were randomly assigned to receive either clozapine (500 mg/day), olanzapine (20 mg/day), risperidone (8 mg/day) or haloperidol in a 14-week double-blind study (for details see Volavka et al. 2002). Exclusion criteria for these studies included pregnancy, organic brain disorder, severe head injuries, previous medical conditions which required treatment and were not stable (hepatitis C, HIV, thyroid disorder or diabetes mellitus), substance dependence, clinically relevant intellectual disability and severe personality disorder.
Genotyping
Genomic DNA was extracted from blood samples using the high-salt method (Lahiri and Nurnberger 1991). TagSNPs were selected from the CEU population in HapMap (Haploview 4.2; Barrett et al. 2005) using a region ~10 kb upstream and 2 kb downstream of the HCRTR1 and HCRTR2 genes (minor allele frequency >0.05, r² ≥ 0.8). A total of 5 tagSNPs covering a 20-kb region including HCRTR1 (~9.6 kb) and 28 tagSNPs covering 120 kb including HCRTR2 (~108 kb) were investigated in this study. All genotyping was carried out using customised Golden Gate Genotyping Assays (Illumina, Inc., San Diego, CA, USA). As a quality control, 5% of the total sample was re-genotyped, and a 100% concordance rate was observed.
Replication subjects
The replication subjects were a subset of patients who participated in the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE; Lieberman et al. 2005b). Briefly, CATIE was a multicenter, double-blind, multiphase study conducted at 57 sites in the USA between January 2001 and December 2004. A genome-wide association study of pharmacogenomic factors has been conducted (Adkins et al. 2011), and the genome-wide genotyping data are available for 741 chronic schizophrenia patients. These patients were randomly assigned to olanzapine, risperidone, quetiapine, perphenazine or ziprasidone. These drugs have different propensities to cause weight gain, and the patients may have been exposed to weight gain-associated antipsychotics (e.g., olanzapine) prior to randomisation. In addition, if patients are markedly obese at baseline, the probability of gaining a significant amount of weight during the study is low. Therefore, using the CATIE Phase I information, we selected a subset of patients suitable for antipsychotic-induced weight gain studies. This refined sample (n = 122) consisted of patients of European ancestry who had not been treated with high weight gain risk medication (e.g., olanzapine) for more than 14 days before baseline assessment, had a body mass index (BMI) <40, were randomised to either risperidone or olanzapine, and had more than one weight measure available after baseline (Table II).
Orexin receptor SNPs found to be nominally associated with AIWG were not genotyped in the GWAS. Therefore, we performed imputation. The quality-control measures included removing individuals with less than 95% of the markers genotyped, and excluding SNPs that were less than 95% genotyped or had a minor allele frequency of less than 5%. We checked for cryptic relatedness, calculated mean heterozygosity, and removed outliers (4 SD from the mean). In a subsequent step, SNPs were removed if the P-value of the χ²-test for Hardy-Weinberg equilibrium was <1 × 10⁻³. Multi-dimensional scaling (MDS) analysis was used to check for population stratification; outliers were removed and only subjects of European ancestry were selected. Finally, we updated the map positions to build 37 and conducted imputation in 1 Mb segments upstream and downstream of HCRTR2 using IMPUTE v2.
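For illustration, the stated SNP-level filters (call rate of at least 95%, minor allele frequency of at least 5%, Hardy-Weinberg P of at least 1e-3) could be sketched as follows in Python; in practice such quality control is typically run with dedicated tools (e.g., PLINK), and the genotype encoding below is an assumption.

```python
# Sketch of the stated GWAS quality-control filters. Illustrative
# numpy/pandas/scipy version; real pipelines use tools such as PLINK.
import numpy as np
import pandas as pd
from scipy.stats import chi2

def hwe_pvalue(n_aa, n_ab, n_bb):
    """One-df chi-square goodness-of-fit test for Hardy-Weinberg equilibrium."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of allele A
    exp = np.array([n * p**2, 2 * n * p * (1 - p), n * (1 - p)**2])
    obs = np.array([n_aa, n_ab, n_bb])
    stat = ((obs - exp) ** 2 / exp).sum()
    return chi2.sf(stat, df=1)

def passes_qc(genotypes: pd.Series) -> bool:
    """genotypes: per-subject minor-allele counts (0/1/2), NaN = missing."""
    call_rate = genotypes.notna().mean()
    g = genotypes.dropna().astype(int)
    n0, n1, n2 = ((g == k).sum() for k in (0, 1, 2))
    maf = (n1 + 2 * n2) / (2 * len(g))
    maf = min(maf, 1 - maf)
    # Short-circuiting keeps the HWE test away from monomorphic SNPs.
    return call_rate >= 0.95 and maf >= 0.05 and hwe_pvalue(n0, n1, n2) >= 1e-3
```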
Statistical analysis
Pearson's χ²-test for categorical variables and Student's t-test or analyses of variance for continuous variables were used for statistical comparisons (IBM SPSS Statistics, version 20). Analysis of covariance (ANCOVA) was applied to test the association between genotype and weight change (%) from baseline as the dependent variable ([weight at the end of study − weight at baseline] / weight at baseline × 100). Genotypes were entered as a fixed factor, and baseline weight and duration of treatment as covariates. Linkage disequilibrium (LD) was calculated using Haploview 4.2 (Barrett et al. 2005), and haplotypes were analysed with UNPHASED version 3.1.5 (Dudbridge 2003, 2008). Corrections for multiple tests were done using Single Nucleotide Polymorphism Spectral Decomposition (SNPSpD; Nyholt 2004). Power calculations were performed in Quanto 1.2.4 (Gauderman and Morrison 2006). Assuming a minor allele frequency of 0.25 and a sample size of n = 218, we had more than 80% power to detect a mean difference of over 2% between carriers and non-carriers of the risk genotype in an additive model. In the subsample of European ancestry patients treated with either clozapine or olanzapine (n = 86), we had more than 80% power to detect over a 3.25% difference in an additive model. In the discovery sample, the program mbmdr was applied to detect gene-gene interactions between HCRTR2, NPY and CNR1, and significance was estimated using a permutation test (Calle et al. 2010).
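A minimal sketch of the described ANCOVA in Python with statsmodels might look as follows; the input file and column names are assumptions for illustration.

```python
# Sketch of the ANCOVA described above: percent weight change as the
# dependent variable, genotype as a fixed factor, baseline weight and
# treatment duration as covariates. File and column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("aiwg_sample.csv")  # hypothetical input data

# Percent weight change from baseline, as defined in the text.
df["pct_change"] = (df["weight_end"] - df["weight_base"]) / df["weight_base"] * 100

model = smf.ols(
    "pct_change ~ C(genotype) + weight_base + duration_days", data=df
).fit()
print(anova_lm(model, typ=2))  # ANCOVA table with the genotype P-value
```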
Results
All the SNPs analysed in this study were in Hardy-Weinberg equilibrium (P > 0.05; Supplementary Table 1, available online). The SNPs rs77324737 (monomorphic), rs12057176, rs3134712 and rs12111299 occurred at a minor allele frequency of <5% and were excluded from the study. LD plots for the SNPs in HCRTR1 and HCRTR2 are provided in Supplementary Figures 1 and 2, respectively (available online). Among the clinical sites, no significant difference in the amount of weight gained was observed (Tables I and II). The baseline weight and the duration of treatment were significantly different between the three sites and were entered as covariates in all the association analyses.
Association study in the total and European ancestry sample
We performed an exploratory analysis in the total sample and did not observe association of any of the SNPs with AIWG (P > 0.05). Since the sample consisted of patients of different ancestry, we focussed our further analyses on the subset of patients of European ancestry only (n = 151). In this subset, we observed nominal genotypic association of rs4467775 (P = 0.015), rs3134701 (P = 0.026) and rs4142972 (P = 0.007) in HCRTR2 with weight change.
Association study in patients of European ancestry treated with either clozapine or olanzapine
Considering that clozapine and olanzapine carry the highest risk of weight gain and have similar pharmacology, we stratified our sample and conducted an analysis on patients of European ancestry treated with clozapine or olanzapine (n = 86). This subset consisted primarily of individuals who had no or minimal prior exposure to atypical antipsychotic drugs. We observed nominal genotypic association of the above three SNPs as well as rs6922310, rs12662510 and rs2653350 (Table III). Patients with risk genotypes, for example the AA genotype of rs6922310 or A-allele carriers of rs4142972, gained ~2.6 and ~2.8 kg more weight, respectively, than those with the non-risk genotype. In this subsample, nominal allelic associations of several SNPs with weight change (%) were also observed (Table III). None of the SNPs in HCRTR1 were associated with AIWG (Supplementary Table 1, available online). Haplotype analysis within this subsample of European ancestry patients treated with clozapine or olanzapine (n = 86) revealed that haplotypes of the HCRTR2 SNPs rs12111375, rs4142972 and rs6922310 were significantly associated with AIWG after correction for multiple testing (Supplementary Figure 3, available online). Carriers of the T-G-G haplotype (versus all the others pooled together) gained less weight (n_chr = 29, frequency = 0.173; P = 9.1 × 10⁻⁴), whereas the T-A-A haplotype was associated with higher weight gain (n_chr = 36, frequency = 0.214; P = 0.0064). Considering that a majority of the significant SNPs are present in a ~12 kb LD block (Supplementary Figure 2, Block 3, available online), we also explored the association of five-SNP haplotypes with AIWG. The five-SNP haplotype (rs3134701, rs12111375, rs4142972, rs6922310 and rs12662510) was significantly associated with AIWG (P = 0.012). Carriers of the G-T-G-G-G haplotype gained less weight (n_chr = 23, frequency = 0.160; P = 0.0074), whereas A-T-A-A-A haplotype carriers gained significantly more weight (n_chr = 33, frequency = 0.229; P = 0.025).
The correction for multiple comparisons was carried out using SNPSpD. This method takes the linkage disequilibrium between SNPs into account and determines the effective number of independent SNPs. The numbers of independent tests (MeffLi) for HCRTR1 and HCRTR2 were 2 and 14.84, respectively (Nyholt 2004; Li and Ji 2005). The association of weight change with SNPs rs4142972 and rs6922310 remained nominally significant in a dominant model at a gene-wide level (P_corrected = 0.03 and 0.045, respectively; Table III). However, if we consider all the tests done in this study, none of the observations are statistically significant.
We also carried out an exploratory gene-gene interaction analysis between rs4142972 in HCRTR2, rs16147 in NPY and rs806378 in CNR1 in the subsample of European ancestry patients treated with either clozapine or olanzapine. The SNPs rs806378 (CNR1) and rs16147 (NPY) were observed to be significantly associated with AIWG in our earlier studies (Tiwari et al. 2010, 2013). We observed a trend for interaction between the three SNPs (permutation P-value = 0.05). Carriers of the TT genotype at rs16147, the CC genotype at rs806378 and the GG genotype at rs4142972 gained the least weight (β = −7.11, P = 0.00012). An ANCOVA of patients carrying the low-risk (TT, CC and GG) genotypes vs. other genotypic combinations pooled together further supported the finding (TT, CC, GG vs. others: −1.48 ± 4.3% vs. 5.7 ± 5.3%, n = 10 vs. 69, P = 6.13 × 10⁻⁵). Interaction between rs6922310 in HCRTR2, rs16147 in NPY and rs806378 in CNR1 was not significant (permutation P-value = 0.1).
Association study in the replication sample
We carried out the replication analysis of the HCRTR2 SNPs nominally associated with AIWG in the refined CATIE sample (Table IV). In the overall sample of patients treated with either olanzapine or risperidone, rs3134701 (P = 0.043) and rs12662510 (P = 0.012) were nominally associated with BMI change (%). Although the remaining SNPs were not statistically significant, the risk genotypes were similar to those in the discovery sample. In addition, among the subset of patients treated only with olanzapine, the risk genotypes and associations continued to be similar to the discovery sample (Table IV).
Discussion
Our study is the first to investigate genetic variation in the HCRTR1 and HCRTR2 genes and their role in antipsychotic-induced weight gain. In the discovery sample, we observed nominal association of several SNPs, in particular rs4142972 and rs6922310, in the orexin 2 receptor gene (HCRTR2) with AIWG. The SNP rs4142972 remained significant after accounting for all the independent tests in the discovery sample. The SNPs rs4142972 and rs6922310 are intronic, present within 250 bp of each other but not correlated (r² = 0.06), and have no known functional effect. The SNP rs6922310 is correlated with SNPs rs12662510 (r² = 0.84) and rs3134701 (r² = 0.75), which are also intronic and nominally associated with AIWG (Table III). In addition, rs6922310 is primarily correlated with SNPs in the intron 1 region only (Supplementary Figure 4, available online). The SNP rs4142972 is moderately correlated with rs4467775 (r² = 0.49) and other SNPs in the putative promoter region of the HCRTR2 gene (Supplementary Figures 2 and 5, available online). These distinct correlation patterns and independent associations suggest that HCRTR2 may be contributing to the development of AIWG via two independent mechanisms.
In the replication analysis, rs3134701 and rs12662510 were nominally associated with AIWG. In addition, the remaining SNPs nominally associated with AIWG in the discovery sample had the same genotypes associated with higher weight gain in the replication sample (Table IV). The trends in the replication sample are notable, as these observations were made in a sample of chronic schizophrenia patients who may have been exposed to atypical antipsychotics in the past. Our discovery sample consisted primarily of patients undergoing first exposure to atypical antipsychotic drugs. Generally, larger effect sizes are observed in patients undergoing first exposure to antipsychotics compared with chronically treated patients, and genome-wide significant genetic polymorphisms have been detected in small sample sizes (e.g., rs489693, n = 139; Malhotra et al. 2012). These observations suggest that the HCRTR2 gene is likely to play a role in AIWG development.
We also observed a nominal gene-gene interaction between functional SNPs in NPY and CNR1 and rs4142972 in HCRTR2. This interaction may be biologically relevant, since NPY neurons in the arcuate nucleus receive projections from orexin neurons in the lateral hypothalamus (LH), and i.c.v. injection of NPY in the lateral hypothalamus activates orexin neurons (Kageyama et al. 2012). Administration of an HCRTR2 agonist reduces NPY and AGRP expression in mice on a high-fat diet compared with those on a low-fat diet (Funato et al. 2009). Furthermore, endocannabinoids cause increased OXA (Cristino et al. 2013) and NPY release (Gamber et al. 2005), suggesting that these neurotransmitters interact with each other and can together influence feeding behaviour and energy homeostasis. The rs806378 and rs16147 polymorphisms were not genotyped in the CATIE samples, so replication analysis for this interaction was not performed.
Limitations of our study include a relatively small sample size after refining the samples by ethnicity and antipsychotic medication, use of self-reported ancestry in the discovery sample, and limited power to detect small gene effects. In addition, if we consider all the tests performed in this and previous studies, none of the observations would meet the statistical threshold of P < 0.05. Another limitation of this study is that genetic polymorphisms in the orexin gene itself were not tested for association with AIWG. The orexin gene is small (~1.4 kb) and highly conserved, with no known common variations (minor allele frequency >5%). Therefore, we did not investigate this gene in our study. However, this gene is a good candidate for sequencing studies in subjects exhibiting extremely high weight gain.
In summary, we provide preliminary evidence that genetic variation in the orexin receptor 2 gene (HCRTR2) is associated with AIWG. Analysis of this gene in larger sample sets will provide a clearer picture of its contribution to AIWG. Innovation, Ontario, and an OMHF New Investigator Fellowship. JLK is a recipient of a CIHR operating grant. NIC is a recipient of an OMHF Research Studentship.
Statement of interest
NF reports no competing interests. AKT/EJB/CCZ/VFG/NIC/DJM/JLK are authors on a patent application for a multi-gene model (including SNPs presented here) predicting antipsychotic-induced weight gain. HYM has received grants from, or is or was a consultant to: Abbott Labs, ACADIA, Alkermes, Bristol-Myers Squibb, Dainippon Sumitomo, Eli Lilly, EnVivo, Janssen, Otsuka, Pfizer, Roche, Sunovion, and BiolineRx. HYM is a shareholder of ACADIA and GlaxoSmithKline. In the past 3 years, JAL reports having received research funding from, or being a member of the advisory board of, Allon, Alkermes, Bioline, GlaxoSmithKline, Intracellular Therapies, Lilly, Merck, Novartis, Pfizer, Pierre Fabre, Psychogenics, F. Hoffmann-La Roche LTD, Sepracor (Sunovion) and Targacept. JAL received no direct financial compensation or salary support for participation in these research, consulting, or advisory board activities. JLK has been a consultant to GSK, Sanofi-Aventis, and Dainippon-Sumitomo.
Modeling complex systems with adaptive networks
Adaptive networks are a novel class of dynamical networks whose topologies and states coevolve. Many real-world complex systems can be modeled as adaptive networks, including social networks, transportation networks, neural networks and biological networks. In this paper, we introduce fundamental concepts and unique properties of adaptive networks through a brief, non-comprehensive review of recent literature on mathematical/computational modeling and analysis of such networks. We also report our recent work on several applications of computational adaptive network modeling and analysis to real-world problems, including temporal development of search and rescue operational networks, automated rule discovery from empirical network evolution data, and cultural integration in corporate merger.
Introduction
The rapidly growing research on complex networks has presented a new approach to complex systems modeling and analysis [1][2][3][4][5]. It addresses the self-organization of complex network structure and its implications for system behavior, which holds significant cross-disciplinary relevance to many fields of natural and social sciences, particularly in today's highly networked society.
Interestingly, complex network research has historically addressed either "dynamics on networks" or "dynamics of networks" almost separately, without much consideration given to both at the same time. In the former, "dynamics on networks" approach, the focus is on the state transition of nodes on a network with a fixed topology and the trajectories of the system states in a well-defined phase space [6][7][8][9][10][11][12]. This is a natural extension of traditional dynamical systems research to a high-dimensional phase space with non-trivial interaction between state variables. On the other hand, in the latter, "dynamics of networks" approach, the focus is on the topological transformation of a network and its effects on statistical properties of the entire network [13][14][15][16][17][18][19], where a number of key concepts and techniques utilized are borrowed from statistical physics and social network analysis.
When looking into real-world complex networks, however, one can find many instances of networks whose states and topologies "coevolve", i.e., they interact with each other and keep changing, often over the same time scales, due to the system's own dynamics (Table 1). In these "adaptive networks", state transition of each component and topological transformation of networks are deeply coupled with each other, producing emergent behavior that would not be seen in other forms of networks. Modeling and predicting state-topology coevolution is now becoming well recognized as one of the most significant challenges in complex network research [1,5,20,21].
Table 1 (only partially recoverable): examples of real-world adaptive networks, pairing node types such as individuals with link types such as physical contacts.

In this paper, we introduce fundamental concepts and unique properties of adaptive networks through a brief, non-comprehensive review of recent literature on mathematical/computational modeling and analysis of such networks. We also report our recent work on several applications of computational adaptive network modeling and analysis to real-world problems, including temporal development of search and rescue operational networks, automated rule discovery from empirical network evolution data, and cultural integration in corporate merger.
The rest of the paper is structured as follows. In the next section, some of the recent literature is reviewed briefly to illustrate the increasing attention to the field of adaptive networks. In Section 3, we introduce Generative Network Automata (GNA), a theoretical framework for modeling adaptive networks that we have proposed. In Sections 4-6, we present the aforementioned three examples of applications of adaptive network modeling to study the dynamics of complex systems. The final section summarizes and concludes the paper.
Growing Literature on Adaptive Networks
Over the past decade, several mathematical/computational models of state-topology coevolution in adaptive networks have been developed and studied on various subjects, ranging from physical, biological to social and engineered systems. In this section, we introduce a small number of samples taken from the recent literature, categorizing them into five major subjects of interest in the field.
Self-Organized Criticality in Adaptive Neural Systems
The present interest in adaptive networks was triggered by a paper published by Bornholdt and Rohlf in 2000 [22]. They built on an observation by Christensen et al. [23], who investigated the dynamics of a simple dynamical model for extremal optimization on complex networks. In the penultimate paragraph of their paper, Christensen et al. remarked that letting the structure of the network coevolve with the dynamics on the network led to a peculiar self-organization, such that topological properties of the network approached a critical point where the dynamics on the network changed qualitatively.
Inspired by Christensen et al., Bornholdt and Rohlf proposed a different model in which the self-organization toward the critical state could be understood in greater detail. Their investigation showed that the dynamical processes taking place on the network effectively explored the network topology and thereby made topological information available in every network node. This information then fed into the local topological evolution and steered the dynamics toward the critical state. Thus a global self-organization is possible through the interplay of two local processes. Importantly, the paper of Bornholdt and Rohlf demonstrated that this is not only the case in rare, specifically engineered examples, but should be expected under fairly general conditions. Self-organized criticality is interesting because it can be argued that every information-processing system should be in a critical state. It was therefore suspected that the brain, too, should reside in a critical state [24]. The mechanism of Bornholdt and Rohlf provided a plausible explanation of how criticality in the brain could be achieved. This "criticality hypothesis" [25] was subsequently supported by further models [26,27] and laboratory experiments [28,29].
Today there is growing evidence that self-organized criticality is a central process for brain functionality. Adaptive network models remain a major tool for understanding this process. In the neural context, understanding self-organized criticality in adaptive networks is thus paving the way to new diagnostic tools and a deeper understanding of neural disorders [30]. Furthermore, understanding self-organized criticality in biological neural networks is thought to hold the key to artificial systems that can self-organize to a state where they can process information. This may enable the use of future nano-scale electronic components that are too small to arrange precisely using photolithography, and thus have to use adaptive principles to self-tune to a functional state after quasi-random assembly.
Epidemics on Adaptive Networks
While adaptive self-organized criticality requires dynamics on different time scales, other dynamical phenomena in adaptive networks occur when topology and node states evolve simultaneously. The resulting interplay has been investigated in detail in a class of epidemiological models where the agents rewire their social contacts in response to the epidemic state of other agents.
The first adaptive-network-based epidemic model was the adaptive Susceptible-Infected-Susceptible (SIS) model studied by Gross et al. [31]. By a so-called moment closure approximation, the authors were able to compute transition points in the model analytically. The main value of this work was to provide detailed analytical insights into the emergence of system-level phenomena from the node-level coevolution. Today, the model remains a benchmark for the performance of analytical approximations to adaptive networks [32][33][34][35]. Furthermore, it triggered a large body of subsequent investigations into the effect of social responses to epidemics on disease propagation and vaccination strategies [36][37][38][39].
Adaptive networks have produced significant implications for real-world epidemiological practice, as they capture more realistic dynamics of social networks where people tend to alter social behaviors according to epidemiological states of their neighbors [40]. For example, Epstein et al. [41] and Funk et al. [42] considered a spatial context for the influence of human behavior in the outbreak of epidemics (although the former did not explicitly use network models). Also, Shaw and Schwartz [43] recently showed that vaccine control of a disease is significantly more effective in adaptive social networks than in static ones, because of individuals' adaptive behavioral responses to the vaccine application.
Adaptive Opinion Formation and Collective Behavior
Another active direction in adaptive networks research focuses on models of collective opinion formation. These models describe the diffusion of competing opinions through a networked population, where agents can modify their contacts depending on the opinions held by their neighbors. Two similar pioneering models in this direction were published by Holme and Newman [44] and Zanette and Gil [45] in 2006.
A central question in opinion formation is whether the coevolutionary dynamics will eventually lead to consensus or to a fragmentation splitting the population into two disconnected camps. The transition point between these two long-term outcomes is known as the fragmentation transition. The simplest and best-understood model exhibiting the fragmentation transition is the adaptive voter model [46]. A detailed understanding of the fragmentation transition in this model was gained through the work of Vazquez et al. [47] and the independent parallel study of Kimura and Hayakawa [48].
Although the adaptive voter model is similar to the adaptive SIS model, analytical tools that perform well for the SIS model yield poor results for the voter model [48]. Nevertheless, the transition point can be computed analytically, using a different approach [49,50].
It was sometimes criticized that mathematical models of opinion formation fall short of the complexity of opinion formation processes in the real world, and that hence no connection to real-world observations and experiments can be made. However, Centola et al. [51] studied agent-based adaptive network models of more realistic cultural drift and dissemination processes, finding similar dynamics including the fragmentation transition. Centola also experimentally examined how social network structures interact with human behaviors [52,53]. More recently, the works of Huepe et al. [54] and Couzin et al. [55] showed that voter-like models can be used to understand the dynamics of decision making in the collective motion of swarms of locusts [54] and schools of fish [55]. Their studies demonstrated that analytically tractable adaptive network models could predict the result of laboratory experiments.
Social Games on Adaptive Networks
Besides opinion formation, also other types of social dynamics have been investigated on adaptive networks. In particular, many adaptive extensions of classical game theoretical models have been proposed.
Three early works that appeared already in 2000 are a study of the minority game on adaptive networks by Paczuski, Bassler, and Corral [56], an exploration of various coordination and cooperation games by Skyrms and Pemantle [57], and a study of the Prisoner's Dilemma by Zimmermann et al. [58]. Another influential work is a paper by Bornholdt and Ebel, which remains unpublished but is available as a preprint [59].
The papers above triggered a large body of subsequent work that explored how coevolutionary dynamics affects the evolution of cooperation in adaptive networks. Notable examples are the work of Pacheco et al. [60] and van Segbroeck et al. [61], who demonstrated clearly that coevolution can lead to increased levels of cooperation; Poncela et al. [62], who showed that coevolutionary dynamics can facilitate cooperation not only by building up beneficial structures, but through the dynamics of growth itself; and Zschaler et al. [63], who identified an unconventional dynamical mechanism leading to full cooperation.
Research in adaptive networks also gave rise to a different class of games, where agents do not aim to optimize some abstract payoff, but struggle for an advantageous position in the network. The earliest example of these adaptive network formation games is perhaps the paper of Bala and Goyal [64], published in 2001. Another early paper is Holme and Ghoshal's model [65], where the nodes tried to maximize their centralities by adaptively changing their links based on locally available information, without incurring excessive costs (i.e., maintaining too many connections). The resulting time evolution of the network was highly nontrivial, involving a cascade of strategic and topological changes and leading to a network state close to the transition between well-connected and fragmented states. A recent work by Do et al. [66] presents an analytical investigation of network formation and cooperation on an adaptive weighted network.
Organizational Dynamics as Adaptive Networks
Applications of adaptive networks do not stop at abstract social models like those reviewed above. One of the latest application areas of adaptive networks is the computational modeling of complex organizational behavior, including the evolution of organizational networks, information/knowledge/culture sharing and trust formation within a group or corporation. Studies on organizational network structures actually have several decades of history (including the well-known structural holes argument by Burt [67]), but computational simulation studies of organizational adaptive networks have begun only recently, e.g., the work by Buskens and Van de Rijt on the simulation of social network evolution by actors striving for structural holes [68].
More recent computational models of organizational adaptive networks are hybrids of dynamical networks and agent-based models, where mechanisms of the coevolution of network topologies and node states can be a lot more complex and detailed than other more abstract mathematical models. Such models are therefore hard to study analytically, yet systematic computational simulations provide equally powerful tools of investigation. Adaptive network models are still quite novel in management and organizational sciences, and thus the relevant literature has just begun to develop.
For example, Dionne et al. [69] developed an agent-based model of team development dynamics, where agents (nodes) exchange their knowledge through social ties and then update their self-confidence and trust to other team members dynamically. In their model, the self-confidence (node state) and trust (link weight) were represented not by a simple scalar number, but by a complex function defined over a continuous knowledge domain. Computational simulations illustrated the nontrivial effects of team network topology and other parameters on the overall team performance after the team development process.
Another computational model addressing organizational dynamics at a larger scale was developed by Lin and Desouza [70] on the coevolution of informal organizational network and individual behavior. In their model, a node state includes behavioral patterns and knowledge an individual has, and the knowledge is transferred through informal social links that are changed adaptively. Computational simulations showed that knowledgeable individuals do not necessarily gain many connections in the network, and that when high knowledge diversity exists in the organization, the network tends to evolve into one with small characteristic path lengths.
Our most recent work on cultural integration in corporate merger [71] also models organizational dynamics as adaptive networks, which will be discussed in more detail in Section 6.
Note that the literature introduced in this section is not meant to be a comprehensive review of adaptive network research. More extensive information about the literature and other resources can be found online [72].
Generative Network Automata
In this and the following sections, we present some of our recent work on computational modeling of adaptive networks and its applications to complex systems.
To provide a useful modeling framework for adaptive network dynamics, we have proposed to use graph rewriting systems [73,74] as a means of uniform representation of state-topology coevolution. This framework, called Generative Network Automata (GNA), is among the first to systematically integrate graph rewritings in the representation and computation of complex network dynamics that involve both state transition and topological transformation.
Definitions
A working definition of GNA is a network made of dynamical nodes and directed links between them. Undirected links can also be represented by a pair of directed links placed symmetrically between nodes. Each node takes one of the (finitely or infinitely many) possible states defined by a node state set S_n. The links describe referential relationships between the nodes, specifying how the nodes affect each other in state transition and topological transformation. Each link may also take one of the possible states in a link state set S_l. A configuration of GNA at a specific time t is a combination of states and topologies of the network, which is formally given by the following:

• V_t: A finite set of nodes of the network at time t. While usually assumed to be time-invariant in conventional dynamical systems theory, this set can dynamically change in the GNA framework due to additions and removals of nodes.
• C_t : V_t → S_n: A map from the node set to the node state set S_n. This describes the global state assignment on the network at time t. If local states are scalar numbers, this can be represented as a simple vector whose size may vary over time.
• L_t : V_t → {V_t × S_l}*: A map from the node set to a list of destinations of outgoing links and the states of these links, where S_l is the link state set. This represents the global topology of the network at time t, which may also vary over time.
States and topologies of GNA are updated through repetitive graph rewriting events, each of which consists of the following three steps:

1. Extraction of the part of the GNA (subGNA) that will be subject to change.
2. Production of a new subGNA that will replace the subGNA selected above.
3. Embedding of the new subGNA into the rest of the whole GNA.
The temporal dynamics of GNA can therefore be formally defined by the triplet ⟨E, R, I⟩:

• E: An extraction mechanism that determines which part of the GNA is selected for updating. It is defined as a function that takes the whole GNA configuration and returns a specific subGNA in it to be replaced. It may be deterministic or stochastic.
• R: A replacement mechanism that produces a new subGNA from the subGNA selected by E and also specifies the correspondence of nodes between the old and new subGNAs. It is defined as a function that takes a subGNA configuration and returns a pair of a new subGNA configuration and a mapping between nodes in the old subGNA and nodes in the new subGNA. It may be deterministic or stochastic. The new subGNA produced by R is embedded into the rest of the GNA according to the node correspondence also specified by R.

(Figure 1 caption, fragment: in the example shown, the top gray node in the old subGNA has no corresponding node in the new subGNA, so the bridge links that were connected to that node are removed; panel (d) shows the updated configuration after this rewriting event.)
• I: An initial configuration of GNA.
The triplet ⟨E, R, I⟩ above is sufficient to uniquely define a specific GNA model. The entire picture of a rewriting event is illustrated in Figure 1, which visually shows how these mechanisms work together.
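To make the ⟨E, R, I⟩ structure concrete, below is a minimal, self-contained sketch in Python with NetworkX. The particular E and R (pick one random node, flip its binary state, and rewire one outgoing link) are toy assumptions for illustration, not a model from this paper.

```python
# Minimal GNA-style skeleton: stochastic extraction mechanism E, toy
# replacement mechanism R, and initial configuration I. E and R here
# are illustrative toys, not a model from the paper.
import random
import networkx as nx

def E(G):
    """Extraction mechanism: select the subGNA to rewrite (here, one node)."""
    return random.choice(list(G.nodes))

def R(G, v):
    """Replacement mechanism: update the node's state and local topology."""
    G.nodes[v]["state"] = 1 - G.nodes[v]["state"]       # flip binary state
    if G.out_degree(v) > 0:                             # rewire one link
        old = random.choice(list(G.successors(v)))
        new = random.choice([u for u in G.nodes if u != v])
        G.remove_edge(v, old)
        G.add_edge(v, new)

def run_gna(n=20, steps=200, seed=1):
    random.seed(seed)
    G = nx.gnp_random_graph(n, 0.15, directed=True, seed=seed)  # initial I
    for v in G.nodes:
        G.nodes[v]["state"] = random.randint(0, 1)
    for _ in range(steps):                              # asynchronous events
        R(G, E(G))
    return G

G = run_gna()
print(sum(nx.get_node_attributes(G, "state").values()), "of", len(G),
      "nodes in state 1")
```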
This rewriting process, in general, may not be applied synchronously to all nodes or subGNAs in a network, because simultaneous modifications of local network topologies in more than one place may cause conflicting results that are inconsistent with each other. This limitation will not apply, though, when there is no possibility of topological conflicts, e.g., when the rewriting rules are all context-free, or when GNA is used to simulate conventional dynamical networks that involve only local state changes but no topological changes.
Uniqueness and Generality of GNA
The extraction and replacement mechanisms (E and R) may each be defined as either deterministic or stochastic, in contrast to typical deterministic graph grammatical systems [75]. A stochastic representation of GNA dynamics will be particularly useful when applied to the modeling of real-world complex network data, in which a considerable amount of random fluctuations and observation errors are inevitable.
Also, the GNA framework is unique in that the mechanism of subGNA extraction is explicitly described in the formalism as an algorithm E, rather than implicitly assumed outside the replacement rules, as other graph rewriting systems typically do (e.g., [76]). Such algorithmic specification allows more flexibility in representing diverse network evolution and less computational complexity in implementing simulations, significantly broadening the areas of application. For example, the preferential attachment mechanism widely used in network science to construct scale-free networks is hard to describe with pure graph grammars but can be easily written in algorithmic form in GNA.
The GNA framework is highly general and flexible, so that many existing dynamical network models can be represented and simulated within it. For example, if R always conserves local network topologies and modifies states of nodes only, then the resulting GNA is a conventional dynamical network model, including cellular automata, artificial neural networks, and random Boolean networks (Figure 2 (a), (b)). A straightforward application of GNA typically comes with asynchronous updating schemes, as introduced above. Since asynchronous automata can emulate synchronous automata [77], the GNA framework covers the whole class of dynamics that can be produced by conventional dynamical network models. Moreover, as mentioned earlier, synchronous updating schemes could also be implemented in GNA for this particular class of models, because they involve only state changes on each localized node but no topological transformation. On the other hand, many network growth models developed in network science can also be represented as GNA if appropriate assumptions are implemented in the subGNA extraction mechanism E and if the replacement mechanism R causes no change in local states of nodes (Figure 2 (c)).

(Figure 2 caption, fragment: each node's own state-transition rule is embedded as part of its state, and the replacement mechanism R refers to that information when calculating the next state of a node. (c) Simulation of a network growth model with the Barabási-Albert preferential attachment scheme [15]; time flows from left to right; each new node is attached to the network with one link; the extraction mechanism E determines the place of attachment preferentially based on node degrees, which causes the formation of a scale-free network in the long run.)
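As a concrete illustration of the last point, a Barabási-Albert-style growth model can be written in GNA form with an algorithmic E and a state-preserving R; the sketch below is a toy rendering of the idea behind Figure 2 (c), not code from the paper.

```python
# Sketch: Barabasi-Albert-style growth in GNA form. E picks the
# attachment point preferentially by degree; R attaches one new node
# there without changing any node states. Illustrative toy code.
import random
import networkx as nx

def E_preferential(G):
    """Extraction: choose an existing node with probability ~ its degree."""
    nodes, degs = zip(*G.degree())
    return random.choices(nodes, weights=degs, k=1)[0]

def R_attach(G, target):
    """Replacement: add a new node and link it to the extracted node."""
    new = G.number_of_nodes()
    G.add_edge(new, target)

G = nx.path_graph(2)                 # initial configuration I: two linked nodes
for _ in range(500):
    R_attach(G, E_preferential(G))
print(max(d for _, d in G.degree()))  # hubs emerge: heavy-tailed degrees
```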
We also conducted extensive computational experiments with simple binary-state GNA, which revealed several distinct types of GNA dynamics, illustrating the richness and subtlety of this modeling framework [74].
Application I: Dynamics of Operational Networks
In this section, we consider an application of adaptive network models to socio-technical systems, a special type of complex system comprising social and technological components, or a combination of both types in one entity [78,79]. The services provided by socio-technical systems can be categorized into two main functions: (1) detection of a significant event, and (2) execution of an appropriate response action. The second function involves the creation of a new network between system components that will be called upon to execute a response. This new network, which dynamically develops on the nodes of the existing network, is termed an operational network [80].
In what follows, an adaptive network-based model of the operational network will be illustrated using the example of the Canadian Arctic Search and Rescue (SAR) system. The case log of a real incident in the Arctic will be used to develop and analyze a sample operational network.
SARnet: an Adaptive Network Model of the Canadian Arctic SAR System
The Canadian Arctic Search and Rescue (SAR) system comprises a large number of highly specialized SAR assets that are trained or designed to provide a comprehensive range of SAR services. These are Canadian Coast Guard (CCG) officers, teams of SAR technicians, Joint Rescue Coordination Centres (JRCCs), aircraft and ships equipped with crews, and various information and communication systems. A detailed description of the Canadian Arctic SAR system is given in [81]. SARnet is a network model of the Canadian Arctic SAR system that comprises multiple networks with embedded heterogeneous agents, where agents are SAR assets. Unlike the agents of other typical social network models, the heterogeneous agents of the SAR system cannot easily be re-trained to replace other agents. The agent specialization results in a distinctive pattern of network dynamics, as we elaborate below.
SARnet distinguishes between five classes of agents according to their specialization (sensor, router, actor, database, and controller; see Table 2), six environmental realms in which these agents operate (maritime, land, air, space, cyber, and cognitive), and four SAR operational domains according to traditional subdivision of SAR services (Air, Maritime, Ground, and Joint SAR). In addition, SARnet represents such agent properties as skill sets, access to resources, home organizations, and technical specifications.
The SARnet agent is represented by a string of data of dimension N, i.e., σ = (σ_1, σ_2, ..., σ_N), where σ_i is a binary, categorical or continuous variable that represents a property of the agent.
We say that agents belong to the same heterotype if they are identical in the first several key positions of string σ. The distribution of the numbers of agent heterotypes can be used to measure the agent heterogeneity. If K is the number of heterotypes and X_k is the fraction of agents of heterotype k (k = 1, ..., K), then the network entropy can be defined as follows:

S = −(1/ln K) ∑_{k=1}^{K} X_k ln X_k    (2)

In Eq. 2, the network entropy S is normalized by its maximum value S_max = ln K. As follows from Eq. 2, S ∈ [0, 1]. The minimum value S = 0 corresponds to a network composed of one heterotype. The maximum value S = 1 corresponds to a network composed of agents evenly distributed between all K heterotypes (i.e., X_k = 1/K for k = 1, ..., K). As the network entropy approaches 1, the agent distribution between heterotypes becomes uniform.

On a day-to-day basis, SAR assets are connected in a standby network, which represents the standby posture of the system [81]. The operational network dynamically develops on the nodes of the standby network in response to a particular SAR incident. It links SAR assets that are called upon to provide specified SAR services. A responsible JRCC initiates a SAR response by appointing one of the controller agents as the Search Master. The Search Master is responsible for the SAR operation in question until closure of the case. The sequence of services provided after a distress alert is received follows prescribed protocols and procedures, which serve as a blueprint for tasking SAR assets based on their specialization and availability. The nature and size of the incident (e.g., location of the crash site and number of people on board) also determine the choice of SAR assets being called upon. The dynamics of the operational network differ from those of the standby network, as the architecture of the former evolves at the time scale of minutes or hours instead of months or even years, as in the latter case.
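Eq. 2 translates directly into code. In the sketch below, heterotypes are read off the first few positions of each agent's property string σ; the key length and the toy strings are illustrative assumptions.

```python
# Normalized network entropy of Eq. 2: S = -(1/ln K) * sum_k X_k ln X_k.
# Heterotypes are keyed on the first few positions of each agent's
# property string; key_len and the toy strings are illustrative choices.
import math
from collections import Counter

def network_entropy(agents, key_len=3):
    """agents: list of property strings sigma; returns S in [0, 1]."""
    counts = Counter(a[:key_len] for a in agents)   # heterotype sizes
    K, n = len(counts), len(agents)
    if K <= 1:
        return 0.0                                  # one heterotype: S = 0
    fractions = (c / n for c in counts.values())
    return -sum(x * math.log(x) for x in fractions) / math.log(K)

agents = ["SAR-M-x", "SAR-M-y", "ACT-L-z", "RTR-A-w"]  # hypothetical strings
print(round(network_entropy(agents), 3))               # ~0.946
```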
We developed the SARnet simulation software, called OpNetSim, for automated generation of operational networks, which is described in [82]. OpNetSim has its theoretical basis in GNA [73,74]. The simulation code was developed in Python, and NetworkX [83] was used for network representation and analysis. The network dynamics are described as a set of possible rewriting events. A rewriting event is defined as the establishment of a new link between two agents, possibly involving changes of their states.
Each possible event is specified by eight properties, including the following:

4. Link type: Type of the event (i.e., the interaction between the two agents). The following three types are allowed:
   • "Request": The source agent requests specific information from the destination agent.
   • "Flow": The source agent sends specific information to the destination agent.
   • "Task": The source agent commands the destination agent to do a particular task.
5. Knowledge required: (Optional) List of internal variables the source agent needs to have in order for the event to occur.
6. Knowledge transferred: (Optional) List of internal variables whose values are requested or shared between the two agents during a "Request" or "Flow" event.
7. Duration: Amount of time the event takes.
8. Duration variation: Amount of stochastic variation for the duration.

A hypothetical sketch of such an event specification follows.
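The paper does not reproduce OpNetSim's actual input format; the following is a hedged sketch of how one rewriting-event specification with the properties above might be encoded in Python (the field names are assumptions, not OpNetSim's real schema).

# Hypothetical encoding of one OpNetSim rewriting event.
event_spec = {
    "link_type": "Request",            # one of "Request", "Flow", "Task"
    "knowledge_required": ["crash_site_location"],    # source must know this
    "knowledge_transferred": ["crash_site_location"], # shared on completion
    "duration": 5,                     # nominal duration in time units
    "duration_variation": 2,           # stochastic spread around the duration
}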
OpNetSim reads the set of possible rewriting events given in the above format. The simulation algorithm proceeds in the following steps (a minimal sketch of this loop is given after the list):

1. Select all the events that are currently executable (i.e., all conditions are met and the source agent has all the knowledge required).
2. Make the selected events active and set a duration time (with stochastic variation added according to the duration variation property of the event) in their respective internal time counters.
3. Decrease the time counters of all of the active events by a unit time.
4. If the counter of any of those active events hits zero, establish a new directed link from the source agent to the destination agent in the network. Also, depending on the type of the event, update the internal variables of both agents. Then deactivate the event.
5. Repeat the process above until no more executable events exist.
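As an illustration only, here is a minimal Python/NetworkX sketch of the event loop described above; the Event objects, their is_executable/fire methods, and the random duration model are assumptions, not OpNetSim's actual code.

import random
import networkx as nx

def run_op_net_sim(agents, events, max_steps=10_000):
    """Minimal sketch of the OpNetSim-style event loop (assumed API).

    `events` are objects with is_executable(G), duration, duration_variation,
    source, destination, and fire(G), which updates agent states.
    """
    G = nx.DiGraph()
    G.add_nodes_from(agents)
    active = {}  # event -> remaining time counter
    for _ in range(max_steps):
        # Steps 1-2: activate executable events with stochastic durations.
        for ev in events:
            if ev not in active and ev.is_executable(G):
                active[ev] = ev.duration + random.uniform(
                    -ev.duration_variation, ev.duration_variation)
        if not active:
            break  # Step 5: no more executable events
        # Step 3: advance time by one unit.
        for ev in list(active):
            active[ev] -= 1
            # Step 4: a completed event rewrites the network.
            if active[ev] <= 0:
                G.add_edge(ev.source, ev.destination)
                ev.fire(G)  # update internal variables of both agents
                del active[ev]
    return G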
The operational network emerges as the simulation progresses and more agents are connected by information exchange and task allocation. OpNetSim implements an interactive graphical user interface (GUI) through which the user can operate the simulation and inspect its status (Figure 3).
The December 2008 SAR Incident in the Arctic
OpNetSim was used to simulate the operational network of a real SAR incident in the Arctic that occurred in December 2008.
On 7 December 2008, a small two-engine Cessna plane with two people on board crash-landed in the Arctic approximately 120 nautical miles (nm) from Iqaluit, Nunavut. Three mayday calls were intercepted by a commercial aircraft and by a Royal Canadian Air Force (RCAF) aircraft, and then relayed to JRCC Halifax. (JRCC Halifax, located in Halifax, NS, is the JRCC responsible for that sector of the Arctic.) The Canadian SAR system mounted a response to the incident, which involved three RCAF SAR squadrons, Canadian Coast Guard (CCG) resources (including database resources and marine communication systems), regional units of the Royal Canadian Mounted Police (RCMP) and the Civil Air Search and Rescue Association (CASARA), local police, air-ground-air communication systems, and private-sector assets. One of the CCG officers on duty was appointed as the Search Master to coordinate the SAR operation. In less than 18 hours from the time of the first mayday call, the two survivors were rescued (with mild frostbite, but otherwise in good condition).
The concept of the operational network was used to represent and analyze the operational architecture of the SAR response to this incident, and to identify factors contributing to a successful outcome.
The agent heterogeneity was identified as the main driving mechanism for the development of the operational network. Agent attributes such as agent class, realm, and SAR domain influenced the formation of the network architecture. The agents' skill sets and access to resources, as well as the crash-site information, also played a role in shaping the operational network. The architecture of the resulting network evolved at the time scale of minutes or hours, instead of the months or even years characteristic of the standby network. Figure 4 shows snapshots of the actual operational network one, three, and 18 hours after the response initiation, respectively.
The number of agent heterotypes increased in the course of the SAR response, meaning that the network heterogeneity was also increasing. At the same time, the distribution of agents between heterotypes became less balanced, as follows from a decline in normalized entropy after 6 hours of the response. Table 3 summarizes the development of the operational network after 1, 3, and 18 hours. According to our analysis, all major players were quickly identified and added to the network at early stages of its development. By the end of the first hour of the response, the operational network included 28 agents and 41 links, i.e., 30% of the final network. After the first three hours, 40 agents and 64 links, i.e., nearly 50% of the final network, were in place. By the time the survivors were rescued (i.e., 18 hours after the response initiation), more than 80% of the operational network had developed.
The Search Master (which was an isolate in the standby network) quickly became the most influential entity of the operational network, coming first in all node-level measures, including standard Social Network Analysis measures of degree centrality and extended measures of cognitive demand and shared situation awareness (see [84] for measure definitions). Its scores were almost an order of magnitude higher than those of the second-ranked agent. The average communication speed between any two nodes within the Search Master's sphere of influence (67% of the network) was close to 0.5, and the average speed with which the Search Master interacted with 67% of all agents was 1.0, the maximum value for this measure.
The high centralization of the operational network was identified as a contributing factor to operational effectiveness. However, it can also be viewed as a vulnerability factor. According to our analysis results, the removal of the Search Master would lead to maximum network fragmentation, with almost 80% of the SAR assets becoming disconnected. Moreover, there was no other entity in the network capable of assuming the leadership role in the SAR response in question. A detailed summary of the network analysis results can be found in [81].
We examined the actual log of inter-agent communications during this SAR incident, and manually reconstructed the rewriting rules that drove the operational network formation. OpNetSim was then used to simulate the temporal development of the operational network under several hypothetical scenarios. Figure 5 shows snapshots of the simulated operational network produced by OpNetSim. Since the simulation algorithm involves stochasticity, the topology of the simulated network does not exactly match the actual one, but the general trends of increasing agent heterogeneity and of concentration on the Search Master node were correctly represented in this model.
Application II: Automated Rule Discovery from Empirical Network Evolution Data
In the previous section, we developed an adaptive network model based on our knowledge and understanding of the local dynamics of node and link interactions. However, it has remained an open question how one could derive the dynamical rules of an adaptive network model directly from given empirical data of network evolution.
In this section, we describe an algorithm that automatically discovers a set of dynamical rules that best captures the state transitions and topological transformations expressed in the empirical data [85]. Network evolution is formulated using the GNA framework, and the subnetwork extraction and replacement phases are analyzed separately. Within the scope of this paper, we simplify the problem by requiring the data to satisfy the following:

1. A given data set is a series of configurations of labeled directed or undirected networks in which labels (states) and topologies coevolve over discrete time steps (Figure 6(a)).
2. The data set contains information about the correspondence of nodes between every pair of two successive time points (Figure 6(a)).
3. States are discrete, finite, and assigned only to nodes, not to links.
4. Changes that take place between successive time points are reasonably small, so that they can be identified as one small network rewriting event per time step.
5. The extraction mechanism E and the replacement mechanism R are memoryless, i.e., they produce outputs solely based on the inputs given to them.
We note that the GNA framework has a significant advantage for the algorithm design. It formulates the network evolution using two separate phases, i.e., the extraction of subGNA (performed by E) and its replacement (performed by R). Therefore, the estimation and construction of models of E and R can be conducted independently and concurrently using separate training data sets, which will make the algorithm simple and tractable.
Proposed Algorithm
A general procedure of the proposed algorithm is as follows (Figure 6):

1. Preprocess the original network evolution data using data-dependent heuristics, if necessary, so that they meet all the aforementioned requirements.
2. Detect the difference between each pair of configurations at two successive time points (G_t, G_{t+1}) and represent it as a rewriting event s_t ⇒ r_t (Figure 6(b)), where s_t is a subGNA to be replaced, r_t is another subGNA that replaces s_t, and "⇒" denotes the correspondence from nodes in s_t to nodes in r_t. The difference between two configurations (G_t = ⟨V_t, C_t, L_t⟩, G_{t+1} = ⟨V_{t+1}, C_{t+1}, L_{t+1}⟩) is detected in the following way:
   (a) Let A be a set of nodes in G_t, initially empty.
   (b) Let B be a set of nodes in G_{t+1}, initially empty.
   (c) Add to A and B all the nodes whose states or neighbors changed between G_t and G_{t+1}. At this point, A and B contain the nodes that experienced some changes (enclosed by solid lines in Figure 6(b)).
   (d) Add to A and B all the nodes that have a link to any of the nodes already in A and B, respectively. This step includes in A and B additional nodes that may have influenced the rewriting event (enclosed by dashed lines in Figure 6(b)).
   (e) Let s_t and r_t be the subgraphs of G_t and G_{t+1} induced by the nodes in A and B, respectively. The detected rewriting event is then represented as s_t ⇒ r_t, where "⇒" is the set of all the node correspondences between s_t and r_t present in the original data.
3. Construct a model of the extraction mechanism E by using (G_t, s_t) as training data, where G_t is the input given to E and s_t the output that E should produce (Figure 6(c),(e)). This step is the most challenging part of this algorithm development effort. The task to be achieved in this step is to identify an unknown mechanism that chooses a subset of a given set of nodes. Exact identification of an unknown computational mechanism is theoretically not possible in general. Here, we assume several predefined candidate mechanisms (e.g., random selection, preferential selection based on node degrees, motif-based selection, etc.) and calculate the likelihood of each extraction result in the training data under each candidate mechanism. These likelihoods are multiplied sequentially over the whole training data to evaluate how likely the given training data could result from each of the candidate mechanisms. If a mechanism includes parameters, they are optimized to attain the maximal probability. The mechanism with the highest likelihood is then returned as the estimated mechanism of E. (A minimal sketch of this likelihood comparison is given after this procedure.)
4. Construct a model of the replacement mechanism R by using (s_t, s_t ⇒ r_t) as training data, where s_t is the input given to R and s_t ⇒ r_t the output R should produce (Figure 6(d),(f)). In this step, the task can be achieved in a much simpler manner than in step 3 (though technically it still remains the identification of an unknown mechanism). This is because a single rewriting event typically involves just a few nodes, so the number of possible inputs given to the replacement mechanism R is virtually finite, in contrast to the number of possible inputs to E, which is virtually infinite. Therefore we use straightforward pattern matching methods to construct a model of R from the data. Specifically, the algorithm constructs R as a simple procedure that searches for a rewriting event in the training data whose left-hand side matches the given input. If only one such event is found, the event itself is the output of R. If multiple events are found, the output is determined either deterministically (e.g., the event with the greatest frequency) or stochastically (e.g., random selection with weights proportional to event frequencies). If no event is found, either the identity ("input ⇒ input"; no change) is returned, or similar events are sought using partial graph matching schemes.
5. Construct a complete GNA model by combining the results of steps 3 and 4 together with the initial configuration I (Figure 6(g)).
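The following is a minimal sketch, not PyGNA's actual code, of the likelihood comparison in step 3 for two assumed candidate mechanisms (uniform-random versus degree-preferential node selection); the training-data format and function names are assumptions.

import math
import networkx as nx

def log_likelihood(training_pairs, weight):
    """Sum of log-probabilities of the observed extractions, where
    weight(G, v) gives a node's unnormalized selection weight."""
    total = 0.0
    for G, extracted_nodes in training_pairs:  # (G_t, nodes of s_t) pairs
        Z = sum(weight(G, v) for v in G)       # normalizing constant
        for v in extracted_nodes:
            total += math.log(weight(G, v) / Z)
    return total

candidates = {
    "random": lambda G, v: 1.0,                      # uniform selection
    "degree-preferential": lambda G, v: G.degree(v) + 1e-9,
}

def estimate_extraction_mechanism(training_pairs):
    # Return the name of the candidate mechanism with highest likelihood.
    return max(candidates, key=lambda name:
               log_likelihood(training_pairs, candidates[name]))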
Software Implementation
We have designed the details of the algorithm described above and implemented them in Python with NetworkX and GraphML. The software, called PyGNA [86,87], is designed to automatically discover a set of dynamical rules that best captures both the state transitions and the topological transformations in data of the spatio-temporal evolution of a complex network. PyGNA is still at its alpha stage, but is publicly available from SourceForge.net.
We conducted preliminary experiments applying PyGNA to data generated by abstract adaptive network models, in order to test whether it could correctly identify the actual network generation mechanisms used to produce the input data. The following four abstract network models were used as inputs to PyGNA (a sketch of how such test inputs can be generated is given below):

(a) Barabási-Albert network, grown using the standard degree-based preferential attachment method [15].
(b) "Degree-state" network, grown by degree-based preferential attachment whose mechanism is influenced by the randomly determined state of the newcomer node. The state of the target node can also be altered by the attachment.
(c) "State-based" network, grown by repeated random edge addition between a node that has a particular state (shown in red in Figure 7) and any other randomly selected node. New isolated nodes with randomly selected states are also continuously introduced to the network.
(d) "Forest fire" network, generated by the method proposed in [88].

Figure 7 shows typical results that visually compare the original input networks and the networks reconstructed by PyGNA. For the Barabási-Albert (a), degree-state (b), and state-based (c) networks, the input and reconstructed networks have visually similar structures. PyGNA also correctly identified that the growth of those networks was determined by degrees (for (a)), degrees and states (for (b)), and states (for (c)). For the forest fire network (d), however, PyGNA failed to capture the unique topological characteristics of the original input network, because of the complexity of the original network generation method.
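As a hedged illustration of generating such test inputs (not PyGNA's own experiment scripts), the snippet below builds a Barabási-Albert network and a simple "state-based" growth network with NetworkX; the state attribute name and growth parameters are assumptions.

import random
import networkx as nx

# (a) Barabasi-Albert network: standard degree-preferential attachment.
ba = nx.barabasi_albert_graph(n=200, m=1, seed=42)

# (c) "State-based" growth: repeatedly link a node in a special state
# to a random partner, while new nodes with random states keep arriving.
sb = nx.Graph()
sb.add_node(0, state="special")
for step in range(1, 200):
    sb.add_node(step, state=random.choice(["special", "ordinary"]))
    special = [v for v, d in sb.nodes(data=True) if d["state"] == "special"]
    u = random.choice(special)
    w = random.choice([v for v in sb if v != u])
    sb.add_edge(u, w)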
We also quantified the accuracy of the reconstructed network models by measuring the distance between the probability distributions of extracted subgraphs in the original and simulated networks. Specifically, for the original input data and the reconstructed network simulation results, we counted how many times each of the different kinds of subgraphs was selected for graph rewriting events, and then computed the Bhattacharyya distance [89] between the two distributions, defined as

D_B = -ln Σ_{s∈S} √(p(s) q(s)),

where s is a unique subgraph, S the set of all extracted subgraphs, and p(s) and q(s) the probability distributions of subgraphs extracted for rewriting in the input network and in the reconstructed network, respectively. D_B = 0 means the two distributions are exactly the same, while a higher value of D_B means they are farther apart. The results are summarized in Figure 8. For the Barabási-Albert network (a), the low D_B value indicates that the simulated network is indeed very close to the original input network. The D_B value was a little higher for the degree-state (b) and state-based (c) networks, but the overall trends of the extracted subgraph distributions were generally in agreement between the input and simulated networks. For the forest fire network, however, the extraction mechanism selected by PyGNA over-chose certain subgraphs and was unable to generate many subgraphs seen in the input data, resulting in the apparent topological difference seen in Figure 7. The D_B value for this case is therefore substantially larger than in the other three cases. These preliminary results tell us that the current algorithm in PyGNA is effective for certain types of networks while still limited for the analysis of others, especially those that involve pure randomness and/or mesoscopic topological structures such as motifs. We are currently revising and expanding our algorithm to address these issues and improve the performance of PyGNA.
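For reference, a minimal implementation of the Bhattacharyya distance above, assuming the two subgraph distributions are given as dictionaries keyed by a canonical subgraph label (the labeling scheme itself is not specified in the text):

import math

def bhattacharyya_distance(p, q):
    """D_B = -ln sum_s sqrt(p(s) * q(s)) over the union of supports."""
    support = set(p) | set(q)
    bc = sum(math.sqrt(p.get(s, 0.0) * q.get(s, 0.0)) for s in support)
    return -math.log(bc) if bc > 0 else float("inf")

# Example with toy subgraph-frequency distributions
p = {"edge": 0.7, "triangle": 0.3}
q = {"edge": 0.6, "triangle": 0.3, "star3": 0.1}
print(bhattacharyya_distance(p, q))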
Application III: Cultural Integration in Corporate Merger
The final example we present is a computational model of cultural integration taking place on a dynamically changing adaptive social network when two firms merge into one. This example is more complex than the previous two, firstly because the model has continuous link weights that adaptively change due to node state dynamics, but more importantly because the node states are far more complex than in the previous two models, in order to represent complex sociocultural aspects of agents. In this sense, this example can be better understood as a hybrid of agent-based models and adaptive network models.
It is recognized that cultural integration, or sharing a common corporate culture, is crucial for the success of corporate mergers. However, previous studies have been limited to firm-level analyses, while cultural adoption and diffusion in a merged firm actually occurs among individuals. We thus explored, using the computational model, how cultural integration emerges from the patterns of dynamic social interactions among individuals [71]. Our computer simulation model is an agent-based model operating on a dynamic network structure, where individuals (nodes) exchange elements of a corporate culture with others who are connected to them through social ties (links). In this model, we set up two merging firms, A and B, each consisting of 50 individuals. Our goal is to find initial network structures that promote or impede post-merger cultural integration. Although the number of individuals in these firms is far smaller than that of real publicly traded firms, we found that this parameter has a negligible impact on the simulation results when the network density is kept at the same level.
Representations of Corporate Cultures
We represent a corporate culture as a vector in a multi-dimensional continuous cultural space. The cultural space is composed of several cultural dimensions; each dimension represents an element of a corporate culture. We set 10 cultural dimensions for the cultural space; this number is grounded in previous empirical studies of corporate culture. For example, O'Reilly et al. [90], who investigated eight large U.S. public accounting firms, found eight dimensions of organizational culture: innovation, attention to detail, outcome orientation, aggressiveness, supportiveness, emphasis on rewards, team orientation, and decisiveness. Likewise, Chatterjee et al. [91] measured the cultural distance perceived by the top management teams of acquired firms across seven dimensions of organizational culture: innovation and action orientation, risk-taking, lateral integration, top management contact, autonomy and decision making, performance orientation, and reward orientation. Therefore, setting 10 dimensions as elements of corporate cultures is a conservative approach.
In our model, we characterize the distance between two cultures by the Euclidean distance between two vectors in the cultural space. The average cultural difference between the two merging firms is characterized as the average cultural distance between two individuals, one in Firm A and the other in Firm B. If the value of this measurement is large, the corporate culture that individuals perceive in Firm A is, on average, far different from that in Firm B. We initialized the individual cultural vectors as follows. First, two cultural "center" vectors were created for the two merging firms, and these center vectors were separated by 3.0 (in an arbitrary unit) in the cultural space. Then individual cultural vectors were created for the individuals in each firm by adding a small random number, drawn from a normal distribution with a mean of 0 and a standard deviation of 0.1 (in the same unit used above), to each component of the cultural center vector of that firm. This setting creates an initial condition in which the average between-firm cultural difference is approximately seven times larger than the average within-firm cultural difference.
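A minimal sketch of this initialization and distance measure, assuming NumPy (variable names and the direction of the center offset are illustrative):

import numpy as np

rng = np.random.default_rng(0)
DIMS, N = 10, 50

# Two firm "center" cultures separated by 3.0 in the cultural space.
center_a = rng.normal(size=DIMS)
center_b = center_a + 3.0 * np.eye(DIMS)[0]  # offset along one axis

# Individual cultures: firm center plus Gaussian noise (sd = 0.1).
firm_a = center_a + rng.normal(0.0, 0.1, size=(N, DIMS))
firm_b = center_b + rng.normal(0.0, 0.1, size=(N, DIMS))

# Average between-firm cultural distance (Euclidean).
dists = np.linalg.norm(firm_a[:, None, :] - firm_b[None, :, :], axis=-1)
print(dists.mean())  # roughly 3.0, i.e. ~7x the within-firm distance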
Adaptive Changes of Cultural States and Tie Strengths
Individuals in our model are connected to each other through directed social ties. A tie going from one individual to another works as a conduit that can transmit, from the origin node to the destination node, information and knowledge that include the elements of their corporate cultures. Each tie has a weight associated with it, called tie strength in the social network literature [92,93]. The range of possible tie strength values is bounded between 0 and 1. Corporate cultures diffuse among individuals through their ties. The algorithm for simulating the dynamics of cultural diffusion, and subsequent social network changes, is as follows.
One iteration in a simulation consists of simulated individual actions for all individuals in a sequential order (therefore there are always 100 individual actions simulated in each iteration). When it is an individual's turn to take an action, the individual first selects an information source. For 99% of the time, the individual chooses the information source from its local in-neighbors, that is, the nodes from which directed ties come to the individual. The probability for a neighbor to be selected as the information source is proportional to the strength of the tie that connects the neighbor to the individual; this represents the tendency of individuals to listen more often to others whom they trust more or with whom they have stronger connections. Otherwise (with a 1% chance), the individual chooses as the information source any individual in the connected component to which the individual belongs. If there is no existing tie from the randomly selected source to the individual, a new tie with a very weak strength (0.01) is created between them. This represents an informal, incidental communication, like a "water-cooler" conversation within an organization.
Once the information source is selected, the individual receives the source's cultural vector and then measures the distance between the received cultural vector and its own cultural vector. With a probability P_A that decreases monotonically with increasing cultural distance, the individual accepts the received culture; P_A equals 50% when the distance d between the two cultural vectors equals a characteristic cultural distance d_c. We used d_c = 0.5 for our simulations. If the individual accepts the received cultural vector, it adopts the mean of the two vectors (i.e., the sum of the two vectors divided by 2) as its new cultural vector, and the strength of the tie from the source to the individual is increased. If the individual rejects the received cultural vector, its own vector does not change, and the tie strength is decreased. In both cases the update maps the current tie strength S_current to a new strength S_new in a way that guarantees the tie strength always remains between 0 and 1. The mechanism of the update of tie strength caused by cultural acceptance or rejection is illustrated in Figure 9. If the tie strength falls below 0.01, the tie is considered insignificant and is removed from the social network.
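The exact functional forms of P_A and of the tie-strength updates are not reproduced here, so the sketch below substitutes simple forms that satisfy the stated constraints (P_A monotonically decreasing with P_A(d_c) = 0.5; updates bounded in (0, 1)); these specific formulas are assumptions, not the paper's.

import numpy as np

D_C = 0.5  # characteristic cultural distance used in the paper

def p_accept(d, d_c=D_C):
    # Assumed form: decreases monotonically in d, equals 0.5 at d = d_c.
    return 1.0 / (1.0 + d / d_c)

def interaction_step(own, received, strength, rng):
    """One acceptance/rejection step; returns (new_culture, new_strength).
    The update rules below are assumptions that keep the tie strength
    inside (0, 1), as the model requires."""
    d = float(np.linalg.norm(own - received))
    if rng.random() < p_accept(d):
        return (own + received) / 2.0, (strength + 1.0) / 2.0
    return own, strength / 2.0  # ties falling below 0.01 are later removed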
Initial Network Structures
We set the network structures within and between merging firms so that there are substantially more within-firm ties than between-firm ties at the beginning of each simulation. The number of ties within each merging firm is 490. Since the number of individuals in each firm is 50, the network density of the firm is 490/(50*49) = 0.2. The number of ties from one merging firm to the other (that is, A→B or B→A) is 50 for each direction. All tie strengths of those connections are initialized using random numbers drawn from a uniform distribution between 0 and 1.
In our computational experiments, we set two experimental parameters that control topological characteristics of the initial social network among individuals. One is what we call the within-firm concentration, denoted by w. This parameter determines the probability P_w(i) for individual i (i = 1, ..., n, with firm size n = 50 in our simulations) to be selected as an information source when within-firm ties are initially created. When w = 0, within-firm ties are uniformly distributed within the firm, so the organizational structure of the firm is "flat". For larger w, the within-firm information sources are more concentrated on a small number of individuals with greater IDs, which represents a highly centralized organizational structure of the firm, such as one with a one-man CEO. In our model, we used w = 1, 3, 5, 10, 20, and 30.

The other experimental parameter is what we call the between-firm concentration, denoted by b. This parameter determines the probability P_b(i) for individual i to be selected as a connecting person, either as origin or destination, when between-firm ties are created (which is done only after all the within-firm ties have been created); the selection depends on c_i, the within-firm closeness centrality of individual i. When b = 0, between-firm ties randomly connect individuals across firms, regardless of their social positions. For larger b, the between-firm ties are more concentrated on a small number of individuals with higher centralities, which represents the formation of top-level (only) inter-firm communication channels. In our model, we used b = 0.1, 0.5, 1, 3, and 5. Figure 10 illustrates within-firm and between-firm concentrations. Note that the above two parameters affect only the initial social network structure; as cultural integration progresses, the network topologies change dynamically in our simulations. (A sketch of this initialization is given below.)
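The exact definitions of P_w(i) and P_b(i) are not reproduced in the text; the sketch below assumes simple power-law weightings, P_w(i) proportional to i^w and P_b(i) proportional to c_i^b, which reduce to uniform selection at w = 0 and b = 0 and concentrate on high-ID or high-centrality individuals as the exponents grow, consistent with the description above.

import numpy as np
import networkx as nx

def within_firm_source_probs(n, w):
    # Assumed P_w(i) proportional to i^w; uniform when w = 0.
    ids = np.arange(1, n + 1, dtype=float)
    weights = ids ** w
    return weights / weights.sum()

def between_firm_probs(G, b):
    # Assumed P_b(i) proportional to closeness-centrality^b.
    cc = nx.closeness_centrality(G)
    nodes = list(G)
    weights = np.array([cc[v] ** b for v in nodes])
    return nodes, weights / weights.sum()

# Example: pick origins of 50 between-firm ties for a 50-person firm
# with 490 within-firm ties (density 0.2), as in the model.
firm = nx.gnm_random_graph(50, 490, directed=True, seed=1)
nodes, probs = between_firm_probs(firm, b=3)
origins = np.random.default_rng(1).choice(nodes, size=50, p=probs)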
Outcome Measurements and Results
As a primary dependent variable of our computational experiments, we measure the average cultural distance between individuals who used to belong to different pre-merger firms and who still remain in the largest connected component of the social network. If the average cultural distance decreases from its initial value, cultural integration proceeds among individuals in the merged firm.
In addition, we use three measures of the consequences of cultural integration: turnover, interpersonal conflict, and organizational communication ineffectiveness. All three measures should influence overall firm performance.
Turnover is measured by the number of individuals in the simulations who do not stay in the largest connected component of the social network. In our model, if an individual terminates all ties with his neighbors, he is considered to have left the merged firm.
Interpersonal conflict is calculated as the cultural distance across a social tie between two individuals, multiplied by their tie strength. This quantity is summed up for all the tied pairs of individuals within the largest connected component. Since tie strength can be considered to represent communication frequency [92], individuals who are strongly tied to neighbors with different perceptions of corporate culture would often encounter greater communication conflict in the workplace.
Lastly, organizational communication ineffectiveness is calculated as the cultural distance across a social tie between two individuals, multiplied by the edge betweenness of the tie between them. This quantity is, again, summed over all the tied pairs of individuals within the largest connected component. Edge betweenness is defined as the number of geodesics (shortest paths) going through an edge [93]. If a tie with high edge betweenness is filled with cultural conflict, most communication between individuals in the firm would be conflicted. As a result, information and knowledge transfer in the firm would be delayed or impeded.
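A minimal sketch of computing the interpersonal-conflict and communication-ineffectiveness measures with NetworkX, assuming node cultures are stored as vectors and tie strengths as edge attributes (attribute names are illustrative):

import numpy as np
import networkx as nx

def conflict_measures(G):
    """Returns (interpersonal_conflict, communication_ineffectiveness)
    summed over ties; G should be the largest connected component, with
    node attr 'culture' (np.ndarray) and edge attr 'strength' (float)."""
    ebc = nx.edge_betweenness_centrality(G, normalized=False)
    conflict, ineffectiveness = 0.0, 0.0
    for u, v, data in G.edges(data=True):
        d = np.linalg.norm(G.nodes[u]["culture"] - G.nodes[v]["culture"])
        conflict += d * data["strength"]
        key = (u, v) if (u, v) in ebc else (v, u)
        ineffectiveness += d * ebc[key]
    return conflict, ineffectiveness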
We implemented the simulation model and analysis tools using Python with NetworkX. The program code of the model is available from the authors upon request. We set 200 time steps in one simulation, ran 50 simulations for each experimental condition, and conducted statistical analysis of the generated simulation results. Figure 11 plots the results showing the effects of within-firm and between-firm concentrations. The highest level of cultural integration is achieved when social ties are more centralized within each merging firm and the social ties between the merging firms are less concentrated on central individuals. Additionally, the results show that within-firm and between-firm network structures significantly affect individual turnover, interpersonal conflict, and organizational communication ineffectiveness, and that these three outcome measurements do not vary in tandem. Turnover was highest when within-firm concentration was high while between-firm concentration was low, the same condition as the one that promoted cultural integration. Interpersonal conflict was highest when within-firm concentration was low, without much interaction with between-firm concentration. Organizational communication ineffectiveness was highest when both within- and between-firm concentrations were high. For more detailed discussion of the results, see [71].
Note that those findings were all outcomes of the adaptive changes of social ties in our model. The results would be different if the social network structure were fixed, as in other, more typical opinion-spreading network models.
Conclusions
As briefly reviewed above, the co-evolution of network states and topologies is an emerging research topic that has great potential and applicability to many real-world complex systems. It combines two separate dynamics, i.e., dynamic state changes on a network and topological transformations of a network, into a single picture that will allow one to better understand and represent the nature of evolving complex systems, possibly leading to new properties that were not discovered before.

Figure 11: Cultural distance and organizational dysfunctions by within-firm and between-firm concentrations obtained through simulations of our adaptive network model of corporate merger (from [71]).
The application areas of adaptive networks are now expanding to various disciplines, not only social sciences and operations research (as demonstrated by a few examples in this paper) but also biology, ecology and physical sciences. The key challenges in adaptive network research include (1) how to generate meaningful dynamical models from large-scale temporal network data, and (2) how to mathematically analyze the dynamics of adaptive networks in which the time scales of state changes and topological transformations are inseparable. We hope that the work reviewed in this paper helps indicate the future directions of this exciting field of study.
This material is based upon work supported by the US National Science Foundation under Grant No. 1027752. The development of OpNetSim was supported by the Canadian Government contract W7714-125419. | 2013-01-11T17:59:20.000Z | 2013-01-11T00:00:00.000 | {
"year": 2013,
"sha1": "702b005603cf11b32d1d09727b514241d0c92efe",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.camwa.2012.12.005",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "d68eba4de7e927bcd1c885f31c66a4c98e6e1584",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
257650389 | pes2o/s2orc | v3-fos-license | Synthesis, Characterization of New Metal Complexes of Co (II), Cu (II), Cd (II) and Ru (III) from azo ligand 5-((2-(1H-indol-2-yl)ethyl) diazinyl)-2- aminophenol, Thermal and Antioxidant Studies
Novel metal complexes of Cu(II), Co(II), Cd(II) and Ru(III) with the azo ligand 5-((2-(1H-indol-2-yl)ethyl)diazinyl)-2-aminophenol were synthesized by diazotization of tryptamine and coupling with 2-aminophenol. The structures of all the newly synthesized compounds were characterized by FT-IR, UV-Vis and mass spectroscopy and by elemental analysis, in addition to measurements of magnetic moments, molar conductance and atomic absorption. Their thermal stability was then studied using TGA and DSC curves; the DSC curve was used to calculate the thermodynamic parameters ΔH, ΔS and ΔG. Analytical data showed that all complexes have a metal:ligand ratio of 1:1. In all complexes, the ligand behaves as a tridentate ligand, binding the Cu(II), Co(II), Cd(II) and Ru(III) ions through the amine nitrogen atom, the azo group and the phenolic oxygen. The Cu(II), Co(II) and Cd(II) complexes were assigned tetrahedral geometry, while the Ru(III) complex was found to have octahedral geometry. The antioxidant activity of the metal complexes was assessed against the DPPH radical (1,1-diphenyl-2-picrylhydrazyl) and compared to that of a common natural antioxidant, gallic acid. The results demonstrated that the ligand has more antioxidant activity than the metal complexes.
Introduction:
The coordination chemistry of transition metal complexes with azo ligands is an important and interesting area of chemistry that plays an essential role in technology, industry, and biological systems 1. Azo compounds are a significant class of chemical compounds that are being studied by scientists. They have long been used as dyes and pigments because they are intensely colored. Additionally, they have received a lot of attention because of their superior thermal and optical qualities in applications such as oil-soluble lightfast dyes, ink-jet printing, and optical recording media and toners 2. Azo dyes are substances made up of at least one conjugated chromophoric azo bond (-N=N-) and one or more aromatic or heterocyclic moieties. Because they are distinguished by the presence of an azo moiety (N=N) in their structure, they make up the most significant group of disperse dyes 3. A review of the literature shows that there are many common preparation methods; a classical example uses diazonium salts, which are among the most important materials for the preparation of a large number of pure organic compounds, owing to the electrophilic character of the diazonium salt 4,5. Azo compounds are the most common class of organic dyes produced industrially, due to their wide range of applications in fields such as textile fiber dyeing, biomedical research, advanced organic synthesis, and high-technology fields such as lasers, liquid crystal displays, and electro-optical devices 6,7. They have good thermal and optical properties, are highly colored, and demonstrate significant applications such as optical data storage and nonlinear optical materials. They participate in a number of biological processes, including bacterial and fungal defense mechanisms and carcinogenesis. Metal complexes of azo compounds are also receiving a lot of interest owing to their applicability in optical computing, functional materials, dyes, and pigments 8,9. The aim of this work is to synthesize novel metal ion complexes of Cu(II), Co(II), Cd(II) and Ru(III) from the azo ligand (L), to characterize them by spectroscopic analysis, and to study their thermal decomposition and thermal stability using TGA and DSC curves.
Material and Methods:
The metal salts (CuCl2·H2O, CoCl2·6H2O, CdCl2·H2O, and RuCl3·3H2O) were obtained from commercial suppliers (Sigma-Aldrich, Merck, and others), as were 2-(1H-indol-3-yl)ethylamine (tryptamine), 2-aminophenol, sodium nitrite (NaNO2), hydrochloric acid (HCl), pure ethanol, DMSO, and NaOH. IR spectra were recorded as CsI discs on a Shimadzu 8300 FTIR spectrophotometer over the 4000-200 cm-1 range. C, H, and N elemental analyses were conducted using a Euro Vector model EA/3000, single-V.3.O-single. A Shimadzu UV-1800 spectrophotometer was employed to record electronic spectra in the 200-1100 nm range using 10-3 M solutions in DMSO at 25 °C. Metal contents were determined using a Shimadzu AA-680G atomic absorption spectrophotometer. Conductivity was measured at room temperature in DMSO solutions with a WTW conductometer. Electron impact (70 eV) mass spectra were recorded on a QP50A: DI Analysis Shimadzu QP-2010-Plus (E170Ev) spectrometer. The chloride concentration was estimated gravimetrically. Magnetic characteristics were measured with a magnetic susceptibility balance, model MSR-MKI. A Perkin-Elmer Pyris Diamond DSC/TGA was used for all thermal analyses.
Synthesis of Azo Ligand (L)
Tryptamine was dissolved in a mixture of 2 ml HCl and 10 ml of ethanol at 0-5 °C under cooling. NaNO2 (10%, 0.42 g, 0.006 mol) was added gradually so that the temperature did not rise above 5 °C. After the reaction had been stirred for approximately 30 minutes, 2-aminophenol (0.671 g, 0.006 mol) dissolved in 10 ml of ethanol was added. Then 10 ml of a 1 M NaOH solution was added, and the precipitation of a dark brown azo ligand was observed. The product was filtered, collected and dried. Its melting point was 191 °C, and its yield was 81.5% 10.
Synthesis of complexes
The metal salt (1 mmol) was dissolved in 10 ml of ethanol, and the azo ligand solution (1 mmol, 15 ml) was added drop by drop. The resulting mixture was refluxed for 2 h. The solid complexes were separated, and any unreacted components were removed by briefly immersing them in hot ethanol. The complexes were collected, dried and weighed. Scheme 1 shows the formation of the metal ion complexes.
Results and Discussion:
The results of the elemental analyses of the prepared metal complexes were in good agreement with theoretical predictions. According to the elemental analyses, the metal:ligand ratios in the complexes were 1:1. Molar conductance measurements of the complexes, which are soluble in DMSO (10-3 M solutions at 25 °C), show that all complexes are non-electrolytic in nature. Table 1 lists the physical and analytical information about the azo ligand and its metal complexes.
IR spectra
The complexes were characterized by their infrared spectra, which were compared with the spectra of the free ligand 15. The shifts of some bands towards longer or shorter wavelengths, changes in their shapes and intensities, the disappearance of bands and the emergence of new bands were examined; the following is a careful analysis of the infrared spectra of the prepared complexes, and the spectral data are shown in Table 2. The FTIR spectrum of the ligand (L) shows a band at 3408 cm-1 assigned to the stretching and bending vibrations νas(NH2), νs(NH2) and δ(NH2); bands at 3759 and 3286 cm-1 assigned, by comparison with the free starting materials, to ν(O-H) and ν(N-H) of the indole ring; and a band at 1485 cm-1 attributed to the new azo group (N=N), indicating the formation of the ligand 16-18. On the other hand, the FT-IR spectra of the Co(II), Cd(II) and Ru(III) complexes showed that the stretching band of the phenolic O-H group had disappeared, proving that coordination through the phenolic oxygen had taken place. They also showed that the N=N band had changed in shape, intensity and position compared with that of the free ligand. New bands appeared belonging to ν(M-N) at 541, 526 and 561 cm-1 and to ν(M-O) at 442, 431 and 411 cm-1 for the Co(II), Cd(II) and Ru(III) complexes, respectively, which supports coordination through the nitrogen and oxygen atoms. Furthermore, coordinated water molecules displayed vibrations at 3689, 1618 and 746 cm-1; 3566, 1635 and 749 cm-1; and 3532, 1611 and 713 cm-1, assigned to the aqua (H2O) ligands of the Co(II), Cd(II) and Ru(III) complexes, respectively. This suggests that a water molecule is coordinated to the metal ion 19,20. All assignments are listed in Table 2.
Thermal studies
Thermal analysis curves were obtained to determine the typical values of the following thermal parameters from the TGA curves for the stages of the decomposition of the metal complexes 28-31: the initial decomposition temperature (Ti), the point at which the TG curve departs from its baseline; the final temperature (Tf), the point at which the TG curve returns to the baseline; and the temperature of the maximum rate of weight loss. Furthermore, the DSC curve can be used to determine whether a thermal event is exothermic or endothermic, as well as its magnitude, and to calculate the thermodynamic parameters ΔH, ΔS and ΔG 32-34.
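As a simple illustration of the last step (not code from the paper), the Gibbs free energy follows from the DSC-derived enthalpy and entropy via ΔG = ΔH - TΔS:

def gibbs_free_energy(dH, dS, T):
    """dG = dH - T*dS; dH in J/mol, dS in J/(mol*K), T in K."""
    return dH - T * dS

# Example with illustrative (not measured) values
print(gibbs_free_energy(dH=5.0e4, dS=120.0, T=298.15))  # J/mol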
Antioxidant assay
The DPPH assay is used to determine how well antioxidants can scavenge the DPPH radical. Antioxidants donate a hydrogen atom to DPPH, pairing the single electron on its nitrogen atom and reducing it to the corresponding hydrazine. When the DPPH radical solution is combined with an antioxidant, the color changes from the violet of DPPH, characterized by an absorption band centered at approximately 517 nm in ethanol solution (the electron delocalization also produces the dark purple color), to the yellow of the corresponding hydrazine 35,36. The interaction of the [Co(L)(H2O)], [Cu(L)(H2O)], [Ru(L)(H2O)2Cl] and [Cd(L)(H2O)] complexes with DPPH radicals, with subsequent hydrogen donation to scavenge the radicals, is summarized in Table 6. A lower IC50 value indicates more effective DPPH radical scavenging. In the DPPH assay, the ligand practically has more antioxidant activity than the metal complexes 37.
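For reference, the radical-scavenging activity behind Table 6 is conventionally computed as aa% = (A_control - A_sample)/A_control × 100 from absorbances at 517 nm, with IC50 read off where the activity crosses 50%; the sketch below (with made-up illustrative absorbances, not the paper's data) shows one common way to estimate it by linear interpolation.

import numpy as np

def scavenging_percent(a_control, a_sample):
    # aa% = (A_control - A_sample) / A_control * 100 at 517 nm
    return (a_control - a_sample) / a_control * 100.0

def ic50(concentrations, activities):
    """Concentration at which scavenging activity crosses 50%,
    by linear interpolation over measured points."""
    return float(np.interp(50.0, activities, concentrations))

# Illustrative (not measured) data: activity rises with concentration
conc = np.array([10.0, 25.0, 50.0, 100.0])                 # ug/mL
act = scavenging_percent(0.80, np.array([0.70, 0.55, 0.38, 0.20]))
print(ic50(conc, act))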
Conclusion:
In this study, we described the preparation of metal complexes produced by the reaction of the tridentate azo ligand with the metal ions Cu(II), Co(II), Cd(II) and Ru(III), and studied their physical characteristics through several analyses. The information collected from the electronic, infrared and mass spectra, together with the magnetic moments, molar conductance and atomic absorption measurements, indicated that the Cu(II), Co(II) and Cd(II) complexes are four-coordinate with tetrahedral geometry, while the Ru(III) complex has octahedral geometry. Molar conductivity measurements of the prepared complexes indicate the formulas [M(L)(H2O)], where M(II) = Co, Cu, Cd, and [Ru(L)(H2O)2Cl] for Ru(III); all complexes are non-electrolytes. The antioxidant activity of the metal complexes was evaluated against the DPPH radical and compared to that of a common natural antioxidant, gallic acid. Consistent with the assay results above, the ligand showed more antioxidant activity than the metal complexes.
Mass spectra
The mass spectrum of [Cu(L)(H2O)] is shown in Fig. 2, and the fragmentation pattern is summarized in Scheme 2. The molecular ion peak, which corresponds to the complex formula weight, appears at m/z = 359.87, and other peaks at m/z 341.85, 211.69, 130.10 and 96.58 may be related to C16H14CuN4+, C7H6CuN3O+, C9H8N+, C6H3N2+ and H3CuNO+, respectively. The mass spectrum of [Co(L)(H2O)] is shown in Fig. 4, and the fragmentation pattern is summarized in Scheme 3. The molecular ion peak, which corresponds to the complex formula weight, appears at m/z = 355.42, with further peaks at m/z 337.24, 207.08, 130.17, 103.10 and 91.96, which may be assigned to the fragments C16H14N4CoO+, C7H6CoN3O+, C9H8N+, C6H3N2+ and H3CoNO+, respectively 11-14.
Figure 3. UV-Vis spectrum of [Co(L)(H2O)].
The findings of the thermogravimetric analysis of the metal complexes are presented in Tables 4 and 5 and Figs. 4 and 5. The tentative decomposition reactions of the metal complexes are summarized in Scheme 4.
Table 6. Means, standard deviations, coefficients of variation, correlation coefficients and IC50 of antioxidant activity in percentage (aa%) of the tested samples at 30 minutes.
Conflicts of Interest: None. We hereby confirm that all the figures and tables in the manuscript are ours; for the figures and images that are not ours, permission for re-publication has been obtained and is attached with the manuscript. Ethical Clearance: The project was approved by the local ethical committee at the University of Baghdad. | 2023-03-22T15:18:42.066Z | 2023-03-20T00:00:00.000 | {
"year": 2023,
"sha1": "e36ec346f7945b5332d7fb3252ba321f88d00807",
"oa_license": "CCBY",
"oa_url": "https://bsj.uobaghdad.edu.iq/index.php/BSJ/article/download/7629/4350",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d78e4a05f374530a0336ffa51f608003bfd4ad01",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
220249821 | pes2o/s2orc | v3-fos-license | Design of a $\beta$-Ga$_2$O$_3$ Schottky Barrier Diode With p-type III-Nitride Guard Ring for Enhanced Breakdown
This work presents the electrostatic analysis of a novel Ga$_2$O$_3$ vertical Schottky diode with three different guard ring configurations to reduce the peak electric field at the metal edges. Highly doped p-type GaN, p-type non-polar AlGaN and polarization-doped graded p-AlGaN are simulated and analyzed as the guard ring material, which forms a heterojunction with the Ga$_2$O$_3$ drift layer. A guard ring of non-polar graded p-AlGaN with a bandgap larger than that of Ga$_2$O$_3$ is found to show the best performance in terms of screening the electric field at the metal edges. The proposed guard ring configuration is also compared with a reported Ga$_2$O$_3$ Schottky diode with no guard ring and with a structure with a high-resistive nitrogen-doped guard ring. The optimized design is predicted to have a breakdown voltage as high as 5.3 kV and a specific on-resistance of 3.55 m$\Omega$-cm$^2$, which leads to an excellent power figure of merit of 7.91 GW/cm$^2$.
I. INTRODUCTION
Gallium oxide (Ga2O3) has huge potential for power device applications due to its high breakdown field. β-Ga2O3 has a band gap (4.6 eV) larger than GaN and SiC, with an estimated critical breakdown field as high as 8 MV/cm. Due to the large critical electric field, the Baliga Figure of Merit (BFOM) relevant to power switching could be 2000-3400 times that of Si, which is several times larger than that of SiC or GaN. Low-doped drift layers in conjunction with large band gap materials can enable very high breakdown voltage. Various power devices using β-Ga2O3 with high breakdown voltage in the vertical geometry have been demonstrated recently [1]-[12].
Several field management techniques, including edge terminations and superjunctions, have been explored for Schottky diodes over the years. A guard ring is one such edge termination technique, where the anode metal edge is surrounded by a doped region with polarity opposite to that of the drift region to screen the high electric field generated at the metal edge. Lin et al. [13] recently demonstrated a Schottky barrier diode (SBD) with a nitrogen-ion-implanted guard ring (GR) with a maximum breakdown voltage of 1.43 kV. Zhou et al. [14] also demonstrated a Ga2O3 SBD using a Mg-ion-implanted guard ring with a breakdown voltage of 1.65 kV. A similar design with Argon-implanted edge termination has also been reported by Gao et al. [15]. Although these devices can achieve a breakdown improvement compared to the case with no guard rings, the lack of electric field screening due to the absence of carriers in the guard ring limits the breakdown voltage. A high-resistive guard ring, as demonstrated in the previous devices, can spread the depletion region and reduce the field crowding at the metal edges. However, a guard ring with mobile holes can be very effective in screening the electric field at the metal edges due to the presence of a quasi-neutral region, and can dramatically shift the high-field region from the metal edge to deep inside the device, thereby eliminating the effect of surface states which cause premature breakdown. Because of the difficulty of achieving hole conduction in β-Ga2O3, due to the absence of a shallow acceptor and to hole self-trapping, p-doped III-Nitrides are a viable option for obtaining a reasonably high hole concentration. The idea of heterostructure guard rings has been proposed previously on Silicon Carbide substrates [16]. Muhammed et al. [17] have reported the growth of a c-plane n-GaN epilayer on a (2 0 1) β-Ga2O3 substrate using MOCVD. Vertical blue LEDs have also been demonstrated on (2 0 1) β-Ga2O3 substrates [18]. Shimamura et al. [19] reported growth of c-plane GaN on a (1 0 0) oriented β-Ga2O3 substrate using MOCVD. On the other hand, Cao et al. [20] reported the growth of non-polar a-plane GaN on a (0 1 0) oriented β-Ga2O3 substrate by MOCVD. All these reports confirm the viability of growing electronic-grade polar and non-polar GaN on β-Ga2O3. In this paper, we propose and design a Ga2O3 SBD with a p-doped III-Nitride guard ring using electric field simulations. We have explored three guard ring configurations: (i) a p-Gallium Nitride (p-GaN) GR, (ii) a non-polar graded p-Aluminum Gallium Nitride (p-AlGaN) GR, and (iii) a polar graded p-AlGaN GR. In this work we perform detailed electrostatic simulations to capture and manage high electric fields in III-nitride/β-Ga2O3 heterostructures. The design is optimized to extract the optimum device parameters that efficiently reduce the electric field. We have also explored the additional use of a field plate in the aforementioned optimum design to further minimize the peak electric field in the device structure.
(Saurav Roy, Arkka Bhattacharyya, and Sriram Krishnamoorthy are with the Department of Electrical and Computer Engineering, The University of Utah, Salt Lake City, UT, 84112, United States of America; e-mail: u1268405@utah.edu; a.bhattacharyya@utah.edu; sriram.krishnamoorthy@utah.edu.)
II. SIMULATION METHODOLOGY
The Schottky barrier diode device structure (Fig. 1), with varied guard ring thickness T_GR, a 10 µm thick drift layer (N_D = 10^16 cm^-3), a Ni/Au Schottky metal with a barrier height Φ_B of 1.4 eV [24], and a Schottky metal-guard ring overlap of 3 µm, is simulated using the Sentaurus [25] 2D TCAD device simulator. The width of the guard ring is taken to be 50 µm. In the simulation, adequate numerical convergence was reached by optimized meshing, with subnanometer grid spacing for the key electrical layers and their interfaces and larger spacing for the drift region. A spontaneous polarization model is used to include polarization effects in the case of graded polar p-AlGaN; spontaneous polarization values of -0.034 and -0.09 C/m2 are considered for GaN and AlN, respectively [23]. For p-type doping in GaN and AlGaN, an incomplete ionization model is also used to reflect the accurate hole concentration. In order to capture accurate results in the high-doping case, a Fermi-Dirac model is included for the device operating biases. The device simulation setup uses a well-calibrated mobility model and a thermodynamic transport model to match recent experimental results [13]. The device breakdown voltage can be extracted from the E-field simulation when the peak E-field reaches the critical E-field of GaN (3.3 MV/cm) or Ga2O3 (8 MV/cm). Band offsets were determined using the electron affinity rule; the GaN/Ga2O3 band offset estimated this way matches well with the experimentally determined band offsets [26]. The ionization integrals for avalanche breakdown were not evaluated, in order to avoid excessive computation time; furthermore, accurate ionization rate parameters are currently unknown for Ga2O3. Hence it should be noted that the breakdown is not directly calculated, but is estimated based on the simulated electric field distributions [27]-[29]. It should also be noted that in real devices field crowding can also occur at the device corners; however, those fields are always lower than the field crowding at the electrode edges, which is the primary cause of device breakdown. Hence our comparison of device breakdown for the various configurations based on 2-D simulation is considerably reliable, as was also demonstrated by other works [27]. All the material parameters for β-Ga2O3, GaN and AlN assumed in the device simulation are presented in Table I. All other material parameters for intermediate Al compositions in AlxGa1-xN were calculated from the GaN and AlN parameters using Vegard's law. (A simple analytic cross-check of the drift-layer electrostatics is sketched below.)
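As a hedged sanity check (not part of the TCAD methodology), the one-dimensional depletion approximation gives the parallel-plane peak field at the Schottky junction for the 10 µm, N_D = 10^16 cm^-3 drift layer; the relative permittivity of 10 for β-Ga2O3 is an assumed value commonly used in the literature, and edge field crowding is excluded.

import math

Q = 1.602e-19           # elementary charge, C
EPS0 = 8.854e-14        # vacuum permittivity, F/cm
EPS_R = 10.0            # assumed relative permittivity of beta-Ga2O3
ND = 1e16               # drift doping, cm^-3
T_DRIFT = 10e-4         # drift thickness, cm (10 um)

def peak_field(v_rev):
    """Peak E-field (V/cm) at the junction, 1-D depletion approximation."""
    eps = EPS_R * EPS0
    w = math.sqrt(2.0 * eps * v_rev / (Q * ND))  # depletion width, cm
    if w <= T_DRIFT:
        return Q * ND * w / eps                  # triangular field profile
    # Punch-through: trapezoidal field profile across the full drift layer
    return v_rev / T_DRIFT + Q * ND * T_DRIFT / (2.0 * eps)

print(peak_field(3600) / 1e6, "MV/cm")  # ~4.5 MV/cm, parallel-plane value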
III. RESULTS AND DISCUSSIONS
For comparison, the SBD with a nitrogen-doped GR and the one with no GR are simulated, and the electric field profiles are shown in Fig. 2(a)(i) and (ii), respectively. The circled cross-section in Fig. 1 is magnified and shown for all the electric field contours. The bias voltage is taken to be 1500 V. The ionization energy of nitrogen in β-Ga2O3 is considered to be 2 eV [30]. The maximum electric field is at the metal edge in both cases, and we can see that the field is reduced in the guard ring structure in Fig. 2(b), as expected and as also experimentally reported [13].
We now explore the use of p-GaN as the guard ring material (Fig. 3(a)). The use of a magnesium-doped GaN guard ring enables screening of the electric field at the metal edges due to the presence of mobile holes. The activation energy of Mg in p-GaN is considered to be 0.2 eV [31]. The doping concentration is taken to be 10^20 cm^-3 (hole concentration = 8.2 × 10^18 cm^-3), the guard ring thickness is 0.5 µm, and the anode voltage is 2000 V. The peak electric field has now moved to the p-GaN/n-Ga2O3 heterojunction, as can be seen in the electric field contour shown in Fig. 3(a). To be able to leverage the high critical E-field of β-Ga2O3, a better design would use a guard ring material with a critical field even higher than that of β-Ga2O3. So, we now study a graded p-AlGaN guard ring with the aluminum composition graded from 70% at the AlGaN/Ga2O3 interface to 40% at the SBD surface; the lower aluminum content is employed closer to the surface so as to achieve a higher mobile hole concentration. Fig. 3(c) shows the electric field distribution using graded AlGaN for a doping concentration of 10^20 cm^-3. However, since the ionization energy of Mg is very high in AlGaN with high Al composition, it is very difficult to realize a high hole concentration; in fact, for Mg concentrations below 10^18 cm^-3, the peak electric field is always at the metal edge because of the depletion of the entire GR. But if we grow c-axis oriented AlGaN with the Al composition graded downward from the heterointerface to the surface, we can realize a 3-D slab of holes (3DHS) [32] due to the polarization doping effect. This is expected to significantly increase the hole concentration even with low Mg doping, due to field ionization of the dopants. Fig. 3(e) shows the electric field distribution in the polarization-doped p-AlxGa(1-x)N (x = 70% at the heterojunction to 40% at the SBD surface) GR. The hole concentration in this guard ring structure increased significantly relative to the non-polar case (from 2 × 10^17 to 6 × 10^18 cm^-3). The band diagram of the polarization-doped polar graded p-AlGaN guard ring configuration is shown in Fig. 3(f). It can be observed that the peak electric field is now at the heterojunction, mainly due to the positive polarization sheet charge at the p-Al0.7Ga0.3N/n-Ga2O3 heterojunction. The energy bands fall rapidly at the polar p-AlGaN/n-Ga2O3 interface, thus increasing the electric field compared to the non-polar p-AlGaN/n-Ga2O3 interface, as shown in Fig. 3(g). Even in the low-doped case, the bands fall rapidly at the p-AlGaN/n-Ga2O3 heterointerface, as shown in Fig. 4. So, a polarization-doped p-AlGaN GR serves no benefit in reducing the peak electric field. Fig. 5 shows the peak electric field vs. applied bias for the SBD with the five different guard ring structures. The doping concentration for the p-type guard rings is taken to be 10^20 cm^-3, for the nitrogen-doped case it is 10^16 cm^-3, and the GR thickness is 0.5 µm. The non-polar graded p-AlGaN, ungraded p-AlGaN, and p-GaN guard rings show the best performance in terms of reducing the peak electric field. However, the SBD with the GaN guard ring crosses the GaN critical electric field of 3.3 MV/cm at 750 V. The non-polar p-AlxGa1-xN GR with uniform Al composition (x = 60%) has a critical electric field as high as that of gallium oxide, and hence the breakdown voltage can be as high as 2000 V, as shown in Fig. 5.
The electric field is very high in the case of the SBD with the polarization-doped AlGaN GR because of the high field at the heterointerface. The doping and thickness of the guard ring are critical parameters that determine the amount of electric field screening and the location of the peak electric field. We simulated the three guard ring configurations, excluding the SBD with no GR and the nitrogen-doped GR, as a function of Mg doping for a fixed thickness of 0.5 µm at a bias of 2000 V. The doping in the guard ring is found to determine the location of the peak electric field, as shown in Fig. 6. For the polar p-AlGaN guard ring, the peak electric field is always at the heterojunction irrespective of the doping. In the case of the p-GaN and non-polar p-AlGaN guard rings, the peak field is at the metal edge for doping concentrations lower than 10¹⁸ cm⁻³. Since a high GR doping is not necessary to minimize the peak field at the metal edge, and since the non-polar graded and ungraded p-AlGaN GRs show similar performance, compositional grading in the GR is not required, which mitigates the challenge of growing a graded epitaxial AlGaN layer inside the Ga₂O₃ pocket.
The electric field simulations clearly establish the comparably superior performance of the non-polar p-AlGaN guard ring configuration. We now focus further on this particular configuration and study the effect of the thickness of the guard ring. In the case of low-doped guard rings, in order to take advantage of holes, the thickness must be sufficiently large to realize an undepleted region close to the metal. The equilibrium energy band diagram of a non-polar graded p-AlGaN GR with low doping (10¹⁶ cm⁻³) for two different thicknesses is shown in Fig. 7(a) and Fig. 7(b). The presence of a quasi-neutral region near the metal, obtained by employing a thick low-doped guard ring (Fig. 7(b)), is expected to be beneficial for electric field screening. The quasi-neutral region near the metal edge provides room for the growth of the depletion region at high reverse bias. The effect of guard ring thickness as a function of Mg doping is summarized in Fig. 7(c). The peak electric field at a bias of 2000 V can be reduced from 9 MV/cm to 7.5 MV/cm at a guard ring doping of 10¹⁶ cm⁻³. The peak electric field reduces as the doping increases for doping concentrations below 10¹⁸ cm⁻³. As the GR thickness increases, the peak electric field reduces up to a doping concentration of 10¹⁷ cm⁻³. Above this concentration the peak electric field shifts to the pn-heterojunction and the GR thickness has no effect on the peak electric field. In this regime, there is no advantage of using a field plate, since the peak field region is buried.
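A back-of-the-envelope depletion-width estimate clarifies why, at this doping, a 0.5 µm guard ring depletes almost fully while a roughly 1 µm one retains the quasi-neutral region discussed above. The permittivity and band bending used here are assumed values for the sake of the sketch, not parameters reported in this work.

```python
import math

# Zero-bias depletion width under the metal for a low-doped p-AlGaN guard ring:
# W = sqrt(2 * eps * phi / (q * N_A)), with assumed eps_r and barrier height.
q   = 1.602e-19           # C, elementary charge
eps = 9.0 * 8.854e-12     # F/m, assumed relative permittivity ~9 for AlGaN
phi = 2.0                 # V, assumed band bending at the metal/p-AlGaN contact
N_A = 1e16 * 1e6          # m^-3 (10^16 cm^-3, the doping used in the text)

W = math.sqrt(2.0 * eps * phi / (q * N_A))
print(f"depletion width ~ {W * 1e6:.2f} um")  # ~0.45 um: a 0.5 um GR is nearly fully depleted
```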
To further improve the electric field management, we choose guard ring configuration B with a thickness of 1 µm and a doping concentration of 10¹⁶ cm⁻³. In this case, the peak electric field is at the metal edge. We now analyze the potential for further improvement in breakdown voltage with a field plate. Fig. 8(a) shows the electric field distribution in the SBD for a GR thickness of 1 µm at a bias voltage of 3600 V, (i) without and (ii) with a field plate, respectively. An SiO₂ layer of 200 nm is used as the field plate oxide. We also compared our design with a field-plated SBD without a GR, as shown in Fig. 8(b). Field plating alone has more impact on the reduction of the peak electric field than the GR alone; an SBD with a field plate combined with a thick p-AlGaN GR, however, reduces the peak electric field at the metal edge dramatically. Here again, the peak electric field reduces as the doping increases, until the doping concentration reaches 10¹⁷ cm⁻³. So, the best design would be a thick, low-doped p-AlGaN GR with a field plate. Fig. 9 shows the effect of field plating on reducing the peak electric field for the SBD with a non-polar p-AlGaN GR. In Fig. 9, we can see that for doping concentrations below 10¹⁸ cm⁻³, in the case of the non-polar p-AlGaN GR, field plating significantly reduces the electric field at the metal edge, and the electric field crosses the β-Ga₂O₃ critical electric field of 8 MV/cm at a bias voltage of around 3600 V. For doping concentrations above 10¹⁷ cm⁻³, field plating has no effect on reducing the electric field because the peak electric field shifts from the metal edge to the pn-heterojunction. We can also see that the use of a high-k dielectric (HfO₂ in this case, with a relative permittivity of 22) of the same dimensions as the field plate oxide in the previous case, in conjunction with the guard ring, reduces the peak electric field dramatically, and the device reaches the β-Ga₂O₃ critical electric field of 8 MV/cm at a reverse bias voltage of 5200 V, which is significantly higher than the highest reported breakdown voltage for any vertical SBD with a 10 µm drift layer [33]. The high permittivity difference between the semiconductor and the dielectric generates polarization bound charge inside the dielectric, which balances the depletion charge at the semiconductor interface [29], [34], [35]. This charge balance results in a flattening of the electric field profile at the dielectric/semiconductor interface, reducing its peak magnitude. We have also simulated a field-plated SBD with HfO₂ as the field plate oxide without a GR, and it achieves a breakdown voltage of 4300 V. So, the use of a GR in conjunction with a field plate increases the breakdown voltage by 900 V compared to the field-plated SBD with no GR. We have also analyzed another extreme high-k material, BaTiO₃ (relative permittivity of 300), as the field plate oxide, and the breakdown voltage was found to reach 7800 V when used in conjunction with the GR. Since BaTiO₃ has no conduction band offset with β-Ga₂O₃, the use of BaTiO₃ underneath the metal might cause charge trapping at the dielectric/semiconductor interface. To mitigate the charge trapping, we have analyzed a stacked dielectric of HfO₂ (5 nm) + BaTiO₃ (295 nm). Here, the large thickness ratio is used to maintain a high dielectric constant for the series configuration, which results in an effective dielectric constant of 248. Using this structure in conjunction with the thick guard ring, we are able to obtain a breakdown voltage of 6200 V.
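The effective dielectric constant of the HfO₂/BaTiO₃ stack follows from treating the two layers as capacitors in series; the short check below reproduces the value of 248 quoted above.

```python
# Effective permittivity of the stacked field-plate dielectric (series capacitors):
# k_eff = t_total / (t1/k1 + t2/k2)
t_hfo2, k_hfo2 = 5.0, 22.0      # nm, relative permittivity of HfO2
t_bto,  k_bto  = 295.0, 300.0   # nm, relative permittivity of BaTiO3
t_total = t_hfo2 + t_bto

k_eff = t_total / (t_hfo2 / k_hfo2 + t_bto / k_bto)
print(f"effective k ~ {k_eff:.0f}")  # ~248, matching the value quoted in the text
```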
Among all the devices described, the SBD with a non-polar p-AlGaN GR combined with a field plate and a high-k field plate oxide is found to be the best choice to reduce the electric field. The thick β-Ga₂O₃ epilayers (10 µm) on (010) β-Ga₂O₃ substrates used in this design can be grown using HVPE and have already been demonstrated [36], [37]. Selective area epitaxy of AlGaN GRs inside Ga₂O₃ trench pockets can be done using MOCVD [38], [39]. We understand that AlGaN heteroepitaxy on Ga₂O₃ could lead to compromised material quality and will require extensive growth and process optimizations. Experimentally, trench Ga₂O₃ SBDs with field plate structures were able to achieve an FOM as high as 0.95 GW/cm² [33]. Using the concepts explored in this work, if Ga₂O₃ SBDs with a GR in conjunction with field plates are implemented, we expect this design to surpass the already high FOM achieved with Ga₂O₃ power SBDs. For instance, with a breakdown voltage of 6200 V and an estimated R_ON,SP of 3.55 mΩ·cm², assuming a mobility of 176 cm²/V·s [40], we should be able to achieve an extremely high FOM of 10.8 GW/cm².
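As a quick consistency check, the quoted figure of merit follows directly from the standard V_BR²/R_ON,SP power-device FOM:

```python
# Power figure of merit: FOM = V_BR^2 / R_ON,SP
V_br    = 6200.0     # V, simulated breakdown voltage
R_on_sp = 3.55e-3    # ohm*cm^2, estimated specific on-resistance

fom = V_br ** 2 / R_on_sp                 # W/cm^2
print(f"FOM ~ {fom / 1e9:.1f} GW/cm^2")   # ~10.8 GW/cm^2, as stated above
```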
IV. CONCLUSION
A novel approach to reducing the electric field, and thus increasing the breakdown voltage, of a Schottky barrier diode by using a p-doped III-nitride guard ring is proposed and demonstrated through detailed device simulations. This approach circumvents the lack of p-type doping in gallium oxide. An SBD with a thick, low-doped, non-polar p-AlGaN GR in conjunction with a field plate and a high-k dielectric serves best in terms of reducing the peak electric field. The inclusion of a field plate and a high-permittivity field plate oxide in the case of a low-doped GR is shown to further reduce the electric field at the metal edges. Further research into the interface properties of the AlGaN/Ga₂O₃ heterointerface will lead to a better understanding and use of such heterojunction-based structures for high-performance power electronic devices. | 2020-06-30T01:00:36.747Z | 2020-06-28T00:00:00.000 | {
"year": 2020,
"sha1": "53f0de4415870e00712613c31468283809c6a66e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2006.15645",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "53f0de4415870e00712613c31468283809c6a66e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics",
"Computer Science",
"Engineering"
]
} |
257392592 | pes2o/s2orc | v3-fos-license | The use of animal by-products in a circular bioeconomy: Time for a TSE road map 3?
In 2005 and 2010, the European Commission (EC) published two subsequent 'Road Maps' to provide options for relaxation of the bans on the application of animal proteins in feed. Since then, the food production system has changed considerably and demands for more sustainability and circularity are growing louder. Many relaxations envisioned in the second Road Map have by now been implemented, such as the use of processed animal proteins (PAPs) from poultry in pig feed and vice versa. However, some legislative changes, in particular concerning insects, had not been foreseen. In this article, we present a new vision on legislation for increased and improved use of animal by-products. Six current legislative principles underlying the bans on animal by-products as feed ingredients are discussed: the feed bans; the categorization of farmed animals; prohibition unless explicitly approved; approved processing techniques; the categorization of animal by-products; and monitoring methods. We provide a proposal for new guiding principles and future directions, and several concrete options for further relaxations. We argue that the biological nature of farmed animals in terms of dietary preferences should be better recognised, that legal zero-tolerance limits should be expanded if safe, and that legislation should be revised and simplified.
Since 2001, a large epidemiological surveillance system to monitor the incidence of TSEs has been set up in the EU. Initially, almost all bovine animals slaughtered for human consumption were tested, but most Member States have now been allowed to implement revised monitoring programmes which only require them to test for the presence of TSEs in specific target groups of animals, such as emergency slaughtered animals and fallen stock (Commission Decision 2009/719/EC). In 2017, the European Food Safety Authority (EFSA) concluded that there had been a significant decrease in classical BSE and that incidence in the EU could be considered low [8]. Since then, the situation in the EU has largely remained the same, although incidental single cases of classical BSE remain a cause for concern [9]. At a global level, most World Organization for Animal Health (WOAH) member states have a negligible risk status, except for Chinese Taipei, Ecuador, Greece, Russia, and the United Kingdom (except Northern Ireland) [10].
Prohibitions on the use of animal by-products as feed consist of three primary types of restrictions: a) the ruminant ban; b) the extended feed ban; and c) the species-to-species ban. Certain animal by-products had been exempted from these bans from the start, such as dairy products and eggs, among others (Regulation (EC) No 999/2001, Annex IV, point 2). In addition, a series of later relaxations have been installed over the course of the years, such as the use of fishmeal in calf milk replacers (Regulation (EU) No 56/2013). A complete timeline with major relaxations until 2019 is included in the Supplementary Materials in Ref. [11]. Most recently, the use of processed animal proteins (PAPs) of poultry origin in pig feed and pig PAPs in poultry feed was approved, as was the use of ruminant collagen and gelatine in feed for non-ruminant farmed animals (Regulation (EU) 2021/1372 amending Regulation (EC) No 999/2001). In addition, the use of reared insects in feed for aquaculture animals (legal since 2017: Regulation No 2017/893) was extended to pigs and poultry in Regulation (EU) 2021/1372. At this time, eight insect species (black soldier fly (Hermetia illucens), common housefly (Musca domestica), yellow mealworm (Tenebrio molitor), lesser mealworm (Alphitobius diaperinus), house cricket (Acheta domesticus), banded cricket (Gryllodes sigillatus), field cricket (Gryllus assimilis), and silkworm (Bombyx mori)) are permitted to be used in feed for these animals, but the substrate on which the insects may be reared is restricted to vegetable materials (i.e., not meat or fish).
The European Commission launched a first TSE Road Map in 2005 in order to provide options for relaxation of the installed restrictions on use as a feed ingredient, initially providing a short-term (2005-2009) and a long-term (2009-2014) vision [12]. The basis for this process of gradually lifting the restrictions on the use of animal by-products was, and still is, four-fold: the ongoing publication of risk assessments by EFSA, balanced proposals for relaxations in concordance with the general principles, the development of monitoring methods and societal requirements, and finally the commitment of member states. Road Map 2 was published in 2010 [13], presenting a vision for the period 2010-2015. Both Road Maps showed a graphical overview of restrictions and legal applications in terms of source animal/material and target animal for consumption. A new Road Map 3 was not published in or after the year 2015. However, several major developments have taken place since 2010. Despite a risk status in terms of BSE incidence similar to that of other countries, as discussed above, the EU has lagged behind in implementing more permissive rules on the use of animal proteins in feed [14,15]. The rapidly increasing demand for sustainability and a circular bioeconomy (the Green Deal, Farm to Fork) is a major shift in the mandate and priorities of legislators. We argue here that animal by-products should be reused to a larger extent. In the context of full circularity, every type of by-product should find a destination for reuse with a better ecological footprint in terms of nutrient recycling and production of greenhouse gasses [16]. In view of the developments of the last five years, we consider a new TSE Road Map 3 a necessary guidance for future policies. This paper will propose this vision by evaluating the state-of-the-art and discussing legislative principles (sections 2-3) and presenting directions for further relaxations. These future directions are first discussed in a general sense (section 4), and subsequently by providing specific options for relaxations (section 5).
Legislative principles and state-of-the-art
The current legislation on the use of animal proteins in feed is based on six different principles and elements. The first five principles are: 1) the feed bans; 2) the categorization of farmed animals; 3) prohibition unless explicitly approved; 4) approved processing techniques; and 5) the categorization of animal by-products. These first five aspects are presented in this section; the sixth element, monitoring methods, is discussed separately in section 3. These principles are used for an evaluation of future approaches and relaxations in the subsequent sections.
Firstly, the foundations of the legislative framework comprise three different bans: a) the permanent ruminant ban, prohibiting the use of animal proteins in ruminant feed (Regulation (EC) 999/2001, Article 7, item 1); b) the permanent species-to-species ban, prohibiting the use of animal proteins of a given source animal in feed intended for the same species (Regulation (EC) 1069/2009, Article 11, item 1a); and c) the extended feed ban, prohibiting the use of animal proteins in a large range of other applications (Regulation (EC) 999/2001, Article 7, item 2). Exemptions have been installed for each of these three bans, but most of the relaxations presented in Road Maps 1 and 2 - in particular those put in force more recently - concern the extended feed ban (Regulation (EC) 999/2001, Annex IV). An important exemption to the ruminant ban is the feeding of fish proteins to young ruminants (Regulation (EC) 999/2001, Article 7, item 3). One exemption to the species-to-species ban is the relaxation for caught fish: being a mixture of species, caught fish is allowed to be fed to farmed fish of a species which might be included in that mixture (Regulation (EC) 142/2011, Annex VIII, Chapter II, item 2).
New derogations were put into force in 2021 (Regulation (EU) 2021/1372 amending Regulation (EC) 999/2001). The feeding of pig PAP to poultry and vice versa had been under serious discussion since the 2010 TSE Road Map 2 [13]. However, at that time, monitoring methods for material of a certain (group of) species were not available. In 2019, the European Reference Laboratory for Animal Proteins (EURL-AP) finished the validation of a method for pig and one for chicken-turkey (excluding ducks and geese, although these are part of the definition of poultry; see Ref. [11]). A method for poultry, covering all species(-groups) of the definition of poultry, has been developed and tested successfully [17,18]. Validation of these methods allowed for differentiation between different PAPs and cleared the way for the authorisation of pig PAPs in poultry feed and vice versa (Regulation (EU) 2021/1372, preambles 12 and 13). The use of gelatine and collagen originating from ruminant material has been under discussion since 2005 [19]. After further evaluation by EFSA [49], these materials are now authorised as ingredients in feed for non-ruminants. For this evaluation, a probabilistic model was developed to estimate the BSE infectivity load, taking into account the different 'risk pathways' via which infected material could enter the food/feed chain. It was concluded that it was 'almost certain' that there would be no new cases of BSE. Finally, based on the biological background of pigs and poultry, being omnivorous and (partly) insectivorous, respectively, insects have been authorised as an ingredient in pig and poultry feeds. The broader perspective of the biological background of the bans on animal by-products is discussed in Ref. [20].
Secondly, farmed animals are defined as one category and in a broad sense (Regulation (EC) No 1069/2009, Article 3, item 6). This definition includes animals for food and non-food production (fur animals); only pet animals are excluded. A clear differentiation is made between ruminants and non-ruminants in terms of permitted feed materials, but this broad definition necessitates highly comparable routes of gradual relaxation for most farmed animals despite biological differences in susceptibility and diet preferences.
Thirdly, the general principle of the legislation is to prohibit the use of animal proteins unless a specific relaxation is installed. This is an extension of the 'precautionary principle' (Regulation (EC) No 178/2002, Article 7). In the context of animal proteins, this is laid down in Regulation 999/2001, Article 7. Relaxations following this general principle are provided in Annex IV of that Regulation. This Annex IV comprised less than one page in 2001, whereas the current version covers almost 30 pages and is highly complex. The consequence of this "prohibited, unless" principle is that new by-products start with case-by-case legalised application, which is a slow process. This is exemplified by the situation for insects, which were legalised for aqua-feed only in 2017 and for poultry and pigs in 2021 - but a large number of legal barriers to insect utilisation persist.
Fourthly, certain Category 3 materials may be processed and used for feeding farmed animals. At this time, the subcategories of Category 3 (n) [hides, skins, hooves, etc. of dead animals that did not show signs of zoonotic disease], (o) [adipose tissue from animals that did not show signs of zoonotic disease, were slaughtered in a slaughterhouse, and were considered fit for human consumption], and (p) [catering waste] are not allowed to be used as a basis for the production of PAPs and HPs (Regulation (EC) No 1069/2009, Article 14(d)(i)). These materials, as well as subcategory (m) [parts of Rodentia and Lagomorpha], are also not allowed to be processed into gelatine, hydrolysed proteins, or dicalcium or tricalcium phosphate (Regulation (EU) 142/2011, Annex X, Chapter II). Seven standard processing methods are defined in Chapter III of Annex IV of Regulation (EU) No 142/2011. Animal proteins of mammalian origin may only be manufactured into processed animal proteins (PAPs) intended for feeding to farmed animals by way of method 1: pressure sterilisation (Regulation (EU) 142/2011, Annex X, Chapter II, Section 1). The process of fermentation is only mentioned in the context of use as petfood. The nutritional value of different types of PAPs must be carefully considered and evaluated in the composition of compound feeds to meet all nutritional requirements of the target animal [21]. This is also the case for reared insects such as fly larvae: although generally considered to be omnivorous, optimal yields and fatty acid profiles depend on the composition of the substrate [22,23].
Finally, animal by-products are classified into three categories depending on their origin in terms of animal tissues or organs, the type of processing, and the degree of risk involved (Regulation (EC) 1069/2009, Articles 8-10; see Fig. 1.1 in Ref. [24]). Materials in Category 1 consist of, for instance, animals suspected of being infected by a TSE. Category 2 materials consist of, for example, manure and animal by-products derived from animals which have been treated with illegal veterinary medicines. The disposal and use of animal by-products and derived products varies per category (Regulation (EC) No 1069/2009, Articles 12-16). Only Category 3 materials may be used for feeding farmed animals other than fur animals (Regulation (EC) 1069/2009, preamble 45). This category includes a variety of types of materials which are allowed to enter the food production chain as a basis for several derived products. Processed animal proteins (PAPs; Regulation (EU) 142/2011, Annex I, item 5; [50]) may be produced from the subcategories (a) to (l) (Regulation (EU) 142/2011, Annex X, Section 1, part A). An important derived product consists of hydrolysed proteins (HPs; Regulation (EU) 142/2011, Annex I, item 14). This product can be produced from subcategories (a) to (l) as well (Regulation (EU) 142/2011, Annex X, Section 5, part A). On the other hand, the subcategories (m) to (p) are listed as sources for the production of dicalcium phosphate, tricalcium phosphate, or collagen (Regulation (EU) 142/2011, Annex X, Sections 6, 7 and 8). Subcategory (p), catering waste, although included in Category 3, is prohibited for any application in the food production chain (Regulation (EC) 1069/2009, Article 11, item 1b). There are some legal applications for hydrolysed proteins, blood products, and gelatine. A more in-depth discussion of the applicability of hydrolysis for more sustainable use of animal proteins is provided in Ref. [16].
Monitoring methods
The sixth element of the current legislation on the use of animal proteins in feed relates to monitoring methods. Animal by-products can be produced from a variety of sources in terms of animal species and types of tissue, and are processed in multiple ways. This diversity of materials is monitored by only a few legally permitted methods, of which the range of applicability is maximised. In principle, a diverse set of methods is available for legal monitoring, including microscopy, DNA-based methods such as polymerase chain reaction (PCR) tests, protein-/antibody-based methods such as ELISA, and spectral analysis [11]. However, currently only two types of methods are legally authorised: microscopic detection and PCR (Regulation (EC) 152/2009, Annex VI). Both PCR and microscopy have proven to be suitable to monitor PAPs, which are a major animal by-product. Limitations of these methods are the lack of species identification down to the legal species groups of ruminants, pig and poultry (microscopy) and the inability to discriminate between prohibited and authorised materials of the same species (PCR: ruminant PAP vs. milk, pig PAP vs. blood products, poultry PAP vs. egg material, etc.).
The legislation on the use of animal by-products implies a zero-tolerance policy [25]. Consequently, the level of detection of monitoring methods should be as low as reasonably achievable (the ALARA principle). The first technical limit for a monitoring method was 0.1% for microscopic detection, the only method available at the time (Directive 98/88/EC). This minimal required level of detection was carried over to subsequent versions of the legalised method for microscopy (Directive (EC) 2003/126/EC) and to other methods (PCR; Regulation (EC) 152/2009). The documented level of detection is much lower for both methods (microscopy: 0.005% [26,27]; PCR: 0.0125% [28]). The EURL-AP published a report on a 'technical zero' concept in 2017, which would be an action limit rather than a zero-tolerance policy [29]. As a consequence of implementing this 'technical zero' concept for ruminants, the risk of propagation of TSEs might be increased. Therefore, the EURL-AP calculated the mass fraction equivalent of the data generated by PCR methods, which allowed EFSA to calculate the cattle oral infectious dose and the associated theoretical increase in BSE numbers per year in case porcine PAP were to be authorised in poultry feed - and vice versa - which was later permitted via Regulation (EU) 2021/1372, discussed in section 2 above (EFSA, 2018).
Besides PAPs (Regulation (EU) 142/2011, Annex I, point 5), a set of other types of material, such as blood products, fat derivatives, milk products, gelatine and hydrolysed proteins, dicalcium phosphate, tricalcium phosphate, collagen, egg products, and former foodstuffs containing animal proteins (Sections 1-10, respectively), is excluded from the definition of PAPs. This is acknowledged and discussed in the same Annex of Regulation (EU) 142/2011. Each of these types of materials requires dedicated monitoring methods targeting the appropriate legal parameters, since bone or muscle fragments for microscopy, or DNA for PCR, might be absent. Immunoassays (ELISA) are discussed in EFSA (2011) as not meeting the requirement of an LOD below 2%. This type of monitoring method is based on antibodies which can detect tissue-specific proteins, among other substances. Two immunoassays targeting ruminant troponin have been validated for the detection of PAP at a level of 0.5%, which is four times lower than the level indicated by EFSA. Milk was not detected in this validation study [30,31]. These methods are capable of both animal-specific and tissue-specific detection of materials, as demanded by the current legislation.
General future directions
Based on the legal principles and within the framework of the two permanent bans, the ruminant ban and the species-to-species ban, several modifications of legal principles can be considered to facilitate the sustainability and circularity of the feed industry. The recognition of the biological nature of farmed animals in terms of diet, exemplified by the recognition of pigs as omnivorous animals and poultry as insectivorous animals in Regulation (EU) 2021/1372 (preamble 16), should be extended to all farmed animals. This is most notably important for insects. At this time, eight insect species from a diverse spectrum of taxonomic orders are permitted to be included in animal feed. The types of feed matrices permitted for this large group of animal species - with a wide spectrum of feeding habits - should be diversified accordingly. Examples are termites (Isoptera), which could be reared on lignin-containing materials such as wood and garden products [32], and fly larvae (Diptera) for the conversion of manure or manure-like materials [33-35]. Ringed worms (Annelida) have been proposed as a good source of proteins [36]: these worms feed partly on soil, but this type of feed material appears to be prohibited. In wider terms of legislation, the broad definition of farmed animals could be diversified into different groups of animals, primarily classified according to their feeding behaviour and their susceptibility to TSEs. A diversified classification of farmed animals would ease the authorisation of certain feeding strategies for defined groups of animals, especially in cases of larger biological distances between the source species of a feed material and the target species (consumer) [20].
The principle of zero tolerance in feed materials is supported by the ALARA principle and by the legal requirement of a limit of detection of 0.1%. In its updated quantitative risk assessment (QRA) of the BSE risk posed by PAPs, EFSA calculated the risk of accidental incorporation of infected ruminant PAP in ruminant feed (EFSA, 2018). It was concluded that a contamination level higher than the detection limit of 0.1% - under certain circumstances up to 2% - could be deemed acceptable, as this would not lead to significant increases in BSE cases. A similar model could be developed for the risk of a low level of porcine PAP in pig feed, or poultry PAP in poultry feed. The primary safety concern of intra-species recycling in the case of poultry and pigs was the transfer of zoonoses or animal-specific diseases, such as swine fever (Regulation (EC) No 1774/2002). Ethical considerations against cannibalism in this context are valid, but that discussion is of a different nature than one on safety concerns. The same is true for concerns in the context of religious law (e.g., halal), both in the case of inter- and intra-species recycling [37,38]. It has been shown that appropriate treatments of PAPs can reduce the virulence of a range of zoonoses by factors of 10 up to 90 [39]. This aspect is further discussed in van Raamsdonk et al. (in press). Concerning the use of monitoring methods in a framework of varying LODs or technical limits higher than 0.1% where applicable, monitoring methods other than PCR and light microscopy can provide added value. An overall requirement for the future use of an increasing set of animal by-products in a circular bioeconomy should be the development, validation, and legal authorisation of a set of dedicated monitoring methods that are capable of identifying and adequately quantifying feed materials of different origins.
One of the basic principles of the current legislation on the feed ban is a general prohibition, with relaxations where possible. The aforementioned proposals (diversification of the definition of farmed animals and diversification of the limit of detection depending on the specific situation) could be used as factors in a principle of 'provisional authorisation', i.e., safety assured for specific applications - as is, for instance, found in the legislation on limits for undesirable substances in feed (Directive 2002/32/EC). We emphasize that in all cases, regardless of any type of authorisation, feed ingredients should comply with the applicable restrictions for chemical and microbiological safety, for processing, and for purpose (Regulation (EU) 68/2013, preambles 2 to 5).
Finally, the current categorization of animal by-products into three categories according to their TSE risk is complicated and problematic. The derived products PAPs, HPs, blood products, milk products, and egg products are all legally based on different subsets of subcategories of Category 3 (Regulation (EU) 142/2011, Annex X, Chapter II). While catering waste (subcategory (p), with the exemption of Category 1 (f)) is included in Category 3 as well, it is fully prohibited for feeding purposes. A clearer categorization of animal by-products based on their potential use could be installed instead; for instance, with Category 1 materials' only use being incineration, landfill, and possibly fertilizers. The use of Category 2 materials could be defined as limited to non-food and technical purposes, while the main use of all Category 3 materials would be for feed purposes. In view of sustainability and circular bioeconomy demands, subcategories could shift to higher categories for better valorisation when technological and containment measures for assuring safety are progressing.
Options for specific new modifications and relaxations
Considering the legal principles and the general directions towards full sustainability and circularity, a range of concrete proposals can be made for further relaxations. The proposals below are based on three prerequisites: safety, opportunities for monitoring the origin of the animal by-products (in most cases the source of the material, or the process when relevant), and options for management (physical separation of streams, either as pure material or as an ingredient in compound feed). In all cases the required safety - in terms of prions, zoonoses, other microbiological hazards, and accumulation of chemical hazards - has to be proven to be at a sufficiently high level according to the total set of applicable legislation.
1. The framework for permitted and non-permitted use of animal by-products in feed, depending on the 'source animal' and 'intended consuming animal', is shown graphically in Fig. 1. a. Insects as an intermediary. Rearing insects on substrates containing animal by-products, with the insects subsequently used as a feed ingredient, is not yet allowed. Reared insects acting as an intermediary in this process may not necessarily result in efficiency gains, but that in itself should not present a barrier for change. If containment can be ensured, monitored, and enforced, this practice should be permitted. At this time, we are unaware of any published literature on the capacity of insects to transfer any material of animal origin on which the insects are reared to the next step in the chain. It can be hypothesized that certain types of processing could affect such transmission, in particular the process of starving the insects prior to harvest to enable the insects to empty their gut contents. If transfer of DNA and disease via insects were shown to be absent, or could be controlled via processing, the need for containment would be less relevant. Other processing methods such as enzymatic hydrolysis should also be assessed [40]. b. Species differentiation. Farmed animals are defined as one category and in a broad sense. This definition includes animals for food and non-food production (fur animals). A clear differentiation is made between ruminants and non-ruminants in terms of permitted feed materials, but this broad definition necessitates highly comparable routes of gradual relaxation for most farmed animals despite biological differences. We propose that this broad definition of farmed animals is further diversified into different groups and species of animals, primarily classified according to their feeding behaviour and their susceptibility to TSEs. We propose that a new, diversified classification of farmed animals is used, specifically: herbivores susceptible to TSEs (ruminants); minor herbivores (horses and relatives, rabbits); omnivorous and non-herbivorous terrestrial animals (pigs, poultry); vertebrate aquatic animals (fish); invertebrate aquatic animals (crustaceans, molluscs); and other invertebrates (insects, ringed worms). The latter group (invertebrates, most notably insects) could feed on a variety of different materials depending on the biological dietary preferences of the specific species in question, such as wood, manure, and soil. The species-to-species ban can be applied less strictly, to some extent, for animals in the proposed classes with a larger genetic distance from humans (invertebrates, fish), due to a minimal risk of transmission of TSEs. Some reared insect species can exhibit cannibalism-like behaviour under certain (natural) conditions such as nutritional stress, and this has been suggested as a potential transmission route for the spread of disease in such facilities [41]. Although research on that behaviour has focused on spread via cannibalism of infected live hosts rather than via (processed) protein meal of the same species, this area, and its potential consequences in terms of disease proliferation, is largely under-investigated. The large-scale recycling of insect proteins from the same species should therefore not be encouraged. This may need to be extended to insect species within the same taxonomic family or order. This type of cannibalism has also been observed for fish [42,43] and other aquaculture animals [44], for which similar rules should apply. c. Ruminant hydrolysed proteins. Hydrolysation is a process that results in severe modification of proteins and peptides (van Raamsdonk et al., in press; [45]).
After proof of sufficient safety (inactivation of zoonoses, prions), all ruminant material should be allowed to be hydrolysed and used as a feed ingredient, with the exception of Specified Risk Material as defined in Article 3(1)(g) of Regulation (EC) No 999/2001. The required method development for both authenticity (rate of hydrolysation) and identification of the source animal is currently being finalized. d. Ruminant blood products. The use of blood products derived from ruminants in non-ruminant feeds is prohibited (Regulation (EC) 999/2001, Annex IV, Chapter 2(b)(iv)). These products (dried/frozen/liquid plasma, dried whole blood, dried/frozen/liquid red cells, haemoglobin powder) are highly processed and derived versions of animal by-products. Use as a feed ingredient for non-ruminants should be considered. 2. Increased use of alternative processing methods.
I. Hydrolysation. Material belonging to Category 3 (a) up to and including (l) can be subjected to a process of hydrolysation (Regulation (EU) 142/2011, Annex X, Chapter II, Section 5). This includes all former food products containing animal proteins still fit for human consumption but discarded for economic reasons or packaging defects. Hydrolysation is a process that results in severe modification of proteins and peptides, generating derived products which should be subject to neither the extended feed ban nor the species-to-species ban, since these products are excluded from the definition of PAPs (van Raamsdonk et al., in press). As discussed in section 2, at this time, certain subcategories of Category 3 - most importantly subcategory (m) [parts of Rodentia and Lagomorpha] - are not allowed to be used as a basis for the production of gelatine, hydrolysed proteins, or dicalcium or tricalcium phosphate. We advocate that these options be reassessed. See also item 1c above. II. Fermentation, like many other production procedures, is a legally recognised method, but fermented materials do not have a separate position within the animal by-products legislation aside from petfood. Since fermentation severely alters the structure of the protein, it could be recognised as a process resulting in products which would not be subjected to the species-to-species ban (part of Regulation (EU) 142/2011, Annex X). The subcategories (m), (n), and (o) of Category 3 discussed in the item above concerning hydrolysation may also be suitable for fermentation. III. Finally, other novel processing options might result in pure (technical) materials, such as vitamins, minerals, or specific fatty acids, that could be used to supplement animal feed. A risk assessment should be conducted to determine whether the resulting specified materials should be subjected to the species-to-species ban. Research on the production of new technical materials from novel food and feed sources should be encouraged and funded. 3. The standard technical limit for monitoring methods (Regulation (EC) 152/2009, Annex VI: 0.1%) could be diversified depending on the type of material. Higher contamination levels could apply to single unmixed materials intended as feed ingredients. After mixing into the final feed, the initial contamination should be diluted to a final level below 0.1% (see the sketch after this list). Initial contamination levels of up to 2% have been evaluated in a Quantitative Risk Assessment [46]. Examples are a PAP of a specified source species containing PAP material of another source species, and vegetal ingredients containing minor levels of animal proteins. The aim of a final contamination level, i.e., for compound feeds ready for the intended feeding, would be compliance with the species-to-species ban. This ban is based on both ethical principles (avoiding cannibalism) and the prevention of zoonotic diseases. Higher limits of detection for single unmixed animal products would only be acceptable in the framework of sufficient risk management. 4. Catering waste is prohibited in a general sense (Regulation (EC) 1069/2009, Article 11, part 1(b)). Catering waste is currently defined as: "all waste food, including used cooking oil originating in restaurants, catering facilities and kitchens, including central kitchens and household kitchens" (Regulation (EU) 142/2011). This definition shows, in contrast to all other subcategories of Category 3, that catering waste is a mixture of products of plant and animal origin.
A distinction should be made between professional and domestic handling of food, and between the resulting types of materials. Other countries such as Korea and Japan have shown that incorporating catering waste into animal diets can be done safely if appropriate controls are implemented [47,48]. I. Food materials which have not been served to consumers in restaurants or canteens - i.e., those materials that have not left the kitchens of restaurants, catering facilities, or other institutions - should be treated in the same manner as 'former foodstuffs' from industrial facilities and thus become available for processing as a feed ingredient. Proper assurance of all safety measures, the ruminant ban, and the species-to-species ban remain applicable, unless processing results in products which should not be subjected to these bans (e.g., hydrolysation). Correct and sufficient handling by the food business operator of the establishment has to be demonstrated, and the operator and/or processor would also have to register with the national authority as a feed-producing company, in line with current requirements for food companies producing former foodstuffs for feed. II. Domestic food waste and all food waste resulting from finally prepared and served dishes in restaurants and other catering facilities are currently prohibited as feed ingredients. In a fully circular agronomy, these types of food products should be eligible for specified reuse, reprocessing, or remanufacturing as feed ingredients. Proper assurance of all safety measures, the ruminant ban, and the species-to-species ban remain applicable, unless processing results in products which are not subjected to these bans (e.g., hydrolysation). Monitoring and containment options would need to be developed.
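As an illustrative sketch of the dilution logic in item 3 above: an unmixed ingredient carrying up to 2% cross-contamination can still comply at the level of the compound feed, provided its inclusion rate brings the final level below 0.1%. The 5% inclusion rate below is a hypothetical illustration value, not a figure from the regulation.

```python
# Dilution check for item 3: initial contamination in a single unmixed ingredient
# versus the 0.1% limit in the final compound feed.
ingredient_contamination = 0.02  # 2%, the upper level evaluated in the QRA [46]
inclusion_rate = 0.05            # hypothetical fraction of the ingredient in the feed

final_level = ingredient_contamination * inclusion_rate
print(f"final contamination: {final_level:.4%}")  # 0.1000%, exactly at the limit
assert final_level <= 0.001
```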
Concluding remarks
In this article, we have presented a list of potential relaxations that provides a range of opportunities for the valorisation of animal by-products of the food production chain. In view of the major transition towards full sustainability and circularity of the feed industry, major steps in the application of animal by-products are needed. Ideally, the currently highly complex system of rules on the use of animal proteins would be reworked and simplified substantially, but small amendments may also allow for more circular opportunities. Promising instruments for higher valorisation of animal by-products are biological solutions, predominantly a diversification and extension of commodities for rearing insects, and technological solutions, primarily hydrolysation for producing modified peptides not reflecting the characteristics of the original proteins. Especially for the emerging EU insect-rearing sector, a wider variety of suitable feed materials should be permitted so as to avoid competing with feed for conventional livestock animals - which would negate the insects' potential for valorising products otherwise unsuitable for feed. Finally, we have argued that several current 'zero-tolerance' limits can be relaxed by permitting the presence of certain materials to some degree, which is anticipated to result in less waste.
Author contribution statement
All authors listed have significantly contributed to the development and the writing of this article.
Funding statement
This work was supported by Netherlands Ministry of Agriculture, Nature and Food Quality, Topsector AgriFood LWV19091 (BO-64-001-013).
Data availability statement
No data was used for the research described in the article.
Declaration of interests statement
The authors declare no conflict of interest. | 2023-03-08T16:17:23.281Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "203230975a0708e171557739e0cd0b18a11d3c05",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.heliyon.2023.e14021",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8d4d748875f5b4c953bb5e99f251749042b9a67d",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244359992 | pes2o/s2orc | v3-fos-license | Identification of Renoprotective Phytosterols from Mulberry (Morus alba) Fruit against Cisplatin-Induced Cytotoxicity in LLC-PK1 Kidney Cells
The aim of this study was to explore the protective effects of bioactive compounds from the fruit of the mulberry tree (Morus alba L.) against cisplatin-induced apoptosis in LLC-PK1 pig kidney epithelial cells. Morus alba fruit is a well-known edible fruit commonly used in traditional folk medicine. Chemical investigation of M. alba fruit resulted in the isolation and identification of six phytosterols (1–6). Their structures were determined as 7-ketositosterol (1), stigmast-4-en-3β-ol-6-one (2), (3β,6α)-stigmast-4-ene-3,6-diol (3), stigmast-4-ene-3β,6β-diol (4), 7β-hydroxysitosterol 3-O-β-d-glucoside (5), and 7α-hydroxysitosterol 3-O-β-d-glucoside (6) by analyzing their physical and spectroscopic data as well as liquid chromatography/mass spectrometry data. All compounds displayed protective effects against cisplatin-induced LLC-PK1 cell damage, improving cisplatin-induced cytotoxicity to more than 80% of the control value. Compound 1 displayed the best effect at a relatively low concentration, reducing the percentage of apoptotic cells following cisplatin treatment. Its molecular mechanisms were identified using Western blot assays. Treatment of LLC-PK1 cells with compound 1 decreased the upregulated phosphorylation of p38 and c-Jun N-terminal kinase (JNK) following cisplatin treatment. In addition, compound 1 significantly suppressed cleaved caspase-3 in cisplatin-induced LLC-PK1 cells. Taken together, these findings indicated that cisplatin-induced apoptosis was significantly inhibited by compound 1 in LLC-PK1 cells, thereby supporting the potential of 7-ketositosterol (1) as an adjuvant candidate for treating cisplatin-induced nephrotoxicity.
Introduction
Cis-diamminedichloroplatinum II (cisplatin) is one of the most common platinum chemotherapeutic agents used for the treatment of many types of solid tumors [1]. In more than 30% of patients taking cisplatin, a variety of side effects, including allergic reactions, ototoxicity, myelotoxicity, nephrotoxicity, and gastrotoxicity, have been reported [2]. Of these side effects, nephrotoxicity is a dose-limiting one that makes patients unable to continue cisplatin treatment [3]. Cisplatin can seriously damage the S3 segment of the proximal tubules, causing kidney dysfunction [4]. Forced diuresis using mannitol, magnesium supplementation, and kidney-protective therapeutic approaches using enzymes and compounds that can help treat or prevent cisplatin-induced nephrotoxicity have been reported [5].
In addition, the effects of plant extracts and plant-derived natural products on cisplatin-induced nephrotoxicity have been studied [6]. However, the detailed molecular mechanisms underlying their protective effects remain unclear. In previous studies using kidney cells, treatment with cisplatin (16-300 µM) induced cell death and activated cellular signaling pathways, including p53, mitogen-activated protein kinases (MAPKs), and caspases [7,8], which can be molecular targets for the mechanism of nephroprotection.
The mulberry tree (Morus alba L.), also known as white mulberry, belongs to the family Moraceae. Morus alba fruit is a well-known edible fruit commonly used in traditional folk medicine to improve diabetes and eyesight [9]. Its leaves are also consumed as fodder for silkworms (Bombyx mori L.) and used in health products such as tea and beverages [10]. In previous studies on M. alba, extracts from its fruit have exhibited pharmacological activities, including anti-microbial [11], anti-inflammatory [12], anti-obesity [13,14], anticancer [15], and anti-oxidant activities [12,16,17]. Previous phytochemical investigations of M. alba fruit have reported a variety of bioactive secondary metabolites such as chlorogenic acid, ferulic acid, protocatechuic acid, apigenin, quercetin, and rutin [18]. In our ongoing endeavor to find bioactive products from diverse natural resources [19-22], we have carried out chemical investigations of many natural materials to identify bioactive compounds exhibiting protective effects against cisplatin-induced nephrotoxicity. As a result, we have identified several kidney-protective phytochemicals, such as ginsenoside Rb1 from Panax ginseng [23], ergosterols from the fruiting bodies of the mushroom Pleurotus cornucopiae [24], and flavonoids from the peat moss Sphagnum palustre [25]. Recently, we also identified butyl pyroglutamate, a renoprotective compound, from M. alba fruit [26]. Its renoprotection was mediated by inhibition of MAPK protein expression and cleaved caspase-3 protein expression [26].
To extend our previous studies, we further investigated an ethanol extract of M. alba fruit to identify potential renoprotective compounds in the present study. Phytochemical analysis of the M. alba fruit extract led to the isolation of six phytosterols (1-6). Their structures were determined by detailed analyses of their nuclear magnetic resonance (NMR) spectroscopic and physical data as well as mass spectrometry (MS) data from liquid chromatography (LC)/MS analyses. Herein, we report the isolation and structural characterization of these six compounds along with their protective effects against cisplatin-induced cell death and their underlying mechanism of action in LLC-PK1 cells.
Cell Culture and Cell Viability Assay
LLC-PK1 cells, a pig kidney epithelial cell line, were purchased from the American Type Culture Collection (ATCC, Manassas, VA, USA). These cells were grown at 37 °C in a humidified incubator with 5% CO₂ in air using Dulbecco's modified Eagle medium (ATCC) supplemented with 1% penicillin/streptomycin, 10% fetal bovine serum (Invitrogen, Grand Island, NY, USA), and 4 mM l-glutamine. Cells were seeded into 96-well culture plates at a density of 1 × 10⁴ cells/mL. After 24 h, cells were pretreated with 2.5, 5, 10, 25, and 50 µM of the test samples for 2 h at 37 °C. Next, 25 µM cisplatin was added to the cells. After incubation for 24 h at 37 °C, cell viability was measured using an EZ-Cytox assay kit (Daeillab Service, Seoul, South Korea) according to the method described in a previous study [26].
Image-Based Cytometric Assay
Annexin V Alexa Fluor 488 staining was performed to determine the percentage of apoptotic cells. Briefly, cells were seeded in six-well plates at a density of 4 × 10⁵ cells/mL. After 24 h, cells were pretreated with 2.5 and 5 µM compound 1 for 2 h at 37 °C. Next, 25 µM cisplatin was added to the cells. After incubation for 24 h at 37 °C, cells were stained with Annexin V Alexa Fluor 488 (Invitrogen, Temecula, CA, USA). The percentage of apoptotic cells was analyzed using a Tali image-based cytometer (Invitrogen, Temecula, CA, USA) according to the method described in a previous study [26].
Statistical Analysis
All data, including cell viability, percentage of apoptotic cells, and protein expression, are presented as mean values with standard deviation (SD). All assays were performed in triplicate and repeated at least three times. Because only a small number of repetitions was included for each cell experiment, a non-parametric analysis method was adopted for the statistical analysis. The Kruskal-Wallis test was used for the statistical analysis of each variable. The SPSS statistical package (IBM SPSS Statistics version 21, Boston, MA, USA) was used for all analyses. Statistical significance was considered at p < 0.05.
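For readers without SPSS, the following is a minimal sketch of the same non-parametric comparison using SciPy; the triplicate viability values are hypothetical illustration data, not the measured results.

```python
# Kruskal-Wallis test across three treatment groups (values hypothetical).
from scipy.stats import kruskal

control   = [100.1, 98.7, 101.2]   # % viability, triplicates
cisplatin = [59.8, 61.3, 60.4]     # 25 uM cisplatin alone
treated   = [97.5, 99.0, 96.8]     # cisplatin + 5 uM compound 1

stat, p = kruskal(control, cisplatin, treated)
print(f"H = {stat:.2f}, p = {p:.4f}")  # significant if p < 0.05
```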
Compound 1 Inhibits Cisplatin-Induced Apoptosis in LLC-PK1 Cells
We evaluated the effects of compound 1 on cisplatin-induced apoptotic cell death using Annexin V Alexa Fluor 488 staining. As shown in Figure 3A, apoptotic cells were stained with Annexin V Alexa Fluor 488 (green fluorescence). The percentage of apoptotic cells was increased by 25 µM cisplatin from 2.13% ± 0.19% to 46.41% ± 3.21%, whereas it decreased to 13.74% ± 1.31% and 4.86% ± 0.49% when cells were pretreated with 10 µM and 25 µM of compound 1, respectively (Figure 3B).
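A minimal sketch of the readout behind these percentages is given below: the cytometer reports the share of Annexin V-positive (green) cells among all counted cells. All counts are hypothetical illustration values chosen to track the reported figures.

```python
# Apoptotic percentage = Annexin V-positive cells / total counted cells.
counts = {
    "control":                (5000, 107),    # (total cells, Annexin V-positive)
    "25 uM cisplatin":        (5000, 2320),
    "cisplatin + compound 1": (5000, 243),
}
for group, (total, positive) in counts.items():
    print(f"{group}: {100.0 * positive / total:.2f}% apoptotic")
```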
Compound 1 Inhibits Expression Levels of p38, JNK, and Cleaved Caspase-3 in Cisplatin-Treated LLC-PK1 Cells
We also evaluated the possible molecular mechanisms of compound 1, focusing on p38, JNK, and cleaved caspase-3, using a Western blot analysis. Treatment with 25 µM cisplatin increased the expression levels of phosphorylated p38, phosphorylated JNK, and cleaved caspase-3. However, the expression levels of all these proteins in LLC-PK1 cells were decreased by treatment with 2.5 and 5 µM compound 1 in a dose-dependent manner (Figure 4A). Bar graphs show the expression levels of phosphorylated p38, phosphorylated JNK, and cleaved caspase-3 normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) (Figure 4B-D).
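As a sketch of the densitometry step behind Figure 4B-D: each target band intensity is divided by the GAPDH signal from the same lane before comparing treatments. The intensities below are hypothetical arbitrary units, not measured data.

```python
# Band intensities normalized to the GAPDH loading control (values hypothetical).
lanes = {
    "control":                {"p-p38": 0.40, "GAPDH": 1.00},
    "25 uM cisplatin":        {"p-p38": 1.60, "GAPDH": 1.05},
    "cisplatin + compound 1": {"p-p38": 0.70, "GAPDH": 0.98},
}
for name, lane in lanes.items():
    print(name, round(lane["p-p38"] / lane["GAPDH"], 2))
```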
Discussion
Many drugs, including antifungal agents, anti-retroviral drugs, aminoglycoside antibiotics, and anticancer drugs, are known to cause nephrotoxicity [32]. Various assays have been used to assess the protective effects of plant extracts and plant-derived natural products against drug-induced cytotoxicity in kidney cells. The primary assay to identify an effective substance is based on measurement of cell viability. In the present study, we identified cell-protective compounds from M. alba fruit using the EZ-Cytox assay to measure the metabolic activities of cells in the presence of cisplatin. All compounds displayed protective effects against cisplatin-induced LLC-PK1 cell damage, improving cisplatin-induced cytotoxicity to more than 80% of the control value. Compound 1 displayed the best effect at a relatively low concentration. The LLC-PK1 cell viability that was reduced by 25 µM cisplatin to 60% increased to nearly 100% after co-treatment with 5 µM compound 1. In our previous study, 10 µM butyl pyroglutamate isolated from M. alba fruit improved the cell viability by 83%, which was more effective than N-acetylcysteine [33]. N-acetylcysteine has been used as a positive control in cisplatin-induced renal toxicity studies [34,35].
Oxidative stress, apoptosis, and inflammation are three major mechanisms underlying cisplatin-induced cytotoxicity. Among these, the most well-known mechanism is the apoptosis pathway [35]. It is known that cisplatin-induced apoptotic cell death in renal tubular cells is associated with both mitochondrial-mediated and death-receptor-mediated pathways [36]. Both these pathways ultimately induce apoptosis through caspase-3 activation [37]. Additionally, it has been shown that JNK and p38 regulate tumor necrosis factor-α (TNF-α), which plays an important role in cisplatin-induced apoptosis [38,39]. In the present study, compound 1 had a protective effect against apoptotic cell death. This result is consistent with the improved cell viability of compound-1-treated cells. The protective effect of compound 1 on LLC-PK1 cells might be partly due to inhibition of apoptosis by cisplatin. In addition, treatment with cisplatin increased the expression levels of phosphorylated p38, phosphorylated JNK, and cleaved caspase-3, whereas these expression levels were decreased in a dose-dependent manner by treatment of LLC-PK1 cells with compound 1. These observations indicated that compound 1 inhibited apoptosis through the inhibition of phosphorylated JNK and p38 as well as the inhibition of the expression level of cleaved caspase-3 ( Figure 5). Therefore, the anti-apoptotic effect might be responsible for the protective effect of compound 1 against cisplatin-induced cell death.
Conclusions
In summary, as part of an ongoing research project to discover bioactive natural products [40][41][42][43][44][45], we identified renoprotective phytosterols from the fruit of the mulberry tree (M. alba) that ameliorated cisplatin-induced cytotoxicity. All compounds displayed protective effects against cisplatin-induced damage in LLC-PK1 cells. Compound 1 displayed the best effect at a relatively low concentration. In addition, we demonstrated that compound 1 blocked cisplatin-induced LLC-PK1 cell apoptosis by inhibiting expression levels of phosphorylated p38, phosphorylated JNK, and cleaved caspase-3. However, additional detailed mechanisms responsible for the renoprotective effects of compound 1 need to be studied to support the potential of 7-ketositosterol (1) as an adjuvant candidate for treating cisplatin-induced nephrotoxicity.
Data Availability Statement:
The data presented in this study are available in the article and supplementary material.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-11-19T16:17:26.115Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "025334c6479d9c54334606047f2f191793e49141",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2223-7747/10/11/2481/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "08925568ccc98ffb375ed75f6f558f0a53ce65de",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6754227 | pes2o/s2orc | v3-fos-license | Diabetes Onset at 31–45 Years of Age is Associated with an Increased Risk of Diabetic Retinopathy in Type 2 Diabetes
This hospital-based, cross-sectional study investigated the effect of age of diabetes onset on the development of diabetic retinopathy (DR) among Chinese type 2 diabetes mellitus (DM) patients. A total of 5,214 patients with type 2 DM who were referred to the Department of Ophthalmology at the Shanghai First People’s Hospital from 2009 to 2013 was eligible for inclusion. Diabetic retinopathy status was classified using the grading system of the Early Treatment Diabetic Retinopathy Study (ETDRS). Logistic and hierarchical regression analyses were used to identify independent variables affecting the development of DR. Upon multiple logistic regression analysis, patient age at the time of diabetes onset was significantly associated with development of DR. Further, when the risk of retinopathy was stratified by patient age at the onset of diabetes, the risk was highest in patients in whom diabetes developed at an age of 31–45 years (odds ratio [OR] 1.815 [1.139–2.892]; p = 0.012). Furthermore, when patients were divided into four groups based on the duration of diabetes, DR development was maximal at a diabetes onset age of 31–45 years within each group. A diabetes onset age of 31–45 years is an independent risk factor for DR development in Chinese type 2 DM patients.
Risk factors affecting the development of DR. Upon univariate logistic regression analysis, age, age of onset, duration of diabetes, SBP, DBP, HbA1c, MA and eGFR were identified as potential risk factors. We then examined a correlation matrix to exclude correlated factors such as age and DBP. Accordingly, age of onset, duration of diabetes, SBP, HbA1c, MA and eGFR were entered into the multiple logistic regression model. Upon multiple logistic regression analysis, age of onset, duration of diabetes, SBP, HbA1c, and MA were independent risk factors for DR (Table 2, Model 1).
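The analytic sequence described above — univariate screening, a collinearity check via the correlation matrix, then multiple logistic regression — can be sketched in a few lines of Python. This is a generic illustration, not the authors' code; the file and column names (dm_cohort.csv, DR, onset_age, and so on) are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame, one row per patient; the file and column names
# are illustrative, not those of the original study.
df = pd.read_csv("dm_cohort.csv")
candidates = ["age", "onset_age", "duration", "SBP", "DBP", "HbA1c", "MA", "eGFR"]

# Step 1: univariate screening, one logistic model per candidate predictor.
for var in candidates:
    m = sm.Logit(df["DR"], sm.add_constant(df[[var]])).fit(disp=0)
    print(var, "OR:", round(float(np.exp(m.params[var])), 3),
          "p:", round(float(m.pvalues[var]), 4))

# Step 2: correlation matrix to drop collinear candidates (e.g., age vs. onset age).
print(df[candidates].corr().round(2))

# Step 3: multiple logistic regression with the retained predictors.
retained = ["onset_age", "duration", "SBP", "HbA1c", "MA", "eGFR"]
model = sm.Logit(df["DR"], sm.add_constant(df[retained])).fit(disp=0)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% CIs on the OR scale
```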
To stratify the risk of DR, patients were divided in terms of age at diabetes onset, as follows: ≤ 30 years, 31-45 years, 46-60 years, and ≥ 61 years. A diabetes onset age of 31-45 years was associated with an increased risk of DR. The 31-45-year age group was at the highest risk of DR, thus 1.815-fold (OR 1.815 [1.139-2.892]; p = 0.012) that of patients in the lowest age group (Table 2, Model 2).
Effects of the age of onset of diabetes and diabetes duration on the prevalence of DR.
Patients were divided into four groups according to duration of diabetes: ≤ 5 years, 6-10 years, 11-15 years, and > 15 years. We calculated the prevalence of DR by age at diabetes onset (≤ 30 years, 31-45 years, 46-60 years, and ≥ 61 years) in the groups differing in terms of diabetes duration. When the duration was ≤ 5 years, 375 DR patients (16.3%) were evident among the total of 2,302 patients, the proportions of whom at each age of onset were 24
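Stratified prevalence of this kind is a straightforward cross-tabulation; a minimal sketch follows, reusing the hypothetical data frame and column names from the earlier example (DR coded 0/1).

```python
import pandas as pd

df = pd.read_csv("dm_cohort.csv")  # hypothetical file and columns, as above
df["onset_grp"] = pd.cut(df["onset_age"], bins=[0, 30, 45, 60, 200],
                         labels=["<=30", "31-45", "46-60", ">=61"])
df["dur_grp"] = pd.cut(df["duration"], bins=[0, 5, 10, 15, 100],
                       labels=["<=5", "6-10", "11-15", ">15"])

# DR prevalence (%) within each duration-by-onset-age cell.
prev = df.groupby(["dur_grp", "onset_grp"])["DR"].mean().mul(100).round(1)
print(prev.unstack())
```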
Discussion
In this cross-sectional study of Chinese patients with type 2 diabetes, multiple logistic regression showed that the age at diabetes onset was significantly associated with the development of retinopathy, independent of the duration of diabetes, SBP, HbA1c level, and MA; these results are consistent with those of other studies 4-9 . Previous studies suggested that early-onset type 2 diabetes was more aggressive than late-onset disease 2,11,12 . In 2008, Wong et al. 8 reported that an age at type 2 diabetes onset < 45 years was associated with an increased inherent susceptibility to DR; the cited authors matched the duration of diabetes and the extent of glycemic control. Recently, a prospective cross-sectional study of an Asian cohort found that patients with younger-onset type 2 diabetes (diagnosed before the age of 40 years) had higher mean levels of HbA1c, and a greater prevalence of retinopathy, than those with late-onset diabetes (diagnosed at age ≥ 40 years) 10 . No study has yet addressed whether diabetes onset before 45 years of age is associated with a higher risk of retinopathy, independent of disease duration and the extent of hyperglycemia.
In the present study, we sought a definite relationship between age at diabetes onset and DR. We divided our patients into four groups by age at diabetes onset: ≤ 30, 31-45, 46-60, and ≥ 61 years. Next, based on the duration of diabetes, each group was divided into four subgroups: ≤ 5 years, 6-10 years, 11-15 years, and > 15 years. We found that a diabetes onset age of 31-45 years was associated with an increased risk of DR development, independent of the duration of diabetes. However, the underlying mechanism remains unclear. Several possible explanations may be advanced. Some studies have found that the level of vascular endothelial growth factor (VEGF) in diabetes patients varies with age, and VEGF expression after stimulation is higher in younger than in older patients 13,14 . When hyperglycemia is in play, VEGF promotes pathological retinal angiogenesis and fibrovascular proliferation during development of DR 15,16 . Therefore, we speculate that a gene such as that encoding VEGF may be more active in patients with diabetes onset at 31-45 years of age, predisposing such patients to development of DR. In addition, "metabolic memory" may contribute to an increased risk of DR. Patients developing diabetes at 31-45 years may prioritize personal and career development rather than their health, and are usually diagnosed after a long-term history of hyperglycemia. Many studies have shown that prolonged hyperglycemia causes injuries to the retinal vasculature that are not reversed even upon subsequent sustained glycemic control; such impairments may play pivotal roles in "metabolic memory", rendering patients more susceptible to the complications of diabetes [17][18][19][20] . Last but not least, DM individuals diagnosed between 31 and 45 years of age are usually under high-level psychological pressure [21][22][23] . Such stress may explain why DR is more likely in patients in whom diabetes develops at an age of 31-45 years. Our findings have important implications for monitoring and intervention in type 2 DM individuals with diabetes onset at 31-45 years of age, when their working lives are at their most productive. Our research had several limitations. Firstly, the work was performed in a single hospital and all patients were Chinese; care should be taken when attempting to extrapolate these data to other patient populations. Secondly, a cross-sectional study cannot identify cause-and-effect relationships; future multicenter, longer-term longitudinal studies are required to verify our results. Thirdly, color photographs were acquired as a macula-centered fundus view supplemented by photographs of lesions; standard 7-field fundus photographs should be used in future studies.
In summary, a diabetes onset age of 31-45 years is an independent risk factor for development of DR in type 2 DM patients. It is important to conduct stringent monitoring and intervention in type 2 DM individuals in whom the diabetes onset age is 31-45 years, to delay the development and progression of DR. Data Collection. Medical record review was undertaken by a single researcher; demographic details and physical and biochemical data were recorded on a form. Demographic details included age, sex, age at diagnosis, duration of diabetes, and the general and ophthalmological medical histories. Physical examination included systolic blood pressure (SBP), diastolic blood pressure (DBP), waist circumference (WC), and body-mass index (BMI). Laboratory data included the glycosylated hemoglobin (HbA1c) level; MA status; the levels of fasting plasma glucose (FPG), triglycerides (TG), total cholesterol (TC), C-reactive protein, high-density lipoprotein (HDL), and low-density lipoprotein (LDL); and the estimated glomerular filtration rate (eGFR). The eGFR was calculated using the equation of the Modification of Diet in Renal Disease study 24 .
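Reference 24 points to the MDRD study equation. A commonly used 4-variable form is sketched below; the exact coefficients differ between MDRD variants (including versions re-calibrated for Chinese populations), so the constants here are illustrative rather than the study's confirmed choice.

```python
def egfr_mdrd(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from the 4-variable MDRD equation.

    This is the widely used IDMS-traceable form; the cited study may have
    used a variant (e.g., with a coefficient re-calibrated for Chinese
    populations), so treat the constants as illustrative.
    """
    egfr = 175.0 * (scr_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    return egfr

# Example: a 52-year-old woman with serum creatinine 1.1 mg/dL.
print(round(egfr_mdrd(1.1, 52, female=True), 1))
```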
Methods
Each patient underwent a comprehensive ophthalmologic examination that included a review of ophthalmologic history, measurement of visual acuity and intraocular pressure (IOP), slit lamp biomicroscopy, and fundoscopic examination through dilated pupils via fundus photography, with a reading center grading the retinopathy. Color photographs were acquired with a Zeiss Visucam 200 digital fundus camera (Carl Zeiss Meditec AG, Jena, Germany) using a macula-centered field of view; supplementary fundus photographs of lesions were taken for those who showed any evidence of DR. Diabetic retinopathy status was graded using the system of the Early Treatment Diabetic Retinopathy Study (ETDRS): 1) no DR; 2) nonproliferative disease (mild, moderate, severe); and 3) proliferative. Whenever the two eyes were graded differently, the more advanced grade was chosen. | 2018-04-03T02:53:11.227Z | 2016-11-29T00:00:00.000 | {
"year": 2016,
"sha1": "d5f227257f9a28d958d4ca2712ed3911205b0356",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/srep38113",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d5f227257f9a28d958d4ca2712ed3911205b0356",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
137901943 | pes2o/s2orc | v3-fos-license | Magnetocaloric effect and magnetic refrigeration in La0.7Ca0.15Sr0.15Mn1-xGaxO3 (0 ≤ x ≤ 0.1)
In this paper we report magnetic and magnetocaloric effect (MCE) properties of La0.7(Ca,Sr)0.3Mn1-xGaxO3 (x = 0, 0.025, 0.05, 0.075 and 0.1) manganites. Our compounds were prepared by a sol-gel method and characterized by X-ray diffraction and magnetization measurements. The temperature dependence of the magnetization M(T) reveals a decrease of M with increasing Ga content. The same behavior was observed for the Curie temperature T_C. The MCE was calculated according to the Maxwell relation based on magnetic measurements. The magnetic entropy change (ΔS_M) reaches a maximum value which decreases with increasing Ga content: it is found to decrease from 5.15 J/kgK for x = 0 to 1.86 J/kgK for x = 0.1 under an applied magnetic field of 5 T. Thus, the studied samples could be considered good materials for magnetic refrigeration over a large temperature interval near room temperature.
Introduction
The magnetocaloric effect is defined as the response of a magnetic material to an applied magnetic field, apparent as a change in its temperature. It was discovered by Warburg [1] in 1881 and is intrinsic to all magnetic materials. In the case of a ferromagnetic material, the material heats up when it is magnetized and cools down when the magnetic field is removed. The magnitude of the MCE of a magnetic material is characterized by the adiabatic temperature change ΔT_ad, or by the isothermal magnetic entropy change ΔS_M due to a varying magnetic field. The nature of the MCE in a solid is the result of the entropy variation due to the coupling of the magnetic spin system with the magnetic field [2].
Magnetic refrigeration is a method of cooling based on the MCE. The heating and cooling caused by a changing magnetic field are similar to the heating and cooling of a gaseous medium in response to compression and expansion. It has since been shown that the heating and cooling in the magnetic refrigeration process are proportional to the size of the magnetic moments and to the applied magnetic field. That is why research on magnetic refrigeration has been conducted almost exclusively on heavy rare-earth elements and their compounds [3,4]. Among the rare-earth metals, gadolinium was found to show the highest MCE [4]. Since the cost of this metal as a magnetic refrigerant is quite high (~4000 $/kg), further efforts to discover new materials exhibiting a large MCE in response to low applied fields are of significant importance. Among them, perovskite-type manganese oxide materials [5][6][7][8][9][10][11] having large MCEs are believed to be good candidates for magnetic refrigeration at various temperatures.
The manganite parent compound, LaMnO3, is an antiferromagnetic insulator (AFI) characterized by a superexchange coupling between Mn3+ sites facilitated by a single e_g electron subjected to strong correlation effects. Substitution of the La3+ ion by a divalent or a monovalent ion results in mixed valence states of Mn (Mn3+ and Mn4+), where Mn4+ lacks the e_g electron, and hence the itinerant hole associated with the Mn4+ ion may hop to Mn3+. The hopping is favorable only when the localized spins of these ions are parallel, and this is the essence of the double exchange (DE) mechanism [12], which is expected to explain the ferromagnetic metallic nature of manganites below the metal-insulator transition temperature T_MI. Millis et al. [13] stressed that the physics of manganites is dominated by the interplay between a strong electron-phonon coupling of the e_g electrons via Jahn-Teller effects and a large Hund's coupling that optimizes the electronic kinetic energy through the formation of ferromagnetic order.
It is interesting to note that, when compared with Gd (the most used material for magnetic refrigeration) and other candidate materials, the manganites are more convenient to prepare and exhibit higher chemical stability, as well as the higher resistivity that is favorable for lowering eddy current heating. In addition, they have much smaller thermal and field hysteresis than any rare earth and 3d-transition metal based alloy. Moreover, this material is the cheapest among the existing magnetic refrigerants. These superior features may make it more promising for future magnetic refrigeration technology.
In this context, this paper presents magnetic and MCE results for La0.7Ca0.15Sr0.15Mn1-xGaxO3 manganite compounds (0 ≤ x ≤ 0.1).
Experimental details
Citric acid was used as a gelling agent for the La, Ca, Sr and Mn ions, and the obtained gel was subjected to successive heat treatments at 600 °C for 4 h. After that, the microcrystalline powder was pelletized, pressed into disks and sintered at 900 °C for 48 h in air. A final heat treatment was performed at 1000 °C for 12 h in air. The crystal structure of the bulk samples was determined by an X-ray diffractometer (XRD) with CuKα radiation. XRD data were refined by means of the Rietveld method using the FullProf refinement program [14]. The magnetic measurements versus temperature were performed using an extraction magnetometer (Néel Institute, Grenoble) at both high and low temperatures. The magnetization curves were obtained under applied magnetic fields up to 5 T in the temperature range 100-400 K; the MCE results were calculated according to the Maxwell relation using isothermal magnetization measurements.
Results and discussions
Structural results and phase identification, carried out by X-ray diffraction, were discussed in previous work [15]. We found that all samples crystallize in the rhombohedral structure (R-3c space group) and that the unit cell parameters are not affected by Ga doping. The inset of Figure 1 shows the isothermal magnetization measured at 5 K. With increasing applied magnetic field, the magnetization increases sharply and then tends to saturate for µ0H ≥ 1.5 T. The saturation magnetization (M_S) can be obtained from an extrapolation of the high-field M-H curve to H = 0; the obtained M_S = 3.14 µ_B is close to the theoretical value of M_S = 3.3 µ_B.
The effective magnetic moment was calculated to be µ_eff = 4.59 µ_B using the relation ( ).
Temperature dependence of the magnetization (M-T) measured at 0.05 T for La0.7Ca0.15Sr0.15MnO3 (x = 0) is shown in Figure 2. The M-T curve exhibits a PM-FM phase transition. The Curie temperature (T_C), defined by the minimum in dM/dT, has been determined to be T_C = 336.5 K.
Figure 3 shows the isotherms recorded in the applied field range of 0-5 T for x = 0.0 and 0.05. As in other manganite compounds [17,18], the M(H) curves reach saturation values at high magnetic fields, which is considered a result of the rotation of the magnetic domains under the action of the applied magnetic field.
The Banerjee criterion has been frequently used to check the nature of the magnetic phase transition in manganites [19,20]. According to this criterion, the positive or negative slope of the µ0H/M versus M² (Arrott plot) curves indicates whether the magnetic phase transition is second order or first order. As can be seen in Figure 4, near the paramagnetic-ferromagnetic phase transition the µ0H/M versus M² curves clearly exhibit a negative slope in the entire M² range for x = 0, which confirms the first-order nature of the transition, and a positive slope for 0 < x ≤ 0.1, which confirms the second-order nature of the transition. According to mean field theory, near the transition point µ0H/M versus M² should show a series of parallel lines at various temperatures, and the line related to T_C should pass through the origin [19,20]. Since a sharp PM-FM transition occurs around T_C, which possibly implies a large magnetic entropy change near room temperature, we performed a measurement of the MCE of the present materials.
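Applying the Banerjee criterion numerically amounts to checking the sign of the local slope of µ0H/M versus M² along each isotherm. The following is a minimal sketch with a toy M(H) curve; in real use one would loop over the measured isotherms near T_C.

```python
import numpy as np

def arrott_slopes(H, M):
    """Local slopes of the Arrott plot (mu0*H/M versus M^2) for one isotherm.

    Per the Banerjee criterion, a negative slope anywhere along the curve
    points to a first-order transition; a positive slope throughout
    indicates a second-order one.
    """
    x = M ** 2
    y = H / M
    return np.diff(y) / np.diff(x)

# Toy isotherm near T_C; in practice, use the measured M(H) data.
H = np.linspace(0.1, 5.0, 25)            # applied field mu0*H (T)
M = 60.0 * np.tanh(0.8 * H)              # illustrative magnetization (emu/g)
print("first order?", bool(np.any(arrott_slopes(H, M) < 0)))
```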
The isothermal magnetic entropy change ΔS_M(T), which is associated with the magnetocaloric effect, can be calculated from measurements of magnetization as a function of the applied magnetic field and temperature (an indirect measurement technique of the magnetocaloric effect). According to classical thermodynamics, the magnetic entropy change produced by varying the magnetic field from 0 to µ0H_max is given by the Maxwell relation [21]:

ΔS_M(T, µ0H_max) = ∫_0^{µ0H_max} (∂M(µ0H, T)/∂T)_{µ0H} d(µ0H)   (1)

For magnetization measured at discrete field and temperature intervals, the magnetic entropy change can be rewritten as follows [22]:

ΔS_M = (1/δT) [ ∫_0^{µ0H_max} M(µ0H, T + δT) d(µ0H) − ∫_0^{µ0H_max} M(µ0H, T) d(µ0H) ]   (2)

The integral in eq. 2 corresponds to the area enclosed between the isothermal magnetization curves M(H, T) and M(H, T + δT); δT is the temperature difference between two isotherms. The magnetic entropy change was determined by integrating eq. 2 numerically, and the obtained results are given in Figure 5. It is worth noting that all the samples exhibit a maximum entropy change ΔS_Mmax around their Curie temperature T_C [23][24][25]. It can be seen from Figure 5 that ΔS_Mmax decreases from 5.15 to 1.86 J/kgK when increasing the Ga content from 0 to 0.1, for an applied field of 5 T. The sharp ferromagnetic-paramagnetic phase transition for x = 0, 0.025 and 0.05 indicates a first-order transition and would promisingly imply a high entropy change, whereas the rest of the samples exhibit a second-order transition. Moreover, the Relative Cooling Power (RCP) is given by [26]:

RCP = |ΔS_Mmax| × δT_FWHM   (3)

where δT_FWHM is the full-width at half maximum of the entropy change curve (Figure 6). Refrigerants with a wide working temperature span and a high RCP are very beneficial to magnetic cooling applications.
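Numerically, eq. 2 reduces to integrating each isotherm over the field and taking finite differences in temperature. A minimal sketch, assuming the magnetization has been measured on a regular (T, H) grid, is given below; the array names are illustrative.

```python
import numpy as np

def entropy_change(T, H, M):
    """Numerical Delta S_M from a grid of magnetization isotherms (eq. 2).

    T: temperatures (K), shape (nT,); H: fields mu0*H (T), shape (nH,);
    M: magnetization in A m^2/kg (= emu/g), shape (nT, nH).
    Returns midpoint temperatures and Delta S_M in J/(kg K).
    """
    areas = np.trapz(M, H, axis=1)        # integral of M dH per isotherm
    dS = np.diff(areas) / np.diff(T)      # finite difference in temperature
    T_mid = 0.5 * (T[:-1] + T[1:])
    return T_mid, dS

# The peak |Delta S_M| near T_C and the FWHM of the dS(T) curve then give
# RCP = |Delta S_M,max| * delta T_FWHM, as in eq. 3.
```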
Conclusion
In this work we have introduced Ga substitution in small amounts at the B site of La0.7Ca0.15Sr0.15MnO3 manganites. It is shown that the substitution of Mn by Ga does not influence the structural properties. Moreover, variation of the Mn/Ga ratio has the following effects: • The Curie temperature, which is also the working point of a magnetic refrigerant, decreases from 336 to 253 K in the composition range 0 < x < 0.1. | 2019-04-29T13:06:40.603Z | 2012-06-01T00:00:00.000 | {
"year": 2012,
"sha1": "42890ecac5fcf8cb8cec90996688ec70532a39f0",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2012/11/epjconf_emm2012_00049.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5268e61796aa6bba852703968447ed0c6caf7b3d",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
14721518 | pes2o/s2orc | v3-fos-license | Potential impact of infant feeding recommendations on mortality and HIV-infection in children born to HIV-infected mothers in Africa: a simulation
Background: Although breast-feeding accounts for 15–20% of mother-to-child transmission (MTCT) of HIV, it is not prohibited in some developing countries because of the higher mortality associated with not breast-feeding. We assessed the potential impact, on HIV infection and infant mortality, of a recommendation for shorter durations of exclusive breast-feeding (EBF) and of poor compliance to these recommendations. Methods: We developed a deterministic mathematical model using primarily parameters from published studies conducted in Uganda or Kenya and took into account non-compliance resulting in mixed-feeding practices. Outcomes included the number of children HIV-infected and/or dead (cumulative mortality) at 2 years following each of 6 scenarios of infant-feeding recommendations in children born to HIV-infected women: exclusive replacement-feeding (ERF) with 100% compliance, EBF for 6 months with 100% compliance, EBF for 4 months with 100% compliance, ERF with 70% compliance, EBF for 6 months with 85% compliance, and EBF for 4 months with 85% compliance. Results: In the base model, reducing the duration of EBF from 6 to 4 months reduced HIV infection by 11.8% while increasing mortality by 0.4%. Mixed-feeding in 15% of the infants increased HIV infection and mortality respectively by 2.1% and 0.5% when EBF for 6 months was recommended, and by 1.7% and 0.3% when EBF for 4 months was recommended. In sensitivity analysis, recommending EBF resulted in the least cumulative mortality when the a) mortality in replacement-fed infants was greater than 50 per 1000 person-years, b) rate of infection in exclusively breast-fed infants was less than 2 per 1000 breast-fed infants per week, c) rate of progression from HIV to AIDS was less than 15 per 1000 infected infants per week, or d) mortality due to HIV/AIDS was less than 200 per 1000 infants with HIV/AIDS per year. Conclusion: Recommending shorter durations of breast-feeding in infants born to HIV-infected women in these settings may substantially reduce infant HIV infection but not mortality. When EBF for shorter durations is recommended, lower mortality could be achieved by a simultaneous reduction in the rate of progression from HIV to AIDS and/or HIV/AIDS mortality, achievable by the use of HAART in infants.
Background
An estimated 2.3 million children under 15 years were living with human immunodeficiency virus (HIV) infection, and 700,000 children were newly infected in 2005 alone [1]. Ninety percent of these HIV infections were acquired through mother-to-child-transmission (MTCT). Vertical transmission of the HIV virus from mother to child can occur during pregnancy, delivery or postnatal through breast-milk [2]. Rates of MTCT range from 5-25% in developed and 13-42% in developing countries [3]. Data from various studies indicate that breast-feeding may be responsible for one-third to one-half of HIV infections in infants and young children in Africa [2].
The reduction of HIV transmission during lactation is one of the most pressing global health dilemmas confronting health policy makers and HIV-infected women in many regions of the world [4][5][6]. Replacement-feeding prevents breast-milk transmission of HIV. However, in resourcelimited settings, access to replacement-feeding is hindered by costs, poor water quality and sanitation, cultural practices and stigma associated with not breast-feeding [7][8][9]. In addition, the protection offered by breast-feeding against diarrheal and respiratory diseases which cause high infant mortality rates, needs to be weighed against the risk of transmitting HIV.
It has long been recommended that women who are HIV positive should avoid breast-feeding and use replacementfeeding when it is acceptable, feasible, affordable, sustainable and safe (AFASS) [10]. In cases were this is not possible, exclusive breast-feeding is recommended for the first months of life, followed by rapid weaning as soon as it is feasible, depending on the individual woman's situation, and taking into account the possible increased risk of HIV transmission with mixed-feeding during the transition period between exclusive breast-feeding and complete cessation of breast-feeding.
Several researchers have modeled the risks and benefits of replacement versus breast-feeding for HIV-infected mothers in developing countries [6,7,[11][12][13][14][15][16][17][18]. However, these modeling studies primarily examined the impact of exclusive breast-feeding versus replacement-feeding with little attention to the recommended duration of exclusive breast-feeding or the impact of poor compliance to these recommendations.
Taking these limitations into consideration, we developed a model that examined the potential impact of different infant-feeding recommendations on the overall mortality, burden of HIV and AIDS in children less than 2 years of age, and also examined the impact of varying the duration of breast-feeding and the rate of compliance to infantfeeding recommendations. We chose a priori to derive parameter sources for this model from Uganda and Kenya, two East African countries where the epidemiology is relatively well documented. In addition, we assessed the impact of variations to the chosen parameters through a sensitivity analysis. In contrast to previous models of time-to-death as a single outcome, we chose to model both cumulative mortality and infection proportions at 2 years. Our choice of these two outcome measures was designed to address the fact that communities may be as concerned with the number of children living with HIV/ AIDS after a certain time period as they could be about the number of children dead. Further, cumulative proportions of children living with HIV/AIDS or dead represent statistics that are easy-to-interpret and understand and thus are at least as useful as time-to-event statistics (hazard or rates).
Model characteristics
We developed a compartmental, deterministic model to simulate the effects of different breast-feeding recommendations in HIV-infected women in a typical sub-Saharan setting (Figure 1). This type of model was chosen for its simplicity and the direct interpretation of results. The model simulated a population of N children born to women who were HIV-positive during pregnancy. A proportion (p) of these children were infected at birth, whilst the remaining proportion (1-p) were born HIV-negative. Though, in practice, infants' HIV status is usually based on a PCR test at 6 weeks, we assumed that infants who tested positive at 6 weeks were positive right from birth. Infants born infected with HIV (I) can later progress to develop AIDS (A). Non-infected infants were distributed according to their feeding mode into the following three compartments: exclusively breast-fed (B), mixed-fed (M), and replacement-fed (F), representing the proportions b, m, f respectively. In our model, exclusive breast-feeding was defined as feeding the infant breast-milk only, mixed-feeding was defined as feeding the infant breast-milk and other non-breast-milk liquids, and replacement-feeding meant that the infant was not given breast-milk but other non-breast-milk liquids. Non-infected infants could eventually become HIV-infected (I), and HIV-infected infants could progress to AIDS (A). Death could result from AIDS and AIDS-related causes or from non-AIDS-related causes.
The model assumed that post-natal HIV transmission occurred solely through breast-feeding with no difference in the HIV transmission rates by gender. Exclusively breast-fed infants were infected with HIV at a rate of λ B while infants receiving mixed-feeding were infected at a rate of λ M . Evidence for higher HIV transmission in infants receiving mixed-feeding when compared to exclusively breast-fed infants [19] was taken into account by using values of λ M that were higher than λ B . We assumed that pre-and intra-partum use of antiretroviral therapy (ART) by the mother did not have any effect on the cumulative 2year postpartum risk of HIV transmission through breastfeeding. The base model assumed no use of ART in the postpartum period (as this is not a common practice in many developing countries) [10]. This assumption however was relaxed in the sensitivity analysis by varying λ M and λ B .
In addition, we assumed that the risk of HIV transmission in breast milk is constant [20]. By using an average value for transmission risk, we underestimate the rate in periods of truly high transmission, but this is compensated by overestimating the estimates in periods of truly low transmission such that outcome measures of cumulative proportions remain valid estimates. HIV-infected infants progressed to AIDS at a rate γ, estimated from the inverse of the average duration from HIV infection to the onset of AIDS in infants. The model assumed that progression from HIV to AIDS did not depend on the mode of feeding. Infants with AIDS died at a rate of µ A . We assumed that the mortality in HIV-infected children was mainly due to AIDS-related illness (with non-AIDS related mortality being negligible).
Weaning in exclusively breast-fed infants and infants receiving mixed-feeding occurred at a rate δ; for this simulation, weaning was assumed to be abrupt. This rate was estimated from the inverse of the average duration of breast-feeding in the population. Mortality rates in uninfected infants depended on the mode of feeding with infants receiving exclusively breast-milk, no breast-milk and mixed-feeding dying at rates of µ B , µ F , and µ M , respectively.
The following differential equations, with a time step of one week, were used to model the weekly rate at which infants moved in and out of compartments. B(t), F(t), M(t), I(t), and A(t), represented the number of infants respectively in compartments B, F, M, I and A at time t.
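The equations themselves are not reproduced here, but a system consistent with the compartment flows and rates defined above can be written as follows; this is a reconstruction from the stated description, and the authors' exact notation may differ:

```latex
\begin{aligned}
dB/dt &= -(\lambda_B + \delta + \mu_B)\,B \\
dM/dt &= -(\lambda_M + \delta + \mu_M)\,M \\
dF/dt &= \delta\,(B + M) - \mu_F\,F \\
dI/dt &= \lambda_B\,B + \lambda_M\,M - \gamma\,I \\
dA/dt &= \gamma\,I - \mu_A\,A \\
dD/dt &= \mu_B\,B + \mu_M\,M + \mu_F\,F + \mu_A\,A
\end{aligned}
```

with I(0) = pN and B(0), M(0), F(0) splitting the (1 − p)N uninfected newborns according to the scenario's feeding proportions b, m and f; D accumulates deaths from all causes.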
Six unique infant-feeding scenarios were analyzed (Table 1): these were defined by the recommended mode and duration of feeding as well as compliance to the breast-feeding recommendation in the population of HIV-infected women. Breast-feeding could be prohibited in all infants (scenarios U and X), or breast-feeding could be recommended for a duration of 6 months (scenarios V and Y) or 4 months (scenarios W and Z). Three of these scenarios (U, V, W) are idealistic, assuming complete (100%) compliance, while the other three scenarios (X, Y, Z) were more realistic, assuming 85% compliance in exclusively breast-fed infants and 70% compliance in infants not breast-fed [21][22][23].
The primary model outcomes were: the cumulative mortality at 2 years (104 weeks), defined as D_{t=104}/N; the proportion of children living with HIV/AIDS at 2 years, (I_{t=104} + A_{t=104})/N; and the combined proportion of children with HIV/AIDS or dead at 2 years.
(Figure 1. Model compartments and parameters.)
It is worth noting that the latter combined measure counts each infant only once (not twice). Because this is a compartmental model, at any given time each infant is in one and only one compartment. So at the 2-year time point, infants who died from HIV will be in the D compartment, not in the I compartment. The assessment endpoint was set a priori at 2 years with the assumption that infant-feeding patterns negligibly affected child mortality after 2 years.
Parameter estimates
Estimates of parameters used in the model were obtained from published articles from a Medline search using the MeSH keywords "Infant + Feeding + HIV", as well as guideline documents published by the World Health Organization (WHO). Because of the heterogeneity in parameter values across regions, the parameters were chosen primarily from studies conducted in Kenya or Uganda. For estimates that were not documented in these two countries, we used projections from other countries in the region (with similar epidemiology). The specific values used for each parameter are shown in Table 2.
Sensitivity analysis
Sensitivity analyses were conducted to evaluate the potential impact of the choice of parameters. We performed univariate sensitivity analyses, varying one parameter while holding the rest of the parameters in the model constant. The following parameters were varied: the recommended duration of breast-feeding, from 1 to 6 months; compliance to the recommended infant-feeding, from 10 to 100%; the proportion of infants born HIV-infected, from 1% to 25%; and the mortality rates in breast-fed, mixed-fed and replacement-fed infants, from 0 to 200 per thousand children, taking into account differences in access to clean water and health care facilities according to geographical settings. Parameters that could be influenced by the greater availability of anti-retroviral therapy, such as the rate of infection in breast-fed infants, the rate of progression from HIV to AIDS and the mortality rate due to HIV/AIDS, were also varied.
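A univariate sweep over such a model is straightforward to script: integrate the compartment system once per parameter value while holding everything else at baseline. The sketch below uses the reconstructed equations from the Methods section; all parameter values and initial proportions are illustrative placeholders, not the study's published estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, lam_B, lam_M, gamma, delta, mu_B, mu_M, mu_F, mu_A):
    # Compartments B, M, F, I, A plus cumulative deaths D; rates per week.
    B, M, F, I, A, D = y
    return [-(lam_B + delta + mu_B) * B,
            -(lam_M + delta + mu_M) * M,
            delta * (B + M) - mu_F * F,
            lam_B * B + lam_M * M - gamma * I,
            gamma * I - mu_A * A,
            mu_B * B + mu_M * M + mu_F * F + mu_A * A]

# Placeholder weekly rates and an EBF-for-6-months scenario with 85%
# compliance (mixed-feeding otherwise) and 10% of infants infected at birth.
base = dict(lam_B=0.001, lam_M=0.0014, gamma=0.01, delta=1 / 26,
            mu_B=0.0002, mu_M=0.0006, mu_F=0.001, mu_A=0.006)
y0 = [0.85 * 0.9, 0.15 * 0.9, 0.0, 0.10, 0.0, 0.0]

for gamma in [0.005, 0.01, 0.02, 0.03]:   # vary one parameter at a time
    pars = {**base, "gamma": gamma}
    sol = solve_ivp(rhs, (0, 104), y0, args=tuple(pars.values()), rtol=1e-8)
    B, M, F, I, A, D = sol.y[:, -1]
    print(f"gamma={gamma}: HIV/AIDS at 2 y = {I + A:.3f}, dead = {D:.3f}")
```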
Base model
In the scenarios with 100% compliance (scenarios U, V, W) to the infant-feeding recommendations, exclusive replacement-feeding (scenario U) resulted in the least number of children with HIV/AIDS at 2 years ( Figure 2). The proportion of children with HIV/AIDS at 2 years was 6.2%, 8.6% and 9.7% for replacement-fed, and infants breast-fed for 4 or 6 months respectively. However, the cumulative mortality at 2 years was very similar for each of the three scenarios: 10.55% in infants who had replacement-feeding compared to 10.57% in infants who were breast-fed for 4 months and 10.53% for infants breast-fed for 6 months. Considering all the outcomes, HIV/AIDS and mortality at 2 years together, replacement-feeding was the best feeding option if there was 100% compliance (scenario U, Figure 2) as it resulted in the least number of children affected by HIV/AIDS or death, while breast-feeding for 6 months (scenario V) had the highest combined morbidity/mortality.
(* Realistic scenarios assume 85% compliance in exclusively breast-fed infants and 70% compliance in replacement-fed infants [21][22][23].)
Taking into account the limited compliance to recommendations, and assuming 70% compliance when replacement-feeding was recommended (scenario X) and 85% compliance when exclusive breast-feeding for 4 months (scenario Z) or 6 months (scenario Y) was recommended, the number of infants infected with HIV or having AIDS at 2 years was still lower with replacement-feeding than with exclusive breast-feeding for 4 or 6 months. However, compared to the scenario with 100% compliance, the number of children with HIV/AIDS at 2 years increased by 24% for replacement-feeding with 70% compliance (scenario U vs. X). By contrast, the number of children with HIV/AIDS at 2 years increased only by 1.7% (scenario W vs. Z) and 2.1% (scenario V vs. Y) for infants who were exclusively breast-fed for 4 and 6 months respectively, when compliance was 85% versus 100%. The cumulative mortality increased only by 0.9%, 0.3% and 0.5% in children who had replacement-feeding (scenario U vs. X) or were breast-fed for 4 (scenario W vs. Z) and 6 months (scenario V vs. Y) respectively, when compliance was reduced from 100% to 70% for replacement-feeding and 85% for exclusive breast-feeding. When limited compliance was taken into account, the least total number of children with HIV/AIDS or dead at 2 years was obtained when replacement-feeding was recommended (scenario X), followed by exclusive breast-feeding for 4 months (scenario Z) and, lastly, exclusive breast-feeding for 6 months (scenario Y) (Figure 2).
(Table 2 footnotes: * In the absence of specific estimates in Uganda/Kenya, this study was used to estimate that the transmission rate in mixed-fed infants is 40% higher than that in exclusively breast-fed infants. ** In the absence of data, mortality in mixed-fed infants was assumed to be the mean of the mortality in exclusively breast-fed and replacement-fed infants. † Values varied by scenario (see Table 1).)
Sensitivity analysis Duration of breast-feeding
With 100% compliance, increasing the recommended duration of breast-feeding resulted in an increase in the number of children infected with HIV/AIDS (Figure 3a). When compliance was 70% for the replacement-feeding recommendation (scenario X), the number of children with HIV/AIDS also increased with any increase in the recommended duration of breast-feeding.
Varying the recommended duration of breast-feeding had very little impact on the cumulative mortality -between 10.5% and 10.7% of all infants were dead at 2 years irrespective of scenario or duration (Figure 3b). Despite this limited overall impact, increasing the recommended duration of breast-feeding from 1 to 2 months resulted in an initial increase in 2-year cumulative mortality. The maximum mortality was attained at a recommended duration between 2 and 3 months, followed by a progressive decline in cumulative mortality as the duration increased to 6 months. Compared to replacement-feeding, slightly fewer infants were dead at 2 years only when exclusive breast-feeding for more than 5 months was recommended.
With a reduced compliance of 70%, cumulative mortality increased in infants who had replacement-feeding (scenario X) with increased recommended duration of breastfeeding (Figure 3b). By contrast, a reduced compliance of 85% following a recommendation of exclusive breastfeeding (scenario Y, Z) resulted in an initial increase in cumulative mortality, with a maximum being attained when breast-feeding was recommended for 3-4 months (Figure 3b). Furthermore, recommending breast-feeding for 3 months or more (with reduced compliance) (scenario Y, Z) resulted in fewer deaths than recommending replacement-feeding with reduced compliance (scenario X).
Compliance to recommended infant-feeding method
For all scenarios, decreasing the compliance to recommended infant-feeding methods resulted in an increase in the number of children infected with HIV and/or dead at 2 years (Figure 4c). The impact of compliance on HIV infection and/or cumulative mortality was greater with replacement-feeding than with breast-feeding. Though, with 100% compliance, recommending breast-feeding for 4 months (scenario W) resulted in slightly more deaths than recommending breast-feeding for 6 months (scenario V) or replacement-feeding (scenario U), mortality in all 3 scenarios was very similar with a compliance of 60% or less.
Absolute and relative mortality rate in infants not breast-fed
Varying the mortality rate in replacement-fed infants did not impact the total number of children with HIV/AIDS at 2 years (Additional file 1a and Figure 5a). Nevertheless, increasing the absolute value of the mortality rate in replacement-fed infants (Additional file 1b), or the relative mortality in replacement-fed compared to breast-fed infants (Figure 5b), resulted in an increase in the cumulative mortality, irrespective of feeding recommendation. The increase was highest in replacement-fed infants. Although with a mortality rate of less than 50 per 1000 per year the cumulative mortality was lowest when replacement-feeding was recommended, the cumulative mortality in this scenario became the highest when the mortality rate surpassed 50 per 1000 per year. Considering the total number of children with HIV/AIDS or dead at 2 years as the outcome (Additional file 1c), an equilibrium between all six scenarios was reached at a higher mortality rate: recommending replacement-feeding resulted in the lowest number of children with HIV/AIDS or dead when the mortality rate in replacement-fed infants was less than 150 per 1000 per year, but in the highest number when that mortality rate exceeded 150 per 1000 per year. In terms of relative mortality, the equilibrium was reached when the mortality rate in replacement-fed infants was 3-4-fold that in breast-fed infants: recommending replacement-feeding results in the least number of children with HIV/AIDS or dead when the mortality rate in replacement-fed infants is less than 3-fold that in breast-fed infants, while the same recommendation results in the greatest number of children with HIV/AIDS or dead when the mortality rate in replacement-fed infants is more than 4 times that in breast-fed infants.
Rate of infection in breast-fed infants
Increasing the infection rate in breast-fed infants resulted in a higher number of children with HIV/AIDS and a higher cumulative death at 2 years in all breast-feeding scenarios (Additional file 2c). These numbers (of infected or dead children at 2 years) were even higher with a longer duration of breast-feeding. The cumulative mortality in breast-fed infants was however similar to that of replacement-fed infants when the rate of infection was in the order of 2 per 1000 per week.
Rate of progression from HIV to AIDS in infants
Though varying the HIV to AIDS progression rate did not impact the total number of children with HIV/AIDS or dead at 2 years, increasing the rate resulted in an exponential decrease (with little change beyond a rate of 30 per 1000 per week) in the number of children with HIV/AIDS at 2 years (Additional file 3a). Concurrently, increasing the rate resulted in an increase in the number of children dead at 2 years (Additional file 3b). The rate of increase was however lowest when replacement-feeding was recommended. Thus, although the mortality in replacementfed infants was highest with low progression rates, recommending replacement-feeding with a high compliance resulted in the least deaths when the progression rate was higher than 20 per 1000 per week.
Mortality rate due to HIV/AIDS in infants
Varying the HIV/AIDS mortality rate did not affect the total number of children with HIV/AIDS or dead at 2 years (Additional file 4c). Nevertheless, increasing the rate resulted in a decrease in the number of children with HIV/ AIDS at 2 years (Additional file 4a), and an increase in the number of children dead at 2 years (Additional file 4b).
Despite resulting in the highest number of deaths at 2 years when HIV/AIDS mortality was low, replacement-fed infants had the least number of deaths at 2 years when HIV/AIDS mortality rate was higher than 300 per 1000 per year (Additional file 4b).
Discussion
Policy makers and care-providers in resource-limited, high HIV-prevalence settings continue to be confronted with the dilemma of what feeding method and duration to recommend for infants born to HIV-infected mothers.
In order to reflect conditions specific to these settings, we used epidemiologic parameters only from studies conducted in sub-Saharan Africa to analyze the potential impact of 3 infant-feeding recommendations on the morbidity and/or mortality in these infants. Our analysis suggests that the choice of preferred infant-feeding method depends on the policy makers' objective: to minimize the number of children with HIV/AIDS, to minimize the cumulative mortality, or to minimize the total number of children with HIV/AIDS or dead. Most previous discussions of this issue have focused on infant mortality. This view assumes that communities will prefer minimizing deaths irrespective of the number of children that end up living with HIV. However, there is no evidence supporting this. It is our view that not only should the mortality be considered but also the number of children affected by HIV/AIDS, either separately or combined with the total number of deaths. Thus an important prerequisite in the choice of feeding method should be a definition of each society's preferences and/or perceptions towards mortality versus living with HIV/AIDS.
As expected, recommending breast-feeding increased the number of children infected with HIV while recommending replacement-feeding increased infant mortality. However there was only a minimal decrease in cumulative mortality when breast-feeding was recommended. Breastfeeding seemed to simply delay the timing of death rather than reduce it altogether: while breast-feeding reduced mortality at the very young ages, infants got infected and, consistent with the conditions existing in most resource limited settings, these infants progressed to AIDS relatively rapidly, and died later on by the age of two. Breastfeeding for a shorter duration (4 months as has been suggested) actually increased mortality, an increase that was accentuated when there was poor compliance. In fact, breast-feeding resulted in the least cumulative mortality only when it was recommended for six months and there was 100% (very high) compliance. Poor compliance could be expected to result in mixed-feeding with its consequences: higher infection rates, more infants who are HIV positive and who die later on. If the aim was solely to reduce mortality, then recommending breast-feeding (including breast-feeding for shorter durations) could only be justified when mortality in replacement-fed infants was greater than 50 per 1000 per year, the rate of infection in exclusively breast-fed infants was less than 2 per 1000 breast-fed infants per week, the rate of progression from HIV to AIDS was less than 15 per 1000 infected infants per week, or the mortality due to HIV/AIDS was less than 200 per 1000 infants with HIV/AIDS per year.
Reducing the recommended duration of breast-feeding resulted in fewer children living with HIV/AIDS. However, recommending replacement-feeding (even with poor compliance) resulted in the least number of children living with HIV/AIDS. This suggests that if the aim was solely to reduce the number of children living with HIV/AIDS then replacement-feeding would be the optimal choice irrespective of mortality and rates of HIV transmission and progression.
Considering the minimization of the total number of children with HIV/AIDS or dead as main objective, replacement-feeding was the best option in nearly all scenarios, the exception being when the mortality in replacementfed infants was greater than 150 per 1000 infants or when the mortality rate in replacement-fed infants was more than 3.5 fold that in breast-fed infants.
The absence of any reduction in mortality with shorter durations of breast-feeding in this simulation supports WHO's recent update of its guidelines to recommend exclusive breast-feeding for six months unless replacement-feeding is AFASS. It may however be difficult for policy-makers, attempting to implement WHO guidelines, to determine the extent to which replacement-feeding in their settings is AFASS. Our simulations suggest that replacement-feeding may not be considered safe unless the mortality rate in replacement-fed infants is less than three times that in exclusively breast-fed infants.
The deterministic nature of this simulation may be a limitation because of its inherent assumption that the parameters and outcomes were fixed (having no variance). Furthermore, as for every mathematical simulation, the conclusions from our analysis could be limited by the veracity of the model selected as well as the limitations of the studies from which parameters were extracted. Our analysis is however strengthened by its particular usefulness to sub-Saharan countries as we used parameters specific to conditions in this high-HIV prevalence setting. Although the parameters used in the baseline model may differ between published studies and may be biased because of defects in the original studies, the impact of varying them was addressed in the sensitivity analysis. This sensitivity analysis of model assumptions showed our findings to be robust within the range of plausible parameters. Furthermore our findings are consistent with a recently published report by Becquet et al. who did not find any significant difference in the 2-year cumulative mortality of exclusively breast-fed infants and replacement-fed infants in West Africa [24].
Conclusion
In conclusion, this analysis presents a framework to assist decision-makers in resource-limited settings in the choice of which infant-feeding method to recommend for infants born to HIV-infected mothers. Recommending exclusive breast-feeding in infants born to HIV-infected women in these settings, instead of replacement-feeding, may potentially result in very little gain in mortality. Furthermore, although recommending shorter durations of breast-feeding may substantially reduce infant HIV infection, it might slightly increase mortality. When exclusive breast-feeding for shorter durations is recommended, lower mortality could be achieved by a simultaneous reduction in the rate of progression from HIV to AIDS and/or HIV/AIDS mortality, reductions that are obtainable by the use of HAART in infants. Making HAART and better care available to infected infants should thus be an imperative whenever a community and/or policy-maker prefers exclusive breast-feeding over replacement-feeding. | 2017-06-21T14:01:14.145Z | 2008-05-16T00:00:00.000 | {
"year": 2008,
"sha1": "f4bf4851a5957a4193aaf8855151d48af81a3f5c",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/1471-2334-8-66",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13e9a0ce6c04ad011dc056283edb9afa1fd71393",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
172137506 | pes2o/s2orc | v3-fos-license | The Volume-Regulated Anion Channel LRRC8/VRAC Is Dispensable for Cell Proliferation and Migration
Cells possess the capability to adjust their volume for various physiological processes, presumably including cell proliferation and migration. The volume-regulated anion channel (VRAC), formed by LRRC8 heteromers, is critically involved in regulatory volume decrease of vertebrate cells. The VRAC has also been proposed to play a role in cell cycle progression and cellular motility. Indeed, recent reports corroborated this notion, with potentially important implications for the VRAC in cancer progression. In the present study, we examined the role of VRAC during cell proliferation and migration in several cell types, including C2C12 myoblasts, human colon cancer HCT116 cells, and U251 and U87 glioblastoma cells. Surprisingly, neither pharmacological inhibition of VRAC with 4-[(2-Butyl-6,7-dichloro-2-cyclopentyl-2,3-dihydro-1-oxo-1H-inden-5-yl)oxy]butanoic acid (DCPIB), carbenoxolone or 5-nitro-2-(3-phenylpropyl-amino)benzoic acid (NPPB), nor siRNA-mediated knockdown or gene knockout of the essential VRAC subunit LRRC8A affected cell growth and motility in any of the investigated cell lines. Additionally, we found no effect of the VRAC inhibition using siRNA treatment or DCPIB on PI3K/Akt signaling in glioblastoma cells. In summary, our work suggests that VRAC is dispensable for cell proliferation or migration.
During cell proliferation, the transient activation of Cl − channels leads to a decrease in cell volume after an initial volume increase [17][18][19]. Cell migration is mainly mediated by cytoskeletal rearrangements and directed membrane transport. In addition, osmotic water flux by the differential activity of ion channels and transporters mediating local changes in cell volume was found to contribute to cell movement [20,21]. The uptake of inorganic ions and water (a regulatory volume increase, RVI) at the leading edge by locally active Na + -K + -2Cl − cotransport, Na + /H + exchange or nonselective cation channels, and a volume decrease at the trailing end via release of K + and Cl − through activated K + and Cl − channels followed by water efflux (RVD) will lead to a net translocation of the cell [22]. Recently, cell displacement was found to be solely driven by directed cellular osmotic water transport in an artificial confined environment when actin polymerization was inhibited [23].
However, several observations cast doubt on a general role for the VRAC in cell proliferation and migration. So far, no proliferation defect has been reported for any of the various published LRRC8A-deficient cell lines. The proliferation of HeLa cells was reported to be unaffected by the siRNA-mediated knockdown of LRRC8A [42]. Recently, the flavonoid Dh-morin was shown to effectively inhibit endogenous VRAC currents in endothelial cells without impairing the proliferation of human umbilical vein endothelial (HUVEC) cells [43], arguing against a crucial role for the VRAC in the cell cycle progression of this cell type. The anti-proliferative effect of submicromolar concentrations of cardiac glycosides was even linked, albeit not necessarily directly, to an increase of VRAC activity in HT-29 colorectal cancer cells and could be blocked by the VRAC inhibitor DCPIB [44].
In all, there are conflicting data as to the role of VRAC in cell proliferation and migration. To study the potential role of VRAC in these processes, we therefore systematically examined them in a variety of cell lines, including cancer and noncancer cell lines, by using pharmacological blockers, siRNA against LRRC8A, and genomic VRAC knockout. In none of the studied cell lines did we find evidence for a critical involvement of VRAC in cell proliferation or migration.
LRRC8A Knockout Does Not Impinge on the Proliferation and Migration of C2C12 Cells
To investigate the putative role of the VRAC in the proliferation and migration of C2C12 mouse myoblast cells, we used various clonal genome-edited cell lines deficient for the essential VRAC subunit LRRC8A (clones 27, 13, and 14) and a line (clone 4) with only heterozygous LRRC8A deletion that had experienced the same transfection and selection process as the knockout clones. The loss of LRRC8A in the knockout clones and reduced levels in clone 4 in comparison to wild-type C2C12 cells was confirmed by Western blotting ( Figure 1A). We first assessed the effects of LRRC8A knockout on C2C12 proliferation. The proliferation rate of the knockout clones was similar to that of wild-type cells or the heterozygous clone ( Figure 1B), demonstrating that VRAC is dispensable for C2C12 cell proliferation. Next, we investigated the role of LRRC8A in cell migration using a wound healing assay ( Figure 1C,D). We observed no significant differences in migration speed among the LRRC8A knockout clones, the heterozygous clone 4, and wild-type cells.
VRAC Blockers and Disruption of LRRC8s Do Not Impair HCT 116 Proliferation and Migration
Since the involvement of ion channels in cell growth and migration is of particular interest in relation to cancer progression [45][46][47][48], we investigated a potential role of VRAC in the proliferation and migration of human colon cancer HCT116 cells. We first examined the effects of genomic VRAC knockout on HCT116 proliferation ( Figure 2A). Although the proliferation of genomic VRAC knockout clones seemed slightly decreased as compared with wild-type cells during the first 48 h, the proliferation of a clonal cell line lacking the essential LRRC8A subunit of VRAC was virtually equal to that of wild-type cells over the complete time course. Another clonal cell line, lacking all five LRRC8 members, even displayed an increase in proliferation. These results demonstrate that VRAC is not critically involved in HCT116 proliferation. Next, we examined the effect of the genomic VRAC deletion and of the VRAC inhibitor carbenoxolone (CBX) on HCT116 cell motility in our wound healing assay ( Figure 2B). Neither pharmacological inhibition of VRAC with up to 50 µM CBX, nor gene knockout of VRAC affected motility of the HCT116 cells. Together, these data demonstrate that VRAC is dispensable for human colon cancer proliferation and migration.
LRRC8A/VRAC Is Not Required for the Proliferation and Migration of Glioblastoma Cells
While VRAC plays no important role in HCT116 cell proliferation and migration, the contribution of VRAC to cell proliferation and migration may vary between cell types. Glioblastoma multiforme (GBM) is a common, rapidly growing malignant brain tumor [49,50]. To examine the contribution of VRAC to GBM cell proliferation and migration, we first assessed the effects of pharmacological inhibitors on the established glioblastoma cell lines U251 and U87 ( Figure 3). Treatment with up to 100 µM CBX did not alter the proliferation rate of U251 or U87 cells ( Figure 3A,B). Consistently, proliferation was also not affected by VRAC inhibition with up to 100 µM DCPIB ( Figure 3C,D). Next, we tested the effect of the VRAC inhibitors on GBM cell migration in the wound healing assay. We observed no significant differences in migration speed between inhibitor-treated and control U251 and U87 cells ( Figure 3E,F). Collectively, these results suggest that VRAC activity is dispensable for GBM cell proliferation and 2D migration.
Since these data are in apparent conflict with the previously reported effect of DCPIB on GBM cell migration [28], we additionally approached the role of VRAC by silencing the expression of the essential VRAC subunit LRRC8A with siRNA. Western blotting confirmed a robust reduction of the LRRC8A protein amount in U251 and U87 cells at two days (by roughly 40% and 30%, respectively) and three days (by roughly 70%) after transfection with siRNA against LRRC8A ( Figure 4A-D). Proliferation of both cell lines, assessed from 48 h after siRNA or control transfection onwards, was not affected by the LRRC8A knockdown. In the wound healing assay, also started 48 h after transfection, we observed no significant differences in the migration speed between non-transfected cells, cells transfected with control siRNA, and cells transfected with siRNA against LRRC8A at various time points after transfection ( Figure 4G,H, Figure S1). Together, these results from pharmacological VRAC inhibition and LRRC8A knockdown suggest that VRAC is dispensable for glioblastoma cell proliferation and migration in the wound healing assay.
VRAC Inhibition by DCPIB or LRRC8A Downregulation Does Not Affect PI3K/Akt Signaling in GBM Cells
Since activation of mTOR signaling by the PI3K/Akt pathway is involved in the regulation of GBM cell proliferation and migration [51][52][53], we examined whether VRAC is involved in PI3K/Akt/mTOR signaling. To this end, we assessed the phosphorylation status of Akt and the mTOR substrate ULK by Western blotting (Figure 5). Treatment of U251 or U87 GBM cells with 100 µM DCPIB for one or two days did not alter the relative phosphorylation of Akt or ULK (p-Akt/t-Akt and p-ULK/t-ULK, respectively, Figure 5A-E). Likewise, the phosphorylation was not changed three days after LRRC8A siRNA transfection, when LRRC8A protein levels were significantly reduced ( Figure 5H,J). Collectively, the results suggest that neither pharmacological VRAC inhibition nor siRNA-mediated downregulation of LRRC8A affected PI3K/Akt signaling.
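Relative phosphorylation values of this kind are commonly obtained from band densitometry as the phospho/total signal ratio, normalized to the untreated condition. The following Python sketch illustrates the calculation with hypothetical band intensities, not the study's data:

# Hypothetical densitometry readings (arbitrary units) from one blot
bands = {
    "control": {"p_akt": 1150.0, "t_akt": 2300.0},
    "dcpib":   {"p_akt": 1120.0, "t_akt": 2260.0},
}

# Phospho/total ratio per condition, then normalized to the control lane
ratios = {cond: v["p_akt"] / v["t_akt"] for cond, v in bands.items()}
relative = {cond: r / ratios["control"] for cond, r in ratios.items()}
print(relative)  # a value near 1.0 for "dcpib" indicates unchanged Akt phosphorylation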
Discussion
The volume-regulated anion channel (VRAC) is ubiquitously expressed in vertebrate cells [1,2,29] and contributes to regulatory volume decrease upon osmotic cell swelling. As the extracellular osmolarity is usually kept constant, most animal cells rarely experience extracellular hypo-osmolarity under normal conditions. Thus, the VRAC is thought to play roles in other physiological processes by its impact on cellular volume, such as during cell proliferation and migration [2,26,54].
Several studies reported an impairment of proliferation and/or migration of various cell lines in the presence of VRAC inhibitors [24,25,[28][29][30][31][32][33][34][35][36][37][38]. However, the available VRAC inhibitors display little selectivity and often also inhibit other anion channels [55,56], or as in the case of the potent and relatively selective VRAC blocker DCPIB [57] even modulate potassium channels [58,59]. The identification of LRRC8 proteins as essential VRAC components [4,5] enabled investigating physiological functions of VRAC by molecular biological tools. Using this approach, siRNA-mediated downregulation of the essential VRAC subunit LRRC8A reduced proliferation of primary glioblastoma and U251 GBM cells [40], and knockdown of LRRC8A in the colorectal cancer cell line HCT116 was shown to impair cell migration in a wound healing assay [41].
In contrast, using both pharmacological and molecular biological approaches, we found no evidence for a role of VRAC in proliferation or migration in a range of cell lines including non-differentiated C2C12 myoblasts, colorectal cancer HCT116 cells, and the GBM cell lines U251 and U87. Although inhibition of VRAC with DCPIB (at higher concentrations than required to inhibit VRAC currents) in these latter GBM cell lines [28] and siRNA-mediated LRRC8A knockdown in U251 cells [40] were reported to reduce cell viability and proliferation, we did not observe such an effect with either treatment in these cell lines. A possible explanation for the apparently conflicting results may be that we measured the increasing confluence of the proliferating cells, while in the previous studies the cells' metabolic activity was measured using the MTT assay. When the cell number was measured directly with a Coulter counter, the most efficient siRNA showed much less reduction in cell proliferation as compared with the MTT assay [40]. The reportedly reduced migration of GBM cells in the presence of DCPIB, which we did not observe in our study, may be explained by their impaired proliferation also during the wound healing assay [28]. The discrepancy between our finding, on the one hand, that knockout of LRRC8A or all LRRC8 members did not diminish HCT116 cell migration, and, on the other hand, the previous report of slowed HCT116 migration upon siRNA-mediated LRRC8A knockdown [41], is unlikely due to upregulation of compensatory mechanisms in our case, since the migration speed was also unaffected upon acute pharmacological VRAC inactivation.
Other studies corroborate the notion that VRAC is dispensable for cell proliferation. The flavonoid Dh-morin suppressed VRAC currents but did not reduce proliferation of HUVEC cells [43], and siRNA against LRRC8A did not affect the proliferation rate in HeLa cells [42]. The DCPIB treatment even inhibited the antiproliferative effect of cardiac glycosides that correlated with an increase in VRAC activity in HT-29 cells [44].
The slowed proliferation and migration of U251 GBM cells was related to a reduction of PI3K/Akt/mTOR signaling in the presence of 100 µM DCPIB for 24 or 48 h [28]. In our study, we were not able to detect differences in the basal phosphorylation state of Akt1 in U251 or U87 GBM cells upon treatment with 100 µM DCPIB for one or two days. Neither did we find a reduced phosphorylation of the mTOR substrate ULK. In addition, Akt signaling was not altered by siRNA-mediated LRRC8A knockdown. This was consistent with the previously reported normal anti-CD3-mediated activation of Akt1 in thymocytes from LRRC8A-deficient mice [60]. In adipocytes, LRRC8A was reported to be involved in the regulation of insulin-stimulated Akt2 signaling [61]. However, no effect on Akt1 was detected upon LRRC8A deletion in that study [61], in agreement with our findings for glioblastoma cells.
In summary, our study demonstrates that LRRC8/VRAC is not crucially involved in general cell proliferation. Its dispensability for cell migration in the wound healing assay does not rule out a role for VRAC in constricted environments, where directed osmotic water flux has been shown to drive cellular locomotion [23]. The VRAC inhibitor NPPB was shown to impair this process of cell invasion [62]. Apart from VRAC, other chloride channels may contribute, such as the calcium-activated chloride channel TMEM16A [63]. Future work is required to clarify the potentially cell-type-specific roles of the different types of ion channels in cell migration and invasion under more physiological conditions.
siRNA Transfection
U251 and U87 cells were plated in 6-well cell culture plates (1.5 × 10⁵ cells per well) 1 day prior to transfection. The cell culture medium was removed and cells were washed with serum-free Opti-MEM, then transfected with siRNA against LRRC8A (sense: CCU UGU AAG UGG GUC ACC ATT; ThermoFisher Scientific, Darmstadt, Germany, #s109501) at a concentration of 15 nM using Lipofectamine RNAiMAX transfection reagent (ThermoFisher Scientific). A nontargeting, scrambled siRNA (ThermoFisher Scientific, 4390844) was used as a negative control. For the proliferation assay, cells were grown for a further 48 h before seeding into a 96-well plate. For the migration assay, cells were grown for a further 30-42 h post-transfection, then plated in a 96-well ImageLock™ tissue culture plate. Wounds were created 48 h after siRNA transfection.
Cell Proliferation and Migration Assays
To assess cell proliferation, 5000 cells (10,000 cells in the case of HCT116) per well were seeded into a 96-well plate and placed into the IncuCyte live-cell analysis system. Before scanning, the plate was allowed to acclimatize for 30 min. Cell proliferation was monitored by using the IncuCyte system (Sartorius, Göttingen, Germany) to capture phase contrast images every 2 h during constant incubation at 37 °C in a humidified atmosphere with 5% CO2.
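From confluence readings logged every 2 h in this way, a doubling time can be estimated by a log-linear fit over the sub-confluent growth phase. The following Python sketch illustrates one way to do this; the confluence values and the 80% cut-off are hypothetical, not taken from the study:

import numpy as np

# Hypothetical confluence readings (%) captured every 2 h by the imaging system
hours = np.arange(0, 48, 2)
confluence = 5.0 * np.exp(0.03 * hours)  # stand-in for exported measurements

# Fit only sub-confluent time points, where growth is roughly exponential
mask = confluence < 80.0
slope, _ = np.polyfit(hours[mask], np.log(confluence[mask]), 1)

doubling_time = np.log(2) / slope  # hours per population doubling
print(f"Estimated doubling time: {doubling_time:.1f} h")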
To assess cell migration, 2-8 × 10⁴ cells (depending on cell type) per well were seeded into a 96-well ImageLock™ plate (Essen BioScience 4379, Sartorius, Göttingen, Germany) and incubated for 4-16 h at 37 °C in a humidified atmosphere with 5% CO2 before replacing the medium with culture medium supplemented with 5 µg/mL mitomycin C. Mitomycin C was applied during the following steps, if not specified otherwise, to inhibit cell proliferation so that this process would not distort our results on cell migration. After 2 h, wounds were created in all wells of the 96-well ImageLock™ plate with the WoundMaker™ (Sartorius, Göttingen, Germany). After gently washing the wells twice with culture medium, 100 µL of medium containing additional drugs (CBX, DCPIB, NPPB, or vehicle DMSO when appropriate) were applied to each well. NPPB and DCPIB were dissolved in DMSO, CBX in water. Cell migration was monitored by phase contrast imaging with an IncuCyte Zoom microscope, acquiring an image every 2 h during constant incubation at 37 °C in a humidified atmosphere with 5% CO2. The IncuCyte Zoom image analysis software (Sartorius, Göttingen, Germany) was used to detect cell edges automatically and to generate an overlay mask for wound width calculation. For the migration assay with HCT116, the 96-well ImageLock™ plates were coated with fibronectin prior to cell seeding.
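From the exported wound-width time series, a migration speed can be derived: since the two wound edges advance toward each other, each cell front moves at roughly half the closure rate. A minimal Python sketch with hypothetical measurements:

import numpy as np

hours = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
wound_width_um = np.array([800, 758, 713, 670, 628, 584, 541], dtype=float)  # hypothetical

# Slope of a linear fit of width vs. time gives the closure rate (negative = closing)
closure_rate = -np.polyfit(hours, wound_width_um, 1)[0]  # um/h

# Each of the two fronts contributes half of the closure
front_speed = closure_rate / 2.0
print(f"Closure rate: {closure_rate:.1f} um/h; front speed: {front_speed:.1f} um/h")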
Generation of C2C12 LRRC8A Knockout Cell Lines Using CRISPR/Cas9 Technology
To create LRRC8A knockout C2C12 cells using CRISPR/Cas9 technology, the targeting sgRNA sequence (5′-GCCCCGGAAGGAGTCGTTGCAGG-3′) was cloned into the px459-V.2 vector and transfected into C2C12 cells. Two days post-transfection, transfected cells were selected by treatment with 10 µg/mL puromycin for two days before single-clone expansion by dilution to statistically 0.5 cells per well in 96-well format. Monoclonal cell lines were expanded and tested for sequence alterations using target-site-specific PCR with primers 5′-CATGTATGTCTCACTACACCTAACTTGTAG-3′ and 5′-CCAGGAAGATGAGGGTGTGCA-3′ on genomic DNA, followed by Sanger sequencing.
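For a quick plausibility check of such a design, the target site can be located in a sequence by a string search that also verifies the NGG PAM required by SpCas9 on the sense strand. This Python sketch uses the 20-nt protospacer of the sgRNA above (the published 23-mer appears to include the AGG PAM); the example read is hypothetical, and real genotyping would also have to consider the antisense strand and indel-containing reads:

import re

PROTOSPACER = "GCCCCGGAAGGAGTCGTTGC"  # 20 nt, PAM excluded

def find_cas9_sites(sequence, protospacer=PROTOSPACER):
    """Return (position, PAM) for every protospacer match followed by an NGG PAM."""
    hits = []
    for m in re.finditer(re.escape(protospacer), sequence):
        pam = sequence[m.end():m.end() + 3]
        if re.fullmatch(r"[ACGT]GG", pam):
            hits.append((m.start(), pam))
    return hits

# Hypothetical genotyping read containing the intact target site
read = "TTACTG" + PROTOSPACER + "AGGCCATGA"
print(find_cas9_sites(read))  # -> [(6, 'AGG')]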
Statistical Analysis
Proliferation and migration were quantified with the IncuCyte Zoom image analysis software by measuring cell confluence and wound width over time, respectively. The software OriginPro 2017 (OriginLab, Northampton, MA, USA) was used for statistical analyses. All data are presented as mean values ± SD. For comparisons between two groups, p-values were determined using Student's t-test and are indicated according to convention: * p < 0.05, ** p < 0.01 and *** p < 0.001. | 2019-06-02T13:02:17.610Z | 2019-05-30T00:00:00.000 | {
"year": 2019,
"sha1": "41175ae863a943818614da3a6a825eea6e68a116",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/20/11/2663/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "41175ae863a943818614da3a6a825eea6e68a116",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
257321663 | pes2o/s2orc | v3-fos-license | Exploring Project-based Learning Model Applied in Writing Activities based on the 2013 Curriculum
ABSTRACT The research aimed to determine teachers' perception of Project-based Learning and the implementation of Project-based Learning in writing activities. Project-Based Learning is an instructional model aiming to focus learners on complex issues that need to be investigated, with the subject matter comprehended through investigation. The research method used was the qualitative method with descriptive analysis. The data were obtained using two kinds of instruments: an open-ended questionnaire, used to know teachers' perceptions of Project-based Learning, and an observation checklist, used to know how Project-based Learning is applied by the teacher in writing activities. The descriptive analysis found that the teachers' perspective on Project-based Learning in the teaching and learning process was positive. The teachers agreed most strongly that PjBL assists teachers in maintaining classroom discipline and a pleasant atmosphere, and that PjBL can improve students' discipline towards assignment deadlines. Conversely, teachers were skeptical that Project-based Learning could strengthen student-teacher relationships, considering that the teacher's role in Project-based Learning is "only" that of a facilitator. Other findings revealed that the teachers' implementation of PjBL in writing activities still does not adhere to the PjBL syntax, mainly in the stages of designing a plan for the project and evaluation. Therefore, it is suggested that teachers be aware of their role in implementing Project-based Learning to ensure the objectives of the Project-based Learning model are adequately met.
Introduction
The Digital Age has begun due to the rapid and massive growth of technology and information. Students in the digital age are vastly different from those who graduated 10 to 15 years ago. They are well-equipped with advanced technology and easily learn new things daily. Seeing the current situation, educators must overcome the challenges and develop the 21st-century skills that students require today to engage in life-long learning.
The Indonesian government issued a new curriculum called Curriculum 2013 (K-13) in 2013. A curriculum is a set of plans and arrangements of the purpose, content, materials, and methods used as guidelines for implementing learning activities to achieve specific educational objectives (UU No. 20, 2003). This curriculum is intended to emphasize students' creativity and morality. As a result, students are expected to exercise their creativity through a variety of learning activities in order to improve their learning objectives in the cognitive, affective, and psychomotor domains. Mulyasa (2014:7) states in his book that Curriculum 2013 (K-13) is a character- and competence-based curriculum, which emerged as an answer to criticisms of Curriculum 2006. This curriculum requires students to be actively involved in learning activities. Reaching the goal of implementing Curriculum 2013 should start with increasing the quality of teachers, who face many challenges and constraints, especially in the teaching and learning process. In this regard, Mulyasa (2014:13) states that learning is a strategy teachers use in curriculum implementation so that students achieve the objectives. As hoped by Curriculum 2013, many variables and important components must be considered in building a meaningful and effective teaching and learning process. First, learning should emphasize how teachers use strategies and models of learning. Second, learning should be democratic, open, cohesive, and participative, focusing on students; the teaching and learning process is conducted as student-centered and contextual learning (Permendikbud, 2012:25), which aims to develop the three aspects elaborated for each school level (Standar Proses Permendikbud No. 65, 2013). Third, learning should emphasize actual problems that contextually happen in society. Fourth, scientific methods need to be developed (Mulyasa, 2014:134-135).
In Curriculum 2013, it is recommended that teachers apply learning models such as Discovery Learning, Problem-Based Learning, and Project-Based Learning to facilitate students' learning. These learning models are assumed to be suitable for realizing and succeeding in the implementation of Curriculum 2013, appropriate to the condition and development of society as well as to students' characteristics. These models have been practiced step by step with teachers in Diklat Kurikulum 2013 (Mulyasa, 2014).
One of the learning models mentioned previously is Project-Based Learning. It can stimulate motivation and processes and improve students' achievement by using problems related to a particular subject in a real situation. Helle et al. (2006) argue that Project-Based Learning (PBL) is a collaborative form of learning, as all participants need to contribute to the shared outcome, and that it has elements of experiential learning in which active reflection and conscious engagement, rather than passive experiences, are essential. Meanwhile, Mulyasa (2014:145) mentions that Project-Based Learning is a learning model used to make students focus on complex problems that require investigation, with the learning understood through investigation.
Project-Based Learning, as one of the learning models used in Curriculum 2013, should be integrated into the skills of English language learning and into the activities used to deliver the material based on the skills the teacher wants to teach, such as writing activities. Writing is one of the productive skills that students should master. As stated by Hyland (1996), writing is a way of sharing personal meaning, and writing courses emphasize the power of the individual to construct his or her views on a topic. It can be inferred that a person delivers his/her ideas through his/her own writing, and everyone can have a different perspective about something they think about. Through writing, students can express feelings, describe something, discuss an idea, present a point of view, and share their experiences as a written product (Argawati & Suryani, 2017). Besides, writing extends and deepens students' knowledge; it acts as a tool for learning subject matter (Graham & Perrin, 2007). Hasani et al. (2017) investigated the suitability of implementing PjBL in writing, finding that through PjBL, students can develop their creative ability according to the theme they prefer. In previous research, PjBL was shown to be capable of increasing students' creativity in the teaching and learning process (Rahmania, 2020). Furthermore, students are more engaged in learning independently in groups to complete the assigned project. As a result, student-centered learning, which is one of the 2013 curriculum's objectives, has been well implemented.
To establish this proof, the researcher collected preliminary data from three English teachers at SMP Negeri 1 Bulukumba. The data indicated that PjBL is very interesting to use because it helps students improve their creativity; the teacher's role is therefore to facilitate and support students in developing their ideas. The teachers agreed that Project-Based Learning is an effective method for teaching writing. Furthermore, the teachers perceived Project-Based Learning as very interesting because it can bring learning alive in the classroom environment.
Based on the findings and preliminary data presented above, the researcher is interested in exploring more information about the implementation of Project-Based Learning carried out by English teachers in writing activities, particularly at SMP Negeri 1 Bulukumba, a school appointed by the government to be a model school for implementing the K-13. Two problem statements were formulated: 1. What is the teachers' perception of the Project-Based Learning Model? 2. How did the teachers implement the Project-Based Learning Model in writing activities based on the 2013 Curriculum?
Project Based Learning
According to Mulyasa (2014:145), Project-Based Learning is an instructional model that aims to focus learners on complex issues that need to be investigated, with the subject matter comprehended through investigation.
The aims of this learning model are also to guide learners in a collaborative project that requires integrating a variety of subjects (materials) in the curriculum, to allow learners to explore the material using a variety of ways that are meaningful to them, and to conduct experiments collaboratively. Some of the benefits of using PjBL, according to Aydin (2017), are skill improvement, real-world practice, improved discipline, better relationships among students, a better relationship between student and teacher, and a pleasant atmosphere in the classroom.
The syntax of Project-Based Learning, according to Faturrohman (2016), consists of designing a plan for the project, creating a schedule, monitoring the students' progress and project, presenting the result, assessing the outcome and experience, and evaluating.
1) Designing a Plan for the Project
The first step of Project-Based Learning always begins with essential questions (Nurohman, 2014:15). These questions can be proposed by the teacher or the students, or collaboratively between the teacher and students. The teachers' duties are to guide the students in making a plan for the project based on the questions posed and the core competencies. Planning is done collaboratively between teachers and learners; thus, students are expected to implement the project. Planning contains the rules, the selection of activities that can support answering the essential questions by integrating a variety of subjects, and knowing the tools and materials that can be accessed to help complete the project (Kosasih, 2014:99).
2) Creating a Schedule
Teachers and learners collaboratively construct a schedule of activities to complete the project. Activities in this phase include: (1) creating a timeline for completing the project, (2) setting the project deadline, (3) encouraging learners to plan new ways, (4) guiding learners when they choose a way that is not related to the project, and (5) requiring learners to give an explanation (reason) for the way they select (Kosasih, 2014:99).
3) Monitoring the Students' Progress and Project
The teacher is responsible for conducting and monitoring the activity of learners as they complete the project. Monitoring can be done by facilitating learners in each process; in other words, the teacher should act as a mentor for the students' activity. To support the monitoring process, it is important to create a rubric that can record all activities (Kosasih, 2014:100).

4) Presenting the Result
Students show their product and explain the process of making it and its advantages. This can be done in class discussions (Kosasih, 2014:100).
5) Assessing the Outcome and Experience
Assessment is done to assist teachers in measuring the achievement of standards; it plays a role in evaluating each learner's progress, provides feedback on the level of understanding already achieved by learners, and helps teachers prepare the next learning strategies (Nurohman, 2014:16).
6) Evaluating
At the end of the learning process, teachers and learners reflect on the activities and results of the project that has been run. The process of reflection is done either individually or in a group. At this stage, learners are asked to disclose their feelings and their experience in completing the project. Teachers and learners develop the discussion to improve performance during the learning process and eventually arrive at new findings (new inquiry) to address the problems posed (Kosasih, 2014:100).
Writing
The writing process is the stage one goes through in order to produce something in its final written form (Harmer, 2004, p.4). In practice, writers express the ideas in their minds in the form of a list, letter, essay, report, or novel. Written language is simply the graphic representation of spoken language, and written performance is much like oral performance; the only difference lies in graphic instead of auditory signals (Brown, 2001, p.335). In addition, when writing something, the writer usually expects somebody to read it. It should be easy for the reader to understand what the writer has written, although it might be difficult for other people to understand. The writer not only needs to know the process of writing but also needs to apply these processes to the work, which will help the writer organize ideas logically.
Writing is one of the four language skills taught in school. Writing is an important skill to be developed from the beginning of language instruction (Larsen and Anderson, 2013). On the other hand, writing is a powerful way to describe, examine, reflect on, and understand our thoughts, feelings, ideas, activities, and experiences (Yagelski, 2015). Additionally, writing is a productive written language skill. The purpose of writing skills is to transfer information from spoken language into written language. It takes great thinking to produce writing, which begins with getting the main idea, planning, and revising the procedure. Achieving the whole requires a specific skill that not everyone can develop (Ramadhani, 2013).
Curriculum 2013
Curriculum 2013 positions teachers as figures who hold important roles, especially in teaching and learning. The core and basic competencies make this curriculum different from the previous one. The main essences of this curriculum are implementing the scientific approach and student-centered learning.
The teaching and learning process should develop students' attitudes, skills, and knowledge; make them creative, innovative, and critical; and optimally achieve learning objectives. In such a way, the assessment should be authentic toward the input, output, and outcome in each teaching and learning process (Mulyasa, 2014:3-4).
Curriculum 2013 is meant to develop an active, creative, and joyful learning process for students. It is expected to produce golden generations who are productive, creative, innovative, and effective. This can be achieved through observing, listening, reading, questioning, reasoning, trying, and communicating (Retnaningsih, 2012:11). Although students are the subject of the teaching and learning process in Curriculum 2013, this does not mean that the teacher does not take any important role. As an implication of the policy, teachers are required to have skills in developing methods and approaches for teaching and learning. In addition, it is hoped that teachers can create conducive and effective classroom management (Mulyasa, 2014:7-9).
Method
This research applied the qualitative method with descriptive analysis, describing the teachers' perception of the use of Project-based Learning and the way Project-based Learning is implemented in writing activities.
Subject of the Research
The subjects of this research are the English teachers of SMPN 1 Bulukumba, five teachers in total. The researcher chose two of them as study subjects because, based on the data, these two teachers met the researcher's purposive sampling criteria.
Research Instrument
The researcher used two instruments: an open-ended questionnaire and an observation checklist. The open-ended questionnaire is used to capture teachers' perceptions of implementing Project-based Learning, while the observation checklist is used to document how the English teachers implement the Project-based Learning syntax.
Data Collection
1. Open-ended Questionnaire
The researcher collected data for the open-ended questionnaire by sharing the link https://forms.gle/ang5RkUUo5QM6Nyb9, already arranged in a Google Form, with both research subjects.
Observation
a. Prepare an observation checklist covering the procedures for Project-based Learning.
b. Ask permission to attend the class meeting.
c. Make observations throughout the lesson.
d. Keep a record of noteworthy events relevant to the research objectives.
Teachers' perception of the Project-Based Learning Model
According to Mulyasa (2014:145), Project-Based Learning is an instructional model that aims to focus learners on complex issues that need to be investigated and comprehended through investigation. This learning model is one that the two English teachers at SMPN 1 Bulukumba frequently employ in writing activities. In accordance with its objectives, Project-based Learning is a learning model capable of empowering students to act independently. As a result, teachers must ensure that students take an active role in project completion, encourage students to gather information and connect ideas, and ensure that projects are carried out according to plan to assist students in evaluating the outcomes of their projects. The teacher's role is important, especially in writing activities. According to Wening (2016), teachers play an important role in teaching writing because they need to create the right environment for students to generate ideas and be motivated to write.
PjBL is an effective self-learning strategy to be applied in writing activities
The two teachers in this study agreed that PjBL was an effective self-learning strategy for writing activities. According to Harmer (2007), teaching writing entails dealing with the future and assisting students in understanding their writing-composing process. This theoretical statement was supported by experimental research conducted by Kusmiyati (2020), who discovered that the Project-based Learning model affected writing skills. Larasati (2021) reported similar findings: several studies have found that Project-based Learning can help students improve their writing skills, and students were found to be more active in their writing classroom instruction.
PjBL is capable of developing a student-centered approach to class
The two teachers agreed with the above statement, adding that PjBL exists to fulfill the demands of the 2013 curriculum. According to Mulyasa (2014), in Curriculum 2013, teachers should create conducive and effective classroom management so that a student-centered approach is formed, with the teacher acting as a monitor in the teaching and learning process. According to Simpson (2012), the teacher is a facilitator and advisor in Project-based Learning, which ensures that the student-centered approach is implemented correctly.
PjBL strengthens the bond between teacher and student
Given the teacher's role as a Project-based Learning facilitator (Simpson, 2012), the teachers in this study disagreed with the statement above. However, one teacher confirmed that PjBL makes the learning environment enjoyable for both students and teachers. This stands in contrast to Larasati's (2021) findings, in which Project-based Learning strengthened the bond between the teacher and the students because the teacher monitored and supervised the students throughout the project work; the teacher can provide psychological and moral support and encouragement by simply being with students and spending time with them.
PjBL strengthens student relationships
The two teachers strongly agreed with the preceding statement because students must be active collaboratively (Kosasih, 2014). The relationship between students becomes closer given the students' roles in PjBL; Simpson (2012) argued that students' roles are as self-directed learners, team members or collaborators, and knowledge managers or leaders. Students' roles in PjBL demonstrate that they rely on one another and become more acquainted.
PjBL assists teachers in maintaining classroom discipline and a pleasant atmosphere
The two teachers in this study agreed, adding in support that PjBL runs according to a plan governed by the time allocation. Students will work in a disciplined and structured manner as they collaborate to complete projects on time. According to Nurrohman (2014), the teacher guides the students in creating a project plan so that the project is carried out in a directed manner, creating a pleasant atmosphere in the classroom.
PjBL helps students develop specific skills and abilities
The two teachers in this study agreed that PjBL helps students develop specific skills and abilities. According to Kosasih (2014), through PjBL, students can be creative and innovative and develop their potential through activities based on their learning, either individually or collaboratively.
PjBL assists students in developing critical thinking and creative skills
The two teachers agreed that PjBL encourages students to think critically and creatively. Implementing PjBL in the teaching and learning process will produce students capable of critical thinking, collaboration, and communication. In line with PjBL's objectives, Kosasih (2014) states that PjBL exists to help students develop their competencies and skills, including their ability to think creatively and critically. In her research, Ekasari (2020) demonstrated that using PjBL effectively increased students' cognitive and psychomotor levels. The PjBL approach can shape students into human resources capable of critical and creative thinking (Rahmania, 2021).
PjBL gives students real-world experience
The two teachers in this study agreed that PjBL provides students with work experience because the process of completing projects gives students the impression of real work. According to Kosasih (2014), students benefit from the materials they learn daily. Furthermore, PjBL's syntax gives the impression of collaborative and individual work: students plan an activity or product that they will produce (Kosasih, 2014).
PjBL Method makes students' abilities more apparent
The two teachers agreed because students' roles in PjBL are as team members or collaborators, requiring students to be responsible for their own parts based on their capacities and roles. According to Simpson (2012), the outcome is part of their responsibility in group work, so students must be team members willing to work and put in the effort to make it right.
PjBL can improve students' discipline toward assignment deadlines
The two teachers in this study agreed that PjBL could improve student discipline toward assignment deadlines. The logical reason is that PjBL has already set a deadline during the planning stage, so students must work consistently and optimally so as not to exceed the time limit. The second syntax of PjBL is creating a schedule (Faturrohman, 2016), which establishes a timeline for completing the project and sets the project deadline.
The implementation of Project Based Learning (PjBL) in Writing Activities
The writing process is the stage through which something is produced in its final written form (Harmer, 2004). Both classes produced a product as part of the learning process related to writing activities. The project assigned by the two teachers was done in groups. Working on projects in groups or individually is one of the PjBL characteristics (Kosasih, 2014).
Six syntaxes should generally be implemented by the teacher when teaching. Faturrohman (2016) defined the PjBL syntax as designing a plan for the project, creating a schedule, monitoring students' progress and project, presenting the result, assessing the outcome and experience, and evaluating. The following is a discussion of the observation results for the two teachers' implementation of PjBL in writing activities:

1. Designing a plan for the project
The first step in creating a project is determining what project will be created. The first step in designing a project in PjBL is identifying the fundamental questions; PjBL always starts with fundamental questions (Nurrohman, 2014). According to Faturrohman (2016), several activities should be carried out when determining the basic questions: the teacher asks students to determine questions that contain 5W + 1H elements, the teacher asks questions to students, the teacher asks students to determine the investigation variables, the teacher asks students to determine questions based on the variables of investigation, and the teacher asks students to determine hypotheses. The fifth point, "the teacher asks students to determine hypotheses," was not implemented by the two teachers in this study, even though the formulation of this hypothesis in PjBL is intended to train students in problem-solving based on the concept of the project to be worked on (Murniarti, 2016). T1 asked students to pose questions independently based on the results of video observations, while T2 assisted students in posing questions related to the results. These two different approaches are in line with the explanation (Nurrohman, 2014) that the fundamental question can be proposed by either the teacher or the students, or collaboratively by the teacher and students.
Based on the results of the video observations, students design the type of project that will be produced. The two teachers' videos contain projects related to the material being taught. Students learn various skills from the videos shown, including how to write sentences and complete projects. According to Alfaki (2015), word choice is one of the most difficult aspects of writing; students are expected to be able to create the projects to be worked on without experiencing cognitive difficulties when writing sentences related to the writing content. T2 allowed students to choose the type of greeting card to be written when working on the project. Douglas Brown (1994) offers one approach to teaching writing that allows students to discover what they want to write.
Creating a schedule
In PjBL, teachers and students collaborate to create a project completion schedule (Faturrohman, 2016). T1 and T2 met the scheduling requirements in their PjBL implementation: creating a timeline for completing the project, setting the project deadline, encouraging learners to plan new ways, guiding learners when they do something that is not related to the project, and requiring learners to explain their selected way (Kosasih, 2014).
T1 helped students plan work time outside of face-to-face sessions when creating the schedule. This helps avoid a common constraint in writing activities, namely limited credit hours; according to Almubark (2016), more credit hours should be added to teach writing skills. T2 assisted students in arranging the schedule when creating one. T2 made suggestions to ensure that students actively participate in working on projects on time. One of the students' roles in PjBL is as a team member or collaborator (Simpson, 2012). In preparing the schedule, T2 also suggested dividing the roles of each student in the group for project completion.
Monitoring the Students' Progress and Project
The teacher controls and monitors the students' activities so that the project can be completed (Faturrohman, 2016). T1 and T2 used roughly similar methods to track the progress of student project work. While monitoring student progress, the two teachers inquired about the difficulties encountered in completing the project. T1 focused on writing sentences that introduce "my family and their professions," whereas T2 focused on writing greeting cards using a generic structure.
While monitoring the progress and the project, the teachers focused on observing students' writing skills in the project work. The result of writing is not instantaneous (Harmer, 2007): students must brainstorm ideas, choose vocabulary, write, edit, and publish a writing project. The two teachers did an excellent job of monitoring student progress and project activities. The teachers' role in PjBL is as facilitators and advisors (Simpson, 2016).
Presenting the result
The fourth stage of Project-Based Learning is the project presentation. This can be done in a whole-class discussion (Kosasih, 2014). T1 and T2 asked each group to present their project in front of the class. T1 selected a student to present the result of their work project. T2, on the other hand, immediately invited group members to give project presentations. This demonstrates that self-directed learners are more effective in the T2 class. Students who do the task within the group are self-directed learners (Simpson, 2012).
Assessing the Outcome and Experience
Assessment is done to assist teachers in determining achievement standards (Nurrohman, 2014). This stage occurs during or after the project presentation. T1 and T2 approached the assessment stage differently, including in the assessment instruments each of them used. For writing activities, T1 and T2 evaluated students' writing abilities with respect to the linguistic elements used in the project.
T1 fully conducted the student project assessments, beginning with monitoring project work and continuing through the presentation stage. T1 provided feedback on student project work at the end of the presentation. T1 did not involve students in conducting assessments; instead, students were only allowed to provide oral suggestions during the project presentation.
Students assisted T2 in carrying out the assessment. Peer assessment is the type of assessment in which students participate. T2 created an assessment rubric, which was distributed during the project presentation. T2 asked students to evaluate their friends who delivered the project presentations. The scoring rubric provided relates to the generic structure of greeting cards. Teachers must create an assessment tool, such as a rubric, in the early stages of Project-based Learning (Simpson, 2012). T2 also offered suggestions for improvements to the projects that had been presented at the end of the presentation.
Evaluating
At the end of the learning process, teachers and students reflect on the project's activities and outcomes (Faturrohman, 2016). T1 and T2 asked students to share their experiences while working on the project at the end of the activity. According to Kosasih (2014), during the evaluation stage, learners are asked to disclose their feelings and experiences while working on the project.
T1 concluded the learning activity by answering the basic questions that students had raised during the project design stage. T2 did the same thing, but at the end of the lesson, T2 assigned individual greeting-card-making assignments. T2 emphasized the importance of paying attention to the content of the greeting cards. According to Douglas Brown (1994), giving students time to write and rewrite is part of the approach to writing activities.
Conclusions
The teachers' perception of PjBL in the teaching and learning process is positive. This is demonstrated by the degree of agreement expressed in response to each question. Both teachers agreed that PjBL assists teachers in maintaining classroom discipline and a pleasant atmosphere, and that PjBL could improve students' discipline toward assignment deadlines. Meanwhile, one statement received only moderate agreement: PjBL strengthens the bond between teacher and student. The teachers only moderately agreed with this because they recognize that the teacher's role as a facilitator in the classroom is more prominent for the project's duration.
The teachers' implementation of PjBL in writing activities still does not adhere to the PjBL syntax, especially at the stages of determining basic questions and evaluating the project. The first teacher did not ask students to develop hypotheses before beginning work on the project, and neither did the second teacher. The second teacher appeared to be more active in determining the basic questions, as evidenced by the fact that students were not involved in determining questions containing 5W + 1H elements. The first teacher conducted direct assessments without involving students in the assessment stage. Meanwhile, the second teacher conducted the assessment together with students: students participated in peer assessments using the scoring rubric provided by the teacher. | 2023-03-04T16:08:38.996Z | 2023-03-02T00:00:00.000 | {
"year": 2023,
"sha1": "de46ad84b30e683a9faa7bf56b3a1f84be272a8e",
"oa_license": "CCBY",
"oa_url": "https://al-kindipublisher.com/index.php/ijels/article/download/4653/4157",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "48701ac58ead1059434069ad31296f7077a8163e",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
51904872 | pes2o/s2orc | v3-fos-license | Spatial distribution and correlation characteristics of heavy metals in the seawater, suspended particulate matter and sediments in Zhanjiang Bay, China
Concentrations of eight heavy metals (i.e., Fe, Mn, Cr, Ni, Cu, Zn, Cd and Pb) in the seawater, suspended particulate matter (SPM) and sediments of the Zhanjiang Bay were investigated in 2014. The concentrations of metals were generally low in the seawater and sediments of the Zhanjiang Bay in winter and summer, indicating good environmental quality in the bay. The distribution patterns of Fe and Mn in three phases indicated the influence of terrestrial inputs. The partition coefficients log(Kd) between the dissolved and particulate phases showed a general decrease in the order of Pb≈Cd>Fe≈Mn>Ni≈Cr>Zn>Cu. The concentrations of some metals in the dissolved and particulate phases showed seasonal variations. Phytoplankton production and complexation reactions may contribute to this phenomenon. The relationships among metals in different phases were different, and there were few close relationships among metals in the dissolved phase, many close relationships in the particulate phase, and more close relationships in the sedimentary phase. This finding may be related to the different mobility levels of metals in different phases.
Introduction
Of all organic and inorganic contaminants, heavy metals are of particular concern due to their environmental persistence, biogeochemical recycling and potential ecological risks. Many aquatic organisms assimilate dissolved metals directly, causing unwanted bioaccumulation. Particulate or sedimentary metals are easily assimilated and accumulated by filter-feeding organisms, especially bivalves [1]. Metals such as Cu and Zn are essential biological micronutrient elements that are required for the growth of many aquatic organisms, but these micronutrients can become toxic at high concentrations [2]. Other metals, such as Cr, Pb, and Cd, are not required for the growth of aquatic organisms, and even trace amounts can be highly toxic to marine organisms [3,4]. In estuarine and coastal environments, heavy metals can generally be partitioned into dissolved, particulate and sedimentary phases. Heavy metals in different phases can also interact with each other. For example, dissolved metals can be transformed into the particulate phase through adsorption and flocculation [5]. Metals in the particulate phase can be desorbed from particulate matter into the water body or deposited into the sediment, while they can also be released back into the water column from the sediments by resuspension [6]. Heavy metals in different phases present different biogeochemical behaviors due to their different responses to environmental changes [7]. The processes of desorption, phytoplankton assimilation and redox conditions have profound influences on the physical and chemical behaviors of dissolved heavy metals [8,9]. Suspended particulate matter (SPM) has a high capacity to interact with a range of inorganic and organic contaminants through surface complexation, ligand exchange, hydrophobic association, and so on [6]. Thus, the content of SPM plays an important role in marine environmental quality by affecting the concentrations and distributions of heavy metals. Marine environmental quality standards for heavy metals have therefore been established in many countries to protect the marine environment [6,10,11]. Accordingly, the study of the concentrations and distributions of heavy metals in seawater, SPM and sediment, and of the relationships among heavy metals in these different phases and related environmental parameters, is critically important.
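One standard way to quantify this solid-solution partitioning (and, assuming the conventional definition, the quantity behind the log(Kd) values reported in the abstract) is the distribution coefficient, which in LaTeX notation reads

K_d = \frac{C_p}{C_d}, \qquad \log K_d = \log_{10}\!\left(\frac{C_p\,[\mathrm{mg\,kg^{-1}}]}{C_d\,[\mathrm{mg\,L^{-1}}]}\right)

where C_p is the metal concentration in the SPM (per unit dry mass), C_d is the dissolved concentration, and K_d therefore carries units of L/kg. For example, a metal measured at 100 mg/kg in SPM and 0.001 mg/L in solution gives K_d = 10⁵ L/kg, i.e., log(Kd) = 5, indicating strong particle affinity.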
Zhanjiang Bay (ZJB) is located in the Leizhou Peninsula of southern China and connects with the South China Sea (SCS). In recent decades, rapid economic growth and urban development have occurred in the area surrounding ZJB [12,13]. In 2014, the GDP of Zhanjiang was 2.3 × 10³ million yuan, which was 10% higher than that in 2013 [14]. The wastewater load discharged by Zhanjiang City in 2014 was 346 million tons [15]. According to the Bulletin of Marine Environment Status of Guangdong Province in 2014, part of the seawater near Zhanjiang was polluted with high concentrations of phosphate and inorganic nitrogen [16]. ZJB is a complex region with respect to geography and hydrodynamics. The waters in ZJB are profoundly influenced by two water regimes: river discharge and oceanic water from the SCS. There are many sources of metals in ZJB, mainly including river runoff, wastewater discharge and atmospheric deposition. The dynamic variations and biogeochemical processes related to heavy metals in ZJB may have important influences on environmental quality in the bay. Therefore, it is necessary to understand the spatial-temporal patterns of metals in ZJB. However, few studies have discussed the characteristics of the spatial-temporal variations of heavy metals in this area. In this study, eight major/trace metals (Fe, Mn, Cr, Ni, Cu, Zn, Cd and Pb) with important environmental significance [11,17,18] were investigated in ZJB in the winter and summer of 2014. Through analyzing the distribution patterns of metals in the three phases as well as the variations of water temperature, salinity, dissolved oxygen, SPM concentrations and chlorophyll a (Chl a) in ZJB, we examined the possible relationships among different heavy metals and related environmental parameters to illuminate the possible behaviors and transition processes of heavy metals in different phases.
Study area
ZJB is located in the northeast of the Leizhou Peninsula, in the southwest of Guangdong Province, China (Fig 1). ZJB is a semi-enclosed, drowned-valley lagoon with a narrow tidal entrance less than 2 km wide. The Suixi River is the main river discharging into ZJB. The water area of the bay is approximately 190 km², framed by the red dashed lines in Fig 1. ZJB is a natural deep-water harbor. The normal waterway depth ranges from 8 m to 28 m, and the deep-water channel, with depths between 26 m and 44 m, is over 10 km long. ZJB lies in a subtropical monsoon climate zone with warm temperatures and high rainfall from April to September each year; the cool and dry season lasts from November to February.
Sampling and pretreatment
Seawater and SPM samples were collected at twelve stations in ZJB (Fig 1) in January (winter) and June (summer) 2014. Because sediments are more stable and change little with season, surface sediments were sampled only once, in January 2014. Two parallel water samples were collected manually at a depth of 0.5 m at each station, using an in-house developed telescopic plastic barrel fitted with an adaptor into which a sampling bottle could be inserted. Each seawater sample was immediately filtered through acid-treated cellulose acetate filters of known weight (0.45 μm pore size, 47 mm diameter, Thermo Fisher, USA) using a polycarbonate filtration holder (Thermo Fisher Nalgene 300-4100, USA). The filtrate was collected in a precleaned 1 L polyethylene bottle and acidified to a pH of approximately 2 with high-purity HNO₃. The sample bottles had been sonicated in 30% HNO₃, 30% HCl, and ultrapure water (UW) (Millipore Elix3+A10, 18.2 MΩ·cm resistivity) for 3 h in a water bath kept at 60˚C; at the end of this process, the bottles were thoroughly rinsed with UW, dried in a vacuum drying oven, double bagged in polyethylene bags and stored until use. The polycarbonate filtration holder was precleaned with 2% nitric acid and rinsed with UW before use.
Surface sediments (upper 3 cm) were collected with a stainless-steel grab sampler and placed in acid-cleaned polyethylene bags. The collected samples were stored in a cooler box with ice bags, frozen at -20˚C within 12 h, and kept frozen until further treatment. The frozen samples were freeze-dried in a vacuum freeze dryer (Heto-Holten, LyoPro3000), ground to pass through a 200-mesh nylon sieve and kept in clean containers until analysis.
Metal analysis
The filters containing SPM were oven-dried at 105˚C and weighed again to determine the amount of SPM. They were then completely digested with a mixture of concentrated nitric acid, hydrofluoric acid and H₂O₂ (5:2:2) in a Teflon digestion tank using a microwave digester (CEM MARS, USA) with the temperature program recommended by the manufacturer. The extracts for metal analysis were diluted to a final weight of 50 g with deionized water in a PE plastic bottle. For surface sediment samples, three aliquots of ~200 mg (dry weight) were digested following the same procedure. To avoid contamination and keep blank values as low as possible, extreme care was taken during sample preparation, filtration and digestion; all concentrated acids used were of guaranteed reagent grade; deionized water of at least 18 MΩ·cm resistivity (Millipore Elix3+A10, USA) was used for washing and solution preparation; and all other experimental accessories were carefully cleaned sequentially with detergent, tap water, 10% nitric acid and deionized water.
The dissolved metal concentrations in filtered and acidified seawater were measured by ICP-MS (Agilent 7500Cx, USA) after being diluted 10 times with 2% nitric acid (volume ratio). To guarantee measurement quality, an internal standard sample (Part# 5183-4680, including Sc, Ge, Y, In, Tb and Bi, Agilent Technologies) and certified reference seawater with trace metals (GBW(E)-080040, from the Second Institute of Oceanography, SOA of China) were used to ensure acceptable results of the seawater samples. Metal concentrations of SPM and surface sediment in an aliquot of digest solution were also determined by ICP-MS, and the Chinese national reference material of GBW-07314 was used to control the analytical quality of the SPM and sediments. The concentrations in the analytical blanks, the results of the certified reference material analyses, and all of the recoveries shown in Table 1 confirmed that the data were reliable.
The analysis of other environmental factors
Temperature (T), salinity (S) and pH were measured in situ during sampling. A salinometer (Thermo Fisher Eutech Salt 6+, USA) was used to measure salinity, and a pH meter (Thermo Fisher Orion Star A221, USA) was used for pH. Dissolved oxygen (DO) was determined by standard iodometric titration within 24 h of the pretreated samples being transported to the laboratory. For the determination of Chl a, one liter of seawater from each station was immediately filtered through a cellulose acetate membrane onboard, and the filters were stored at -20˚C until analysis in the laboratory. The Chl a retained on the filters was extracted with 90% acetone and determined using a spectrophotometer (Shimadzu UV-2450, Japan).
Methods of data statistics and analyses
Data were organized and analyzed using common software packages, including Microsoft Office Excel, Golden Software Surfer and SPSS Statistics. Excel was used to compile the data tables and to plot concentrations against stations; Surfer was used to map the study area and sampling stations.
Results
For clarity of description, we divided the study area into three regions. Three stations (S1, S2 and S3) at the mouth of the bay were classified as the bay mouth (Fig 1). Three stations (S10, S11 and S12) in the inner part of the bay were classified as the inner bay (Fig 1). The remaining stations (S4-S9) represented the middle of the bay (Fig 1).
Environmental background during the winter and summer
The temperature of the surface water in ZJB showed obvious seasonal differences. The mean water temperature was 17.6˚C in winter (range: 17.3~17.8˚C) and 30.0˚C in summer (range: 27.7~30.9˚C) (Fig 2(A)). There were no obvious spatial variations in temperature in either season (Fig 2(A)). The salinity of the ZJB water ranged from 27.0 to 29.7 in winter and from 25.7 to 30.3 in summer (Fig 2(B)), increasing from the inner bay toward the bay mouth in both seasons (Fig 1; Fig 2(B)). The pH of the surface water was generally higher in winter than in summer, with a mean difference of 0.27 between the two seasons. Unlike salinity, pH showed no obvious spatial variations in either season (Fig 2(B) and 2(C)). The DO concentrations of the surface water varied from 7.77 mg L⁻¹ to 11.77 mg L⁻¹ in winter and from 5.45 mg L⁻¹ to 8.53 mg L⁻¹ in summer, indicative of well-oxygenated waters in this bay (Fig 2(D)). The range of DO concentrations in ZJB was comparable with that in other estuaries or bays with fewer pollutants [8].
The SPM concentrations in ZJB ranged from 5.9 to 20.9 mg L⁻¹ in winter (average: 11.1 mg L⁻¹) and from 3.4 to 10.3 mg L⁻¹ in summer (average: 6.7 mg L⁻¹) (Fig 2(E)). In winter, the SPM concentrations in the bay mouth were higher than those in the middle bay and the inner bay (Fig 2(E)); this pattern was not observed in summer (Fig 2(E)). The average concentrations of Chl a in winter and summer were 28.3 and 8.08 μg L⁻¹, respectively (Fig 2(F)), indicating more phytoplankton in winter than in summer.
Heavy metals in different phases
Heavy metals in seawater. The concentrations of heavy metals in the seawater of ZJB are listed in Table 2. Among the eight studied metals, the mean concentrations in the two seasons decreased in the order Fe, Zn, Mn, Cu, Cr, Ni, Pb, and Cd (Table 2). According to the National Standard of China for Seawater Quality (SWQ), GB 3097-1997 [19], seawater is classified into four grades (Grades I-IV) corresponding to different functional zones, and these grades are routinely used to evaluate seawater quality in China. The mean concentrations of dissolved Zn, Cu, Cr, Ni, Pb and Cd in ZJB were all within the range of SWQ Grade I (Table 2), indicating that the water of ZJB was not polluted by these metals.
For comparison purposes, the eight heavy metal concentrations in seawater reported in other coastal areas are also listed in Table 2. The mean concentrations of the metals in ZJB were generally within the ranges reported in other coastal areas, as shown in Table 2.
Compared with the concentrations reported for seawater from Bohai Bay [20], the average concentrations of Cu, Zn, Cd and Pb recorded in ZJB were generally lower. The average concentrations of Cu and Cd in the seawater of ZJB were comparable only with those in Jiaozhou Bay [21]. The average concentrations of Fe, Cu, Zn and Cd in ZJB were higher only than those in the Yangtze River Estuary [24,25]. Some of the metal concentrations in ZJB showed obvious seasonal variations. The mean Zn concentration was obviously higher in summer, whereas the mean concentrations of Ni and Cd were obviously higher in winter (Table 2). The mean concentrations of Fe, Mn, Cr, Cu and Pb exhibited no obvious seasonal variations, with seasonal differences generally within 20% (Table 2). The spatial distributions of the dissolved metals in both seasons are shown in Fig 3; several metals, including Fe and Mn, showed decreasing tendencies from the inner bay to the bay mouth (Fig 3). These results may indicate that the main sources of dissolved Fe and Mn in ZJB were terrigenous, while the sources of the other dissolved metals may be more complicated.
Heavy metals in suspended particulate matter. The concentrations of the eight heavy metals (Fe, Mn, Cr, Ni, Zn, Cu, Cd and Pb) in the suspended particulate matter of ZJB are presented in Table 3. The mean concentrations of Fe, Mn, Cr, Ni, Zn, Cu, Cd and Pb in the two seasons decreased in that order. The mean concentrations of all particulate metals except Fe, Cr and Ni were within the ranges reported for the other coastal areas listed in Table 3. The mean concentration of Fe recorded in this study was lower than the concentrations in the Bahía Blanca Estuary [27] and the major river estuaries of eastern Hainan Island [23] (Table 3). The mean concentrations of Cr and Ni in this study were higher than those in the Yellow River Estuary [28] (Table 3).
Most of the metals in SPM presented strong seasonal variations. The mean concentrations of Fe, Mn, Ni, Zn and Pb in SPM were obviously higher in winter than in summer, while Cu and Cd showed the reverse pattern (Table 3). Unlike some metals in the dissolved phase, whose distributions decreased from the inner bay to the bay mouth (Fig 3(A), 3(B), 3(F) and 3(G)), most of the metals in the particulate phase showed no obvious spatial patterns (Fig 4). This suggests that the sources and/or behaviors of particulate metals differ to some extent from those of dissolved metals.
Heavy metals in surface sediments. Heavy metal concentrations in the surface sediments of ZJB were measured during the winter survey. The mean concentrations of the heavy metals in the surface sediments decreased in the order Fe, Mn, Zn, Cr, Pb, Ni, Cu and Cd (Table 4). The National Standard of China for Marine Sediment Quality (MSQ), GB 18668-2002 [35], is widely used to judge the potential risks of metals in marine sediments [11]. This standard classifies marine sediments into three classes based on the function and protection targets of the marine area. The mean concentrations of Zn, Cr, Pb, Cu and Cd in ZJB sediments were all within the range for MSQ Grade I, indicating that these metals were at acceptable levels (Table 4) and that the surface sediments of ZJB did not suffer from metal contamination.
For comparison purposes, the mean concentrations of heavy metals in the upper continental crust (UCC) and those of surface sediments reported in some coastal areas are also shown in Table 4. In the surface sediments of ZJB, the mean concentrations of Cr, Cd and Pb were clearly higher than those in the UCC (Table 4). Compared with those reported in coastal Bohai Bay and western Xiamen Bay, which are surrounded by heavily urbanized zones in China, the mean concentrations of Cr, Ni, Cu, Zn and Cd were lower in ZJB. The mean concentration of Pb that was recorded in this study was higher than the concentrations in the Daya Bay, the Yangtze River Estuary, coastal Bohai Bay and the eastern continental shelf of Hainan Island; however, the concentrations in this study were lower than those in western Xiamen Bay and the Pearl River Estuary. Fig 5 shows the spatial distributions of heavy metals in the surface sediments of ZJB. For most of the heavy metals in ZJB, their concentrations were generally low in the bay mouth compared with those in the middle bay and the inner bay, implying the potential influence of terrestrial inputs (Fig 5).
Partitions of heavy metals between dissolved and particulate phases. The speciation of metal elements in seawater systems is affected by many environmental parameters, including temperature, salinity, pH, and suspended solids [36,37]. The partition coefficient (K_d), defined as the ratio of the particulate metal concentration (μg g⁻¹) to the dissolved metal concentration (mg L⁻¹) [38-41], is suitable for evaluating the partitioning balance of heavy metals between the dissolved and suspended phases. A higher log(K_d) value indicates a stronger affinity between the metal and suspended particles, and a lower log(K_d) value means more of the metal exists in the dissolved phase. Based on the data from both winter and summer, the mean log(K_d) values for the metals in ZJB decreased in the order Pb≈Cd>Fe≈Mn>Ni≈Cr>Zn>Cu (Table 5). This result suggests that among the eight metals in ZJB, Pb and Cd were most strongly bound to SPM, while Cu and Zn were least partitioned into the particulate phase. The different partition behaviors are determined by the specific physical and chemical characteristics of the metals [39,41].
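To make the definition concrete, here is a minimal sketch that computes log(K_d) from a particulate concentration in μg g⁻¹ and a dissolved concentration in mg L⁻¹; the numerical values are hypothetical placeholders for illustration, not measurements from this study.

```python
import math

def log_kd(particulate_ug_per_g: float, dissolved_mg_per_L: float) -> float:
    """log10 of the partition coefficient Kd, defined as the ratio of the
    particulate metal concentration (ug/g) to the dissolved metal
    concentration (mg/L), following the definition in the text."""
    return math.log10(particulate_ug_per_g / dissolved_mg_per_L)

# Hypothetical example values (not data from this study): a particle-reactive
# metal such as Pb tends to give a larger log(Kd) than a metal such as Cu
# that remains largely in solution.
print(log_kd(50.0, 0.001))  # Pb-like case: log10(50 / 0.001) = 4.70
print(log_kd(30.0, 0.003))  # Cu-like case: log10(30 / 0.003) = 4.00
```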
The high particle reactivities of Pb and Cd promote the association of these two metals with particulate matter, which leads to higher log(K_d) values. In contrast, their low particle reactivity and stronger tendency to form stable organic complexes allow Zn and Cu to remain more easily in the dissolved phase [41]. The partition coefficients (log(K_d)) in ZJB were roughly of the same order of magnitude as those in other estuaries and bays worldwide (Table 5). Compared with Jiaozhou Bay in China [42], the accumulation of Zn, Cd and Pb in SPM seems to be much stronger in ZJB. Fig 6 shows the spatial variations of log(K_d) for the different metals in winter and summer. There were no obvious spatial variations in log(K_d) for any metal in either season (Fig 6). However, some seasonal variations could be seen in the log(K_d) values of some metals: the values were generally higher in winter than in summer for Fe, Zn and Pb, and the reverse was true for Cu and Cd.
Correlation analysis
In a complex marine system, no environmental factor varies independently; each is interdependent with the others, and these interdependencies can be examined by correlation analysis. Correlation analysis is based on Pearson or Spearman product-moment coefficients, and the corresponding results can be presented as covariance or correlation matrices. The covariance is a measure of the relationship between two variables and depends on the variability of each; correlation analysis estimates the strength of the relationship between any pair of variables [44]. Among the environmental factors in the ZJB water, pH, DO, SPM and Chl a all showed significant negative correlations with temperature, and Chl a showed significant positive correlations with pH, DO and SPM (Table 6). Because ZJB is a subtropical marine system, its water temperature in winter is rather warm (average: 17.6˚C) and suitable for algal growth [45,46]. In such warm water, marine phytoplankton can take up more CO₂ and produce more O₂ [47,48], which may contribute to the generally high pH and DO in winter in ZJB (Fig 2(C) and 2(D)). Accordingly, the Chl a concentrations in the ZJB water were generally higher in winter (Fig 2(F)).
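As a sketch of how such a correlation matrix can be produced, the snippet below computes Pearson coefficients and p-values for a few environmental variables; the arrays are randomly generated stand-ins for the station data, not values from this study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_stations = 24  # e.g. 12 stations x 2 seasons (illustrative only)

# Stand-in variables; the real inputs would be the measured station data.
temperature = rng.normal(24.0, 6.0, n_stations)
chl_a = 30.0 - 0.8 * temperature + rng.normal(0.0, 2.0, n_stations)
do = 12.0 - 0.2 * temperature + rng.normal(0.0, 0.5, n_stations)

for name, series in [("Chl a", chl_a), ("DO", do)]:
    r, p = stats.pearsonr(temperature, series)
    print(f"T vs {name}: r = {r:.2f}, p = {p:.3g}")
```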
Relationships between dissolved metals and environmental factors.
In the ZJB water, many metals in the dissolved phase showed significant correlations with environmental parameters (Table 6). Fe showed a significant positive correlation with temperature and significant negative correlations with salinity and pH. Mn showed significant negative correlations with salinity, pH and SPM. No significant correlations were found between Cr and the other environmental parameters. Ni showed a significant negative correlation with temperature and significant positive correlations with pH, DO, SPM and Chl a. Cu showed a significant negative correlation with temperature and significant positive correlations with pH, DO and Chl a; significant positive correlations between dissolved Cu and DO have also been observed in other estuaries and bays [3,8]. Zn showed a significant positive correlation with temperature and significant negative correlations with pH, DO, SPM and Chl a. The correlations between Cd and the environmental parameters were similar to those of Cu. Pb showed a significant negative correlation only with SPM.
The different correlations between dissolved metals and environmental parameters result from the metals' various behaviors and/or their different sources and/or sinks. Salinity had significant negative correlations with Fe and Mn, indicating that terrestrial inputs strongly contributed to the distributions of Fe and Mn in the ZJB water. Ni, Cu and Cd in the water showed significant positive correlations with pH, DO and Chl a, suggesting that these metals have close relationships with phytoplankton production. High primary production usually leads to high concentrations of Chl a and DO and high pH, and DOC concentrations may also increase during phytoplankton growth [49]. Some metals form complexes with organic matter, which can keep them in the dissolved phase [8]; this explains the significant positive correlations between these three metals (i.e., Ni, Cu and Cd) and pH, DO and Chl a. In contrast to Ni, Cu and Cd, Zn had the reverse relationships with pH, DO and Chl a. Bruland and Lohan [50] reported that Zn is a nutrient-type metal; its enrichment in phytoplankton may therefore account for the significant negative correlations between Zn and pH, DO and Chl a. The relationships between temperature and Ni, Cu, Zn and Cd may be influenced by the variations in Chl a to some extent. In the ZJB water, the mean winter temperature of 17.6˚C was suitable for rapid phytoplankton growth, resulting in relatively high concentrations of Chl a and DOC (Fig 2(A) and 2(F)) [43,44]. Concentrations of dissolved Ni, Cu and Cd increased in winter owing to their complexation with organic matter, as discussed above, while the high winter Chl a concentrations favored the enrichment of Zn by phytoplankton [2]. As a result, Ni, Cu and Cd correlated significantly and negatively with temperature, and Zn correlated significantly and positively with temperature (Table 6). The significant negative correlation between Pb and SPM may reflect the fact that high SPM concentrations provide more adsorption surfaces and can thus adsorb more dissolved Pb [8].
Owing to the various responses of dissolved metals to environmental conditions, the correlations among the eight metals themselves also differed in the ZJB water (Table 6). The opposite responses of Zn versus Ni, Cu and Cd to phytoplankton production, as discussed above, may account for the significant negative correlations between Zn and each of Ni, Cu and Cd. Although Zn behaved differently from Fe and Mn in some respects, significant positive correlations were found between Zn and Fe and between Zn and Mn, which may indicate that these metals behave similarly in other respects [41]. This result needs to be explored further in future studies.
Relationships among particulate metals and environmental parameters. In the ZJB environment, the relationships between particulate metals and environmental parameters differed to some extent from those between dissolved metals and environmental parameters (Table 6). Particulate Fe had a significant negative correlation with temperature and significant positive correlations with pH, DO, SPM and Chl a, which may imply that high primary production favored the enrichment of Fe in SPM [51]. Although particulate Mn behaved roughly similarly to particulate Fe, one evident difference was that particulate Mn, unlike particulate Fe, showed no significant correlation with SPM. The different adsorption and desorption behaviors of Fe and Mn (Table 6), which can be inferred from the relationships between dissolved Fe, Mn and SPM, may be responsible for this difference. Similarly, neither dissolved nor particulate Cr correlated significantly with the related environmental parameters. Particulate Ni showed a significant positive correlation only with pH, which may reflect the increased adsorption capacity of SPM at high pH. Particulate Cu showed a significant positive correlation with temperature and significant negative correlations with pH, DO, SPM and Chl a. The complexation of Cu with dissolved organic matter, which can be deduced from the relationships between dissolved Cu and the related environmental parameters, contributed to the negative correlations between particulate Cu and some environmental parameters [12]; a similar pattern was seen for particulate Cd. Generally, significant positive relationships between metals in the particulate phase suggest similar behaviors or sources [5]. Significant positive relationships were observed between particulate Pb and particulate Fe, Mn, Cr, Ni and Zn, probably indicating similar behaviors or sources of these metals in the particulate phase in ZJB [5]. Although significant positive relationships were also observed between particulate Cu and Cd, these two particulate metals were not significantly correlated with the other particulate metals, indicating that the behaviors or sources of particulate Cu and Cd differed from those of the other particulate metals. The complexation of Cu and Cd with dissolved organic matter, as discussed above, may be the main reason for this difference [4].
Relationships among sedimentary metals and environmental parameters. Generally, salinity [52] reflects the influence of terrestrial input to a certain degree in coastal regions and bays, and SPM ultimately settles onto the surface sediments. Accordingly, these two parameters (i.e., salinity and SPM) showed significant correlations with most of the sedimentary metals in ZJB (Table 6), where the sedimentary metals in the nearshore areas were generally terrigenous and SPM was the main source of the surface sediments [11,28]. Interestingly, the SPM concentrations generally showed significant negative correlations with most sedimentary metals (Table 6); this may be related to the size composition of the SPM and the hydrodynamic conditions, which require further study. Compared with the metals in the dissolved and particulate phases, sedimentary metals seemed to be less influenced by the water environmental parameters, such as T, pH, DO and Chl a, as inferred from the relatively few significant correlations between sedimentary metals and these parameters (Table 6).
Relationships among partition coefficients and related parameters. As noted above, log(K_d) evaluates the partitioning of heavy metals between the dissolved and suspended/sediment phases: a higher log(K_d) value indicates a stronger affinity between a metal and suspended particles, and a lower value means more of the metal exists in the dissolved phase [38,39]. Table 7 shows the correlations among the partition coefficients of the heavy metals and the related environmental parameters. Significant positive correlations (Table 7) were found among the partition coefficients of Fe, Mn, Zn and Pb, indicating that these metals partition similarly between the dissolved and particulate phases. The partition coefficients of Cu and Cd were negatively correlated with that of Fe, indicating that the partition behaviors of Cu and Cd differed from that of Fe. The partition behavior of Ni was similar to those of Cu and Cd, as inferred from the significant positive correlations between their partition coefficients.
Based on the results in Table 7, higher temperatures seem to promote the desorption of Fe, Mn, Zn and Pb from SPM, which agrees well with the general adsorption rules of physical chemistry [53]. For Ni, Cu and Cd, however, the correlations between log(K_d) and temperature showed the opposite pattern; the relatively low concentrations of these three metals may be a possible cause. Except for Cr, the partition coefficients of all metals were significantly correlated with several environmental parameters (i.e., pH, DO, SPM, and Chl a) (Table 7), suggesting that these environmental factors may regulate the partitioning of metals between seawater and SPM in ZJB. High phytoplankton primary production usually leads to high concentrations of Chl a, DO and SPM and high pH [8]. From the correlation analysis between the partition coefficients and the environmental parameters, we can deduce that high primary production causes Fe and Pb to be more easily partitioned into the particulate phase and Cu and Cd to be more easily partitioned into the dissolved phase; similar conclusions have been reached in other studies [3,8,54]. Somewhat similarly to Fe and Pb, high concentrations of Chl a and DO and high pH favored the partitioning of Mn and Zn into the particulate phase, whereas high concentrations of Chl a and DO seemed to favor the partitioning of Ni into the dissolved phase.
Principal component analysis
Principal component analysis (PCA) is a multivariate exploratory technique with two main applications: reducing the number of variables and detecting relationships among them [55,56]. This method was used to identify principal components from three groups of parameters in ZJB: dissolved metals and environmental parameters (Group 1), particulate metals and the related environmental parameters (Group 2), and sedimentary metals and the environmental parameters in the water column (Group 3) (Table 8). The first three principal components were extracted from each group (Table 8). For the metals in the dissolved phase and the related environmental parameters, three principal components (PC1-PC3) accounted for 81.4% of the total data variance. The PC1 of this group, accounting for 53.3% of the variance, had high positive loadings for temperature, pH, DO, SPM, Chl a, Ni, Cu and Cd and high negative loadings for Fe, Mn and Zn. This indicates that PC1 has biological characteristics: primary production (reflected in temperature, pH, DO, SPM and Chl a) favored the retention of Ni, Cu and Cd in the dissolved phase and the removal of Fe, Mn and Zn from the water. The PC2 of this group, accounting for 18.3% of the variance, had high positive loadings for Fe, Mn, Cr, Cd, and Pb and a high negative loading for salinity. This component may represent the hydrological characteristics of these parameters, indicating that dissolved Fe, Mn, Cr, Cd and Pb may be influenced by terrestrial inputs to some extent [39,57]. The PC3 of this group, accounting for 9.9% of the variance, had high positive loadings for salinity, Cr, and Pb and a high negative loading for Fe; this component may represent the influence of the outer seawater on the behavior of dissolved Fe and Cr in ZJB. Considering the distribution patterns and seasonal variations of the studied dissolved metals and the related environmental parameters, we conclude that the seasonal variations of most dissolved metals in ZJB were mainly driven by phytoplankton primary production, while terrestrial inputs had some effect on the spatial variations of some dissolved metals.
For the metals in the particulate phase and the related environmental parameters, three principal components (PC1-PC3) accounted for 78.7% of the total data variance. The PC1 of this group, accounting for 44.5% of the variance, had high positive loadings for temperature, pH, DO, SPM, Chl a, Fe, Mn, Ni and Pb and high negative loadings for Cu and Cd. Like the PC1 of Group 1, the PC1 of Group 2 also has biological characteristics: phytoplankton production seemed to favor the enrichment of Fe, Mn, Ni and Pb in the particulate phase and to induce the dissociation of Cu and Cd from the particulate phase. The PC2 of Group 2, accounting for 23.6% of the variance, had high positive loadings for Cr, Ni, Cu, Zn, Cd and Pb and a negative loading for SPM. This may indicate that low SPM concentrations favor the enrichment of metals in the particulate phase; a possible reason is that particles are generally smaller at low SPM concentrations and therefore adsorb inorganic and organic pollutants in the water body more strongly. The PC3 of Group 2, accounting for 10.7% of the variance, had high positive loadings for salinity and Ni and a negative loading for Mn; different adsorption and/or desorption behaviors of Mn and Ni may be responsible for this pattern (Table 5).
For the metals in the sedimentary phase and the environmental parameters of the water body, three principal components (PC1-PC3) accounted for 77.2% of the total data variance. The PC1 of this group, accounting for 52.9% of the variance, had high positive loadings for all studied sedimentary metals and high negative loadings for salinity, pH and SPM. Considering the distribution patterns of the sedimentary metals and the related environmental parameters, we conclude that all studied metals in the sediments were mainly influenced by terrestrial inputs. The PC2 and PC3 of this group had few high loadings for sedimentary metals, reflecting the limited influence of the water environment on the distribution of metals in the surface sediments.
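A minimal sketch of this kind of component extraction, using scikit-learn on standardized stand-in data; the random matrix below does not reproduce the loadings in Table 8 and is only meant to show the mechanics.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Rows = samples (stations x seasons), columns = parameters
# (e.g. T, S, pH, DO, SPM, Chl a and the eight metals); values are stand-ins.
X = rng.normal(size=(24, 14))

X_std = StandardScaler().fit_transform(X)   # PCA on standardized variables
pca = PCA(n_components=3).fit(X_std)

print("explained variance (%):",
      np.round(100 * pca.explained_variance_ratio_, 1))
# Loadings: correlation-like weights of each parameter on each component.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print("loadings shape:", loadings.shape)    # (14 parameters, 3 components)
```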
Relationships among metals in different phases
Generally, the relationships among metals in the different phases are interdependent, and the relationships between metals and the related environmental parameters are complex. As revealed in Table 6, many water parameters (such as temperature, pH, DO, and Chl a) had close relationships with metals in the dissolved and particulate phases in ZJB, but seemed to have little influence on the sedimentary metals, as indicated by the poor correlations among them (Table 6). However, salinity and SPM appeared to have close relationships with many heavy metals in the surface sediments (Table 6). The settlement of SPM and the resuspension of surface sediment may strongly contribute to the close relationships between SPM and sedimentary heavy metals. The significant correlations between salinity and many sedimentary heavy metals seem to be coincidental, as both the salinity of the water and the sedimentary heavy metals were mainly controlled by terrestrial input [41,57]. The relationships among metals also differed across phases: there were few close relationships among metals in the dissolved phase, more among metals in the particulate phase, and the closest relationships among metals in the sedimentary phase. This pattern may be attributed to mobility: dissolved metals migrate most easily and interact strongly with the water environment; particulate metals are less mobile and interact less closely with the water environment; and sedimentary metals are the least mobile and interact most weakly with the water environment.
Metals in the dissolved and particulate phases generally showed opposite correlations with the water environmental parameters [57-59]. For example, dissolved Cu correlated negatively with temperature and positively with pH, DO and Chl a, while particulate Cu correlated positively with temperature and negatively with pH, DO and Chl a. The reason may be that Cu in the dissolved and particulate phases responds in opposite ways to variations in the environmental parameters. Unlike Cu, some metals showed similar relationships with the environmental parameters in both phases: for example, both dissolved and particulate Ni correlated positively with pH, and both dissolved and particulate Zn correlated negatively with SPM. Other processes, such as terrestrial inputs or sediment release, may contribute to this pattern.
Conclusions
The environmental quality of Zhanjiang Bay, as inferred from the survey of eight heavy metals (Fe, Mn, Cr, Ni, Cu, Zn, Cd and Pb), was found to be good, owing to the low concentrations of these metals in both the dissolved and sedimentary phases. There were obvious seasonal variations in dissolved Zn, Ni and Cd (water phase) and in particulate Fe, Mn, Ni, Zn, Pb, Cu and Cd (particulate phase). The distribution patterns of some metals in the dissolved and sedimentary phases indicated the potential influence of terrestrial inputs. The partition coefficients log(K_d) between the dissolved and particulate phases generally decreased in the order Pb≈Cd>Fe≈Mn>Ni≈Cr>Zn>Cu, and the log(K_d) values of some of the eight metals presented obvious seasonal variations. Correlation and principal component analyses indicated that both terrestrial inputs and biological processes regulated the distributions and seasonal variations of the metals in the three phases. Dissolved Fe and Mn were mainly influenced by terrestrial inputs, while dissolved Ni, Cu, Zn and Cd were mainly influenced by biological processes. For the metals in the particulate phase, biological processes seemed to be the main factor controlling the behavior of most metals in ZJB. The metals in the sedimentary phase were all mainly influenced by terrestrial inputs. Phytoplankton production in ZJB caused Fe, Pb, Mn and Zn to enter the particulate phase more easily, while it caused Cu, Cd and Ni to enter the dissolved phase more easily.
Metals in the different phases interact with the water environment with different intensities, resulting in many strong correlations among sedimentary metals, relatively weaker correlations among particulate metals, and the weakest correlations among dissolved metals. | 2018-08-14T13:49:22.832Z | 2018-08-02T00:00:00.000 | {
"year": 2018,
"sha1": "0a4c12b0a1c7df5060a52a1b5c3ac163df7697a8",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0201414&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a4c12b0a1c7df5060a52a1b5c3ac163df7697a8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
222291023 | pes2o/s2orc | v3-fos-license | SANSCrypt: A Sporadic-Authentication-Based Sequential Logic Encryption Scheme
We propose SANSCrypt, a novel sequential logic encryption scheme to protect integrated circuits against reverse engineering. Previous sequential encryption methods focus on modifying the circuit state machine such that the correct functionality can be accessed by applying the correct key sequence only once. Considering the risk associated with one-time authentication, SANSCrypt adopts a new temporal dimension to logic encryption, by requiring the user to sporadically perform multiple authentications according to a protocol based on pseudo-random number generation. Analysis and validation results on a set of benchmark circuits show that SANSCrypt offers a substantial output corruptibility if the key sequences are applied incorrectly. Moreover, it exhibits an exponential resilience to existing attacks, including SAT-based attacks, while maintaining a reasonably low overhead.
I. INTRODUCTION
The design process of modern VLSI systems often relies on a supply chain where several services, such as verification, fabrication, and testing, are outsourced to third-party companies. If these companies gain access to a sufficient amount of critical design information, they can potentially reverse engineer the design. One possible consequence of reverse engineering is Hardware Trojan (HT) insertion, which can be destructive for many applications. HTs can either disrupt the normal circuit operation [1] or provide the attacker with access to critical data or software running on the chip [2].
Countermeasures such as logic encryption [3]-[6], integrated circuit (IC) camouflaging [7], watermarking [8], and split manufacturing [9] have been developed over the past decades to prevent IC reverse engineering. Among these, logic encryption has received significant attention as a promising, low-overhead countermeasure. Logic encryption modifies the circuit such that a user can only access the correct circuit functionality after providing a correct key sequence; otherwise, the circuit function remains hidden and the output differs from the correct one.
Various logic encryption techniques [3]-[6] and potential attacks [10]-[12] have appeared in the literature, as well as methods to systematically evaluate them [13], [14]. A category of techniques [3]-[5] is designed to modify and protect the combinational logic portions of the chip and can be extended to sequential circuits by assuming that the scan chains are not accessible by the attacker, e.g., due to scan chain encryption and obfuscation [15]-[17]. Another category of techniques, namely, sequential logic encryption [6], [18], [19], targets, instead, the state transitions of the original finite state machine (FSM). Sequential logic encryption introduces additional states and transitions in the original FSM, essentially partitioning the state space into two sets. After being powered on or reset, the FSM enters the encrypted mode, exhibiting an incorrect output behavior. The FSM transitions, instead, to the functional mode, providing the correct functionality, upon receiving a sequence of key patterns.
A set of attacks have been reported against sequential encryption schemes, aiming to retrieve the correct key sequence or circuit function. Shamsi et al. [20] adapted the Boolean satisfiability (SAT)-based attack [10], traditionally targeted to combinational logic encryption, by leveraging methods from bounded model checking to unroll the sequential circuit.
Recently, an attack based on automatic test pattern generation (ATPG) [21] uses concepts from the excitation and propagation of stuck-at faults to search for the key sequence among the input vectors generated by ATPG. When attackers have some knowledge of the topology of the encrypted FSM, they can extract and analyze the state transition graph and bypass the encrypted mode [22]. Overall, the continuous advances in FSM extraction and analysis tools tend to challenge any of the existing sequential encryption schemes and call for approaches that can significantly increase their robustness. This paper proposes SANSCrypt, a sporadic-authentication-based sequential logic encryption scheme, which raises the attack difficulty via a multiple-authentication protocol, whose decryption relies on retrieving a set of key sequences as well as the times at which the sequences should be applied. Our contributions can be summarized as follows:
• A robust, multi-authentication-based sequential logic encryption method that, for the first time to the best of our knowledge, systematically incorporates the robustness of multi-factor authentication (MFA) [23] in the context of hardware obfuscation.
• An architecture for sporadic re-authentication in which key sequences must be applied at multiple random times, determined by a random number generator, to access the correct circuit functionality.
• Security analysis and empirical validation of SANSCrypt on a set of ISCAS'89 benchmark circuits [24], showing exponential resilience against existing attacks, including sequential SAT-based attacks, and reasonably low overhead.
Analysis and validation results show that SANSCrypt can significantly enhance the resilience of sequential logic encryption under different attack assumptions.
II. BACKGROUND AND RELATED WORK
Among the existing sequential logic encryption techniques, HARPOON [6] defines two modes of operation. When powered on, the circuit is in the encrypted mode and exhibits an incorrect functionality. The user must apply a sequence of input patterns during the first few clock cycles to enter the functional mode, in which the correct functionality is recovered. However, the encrypted mode and functional mode FSMs are connected by only one transition (edge), which can be exploited by an attacker to perform FSM extraction and analysis, and bypass the encrypted mode [22].
Interlocking [18] sequential encryption modifies the circuit FSM such that multiple paths are available between the states of the encrypted and the ones of the functional FSMs, making it harder for the attacker to detect the only correct transition between the two modes. However, in both HARPOON and Interlocking encryption, once the circuit enters the functional mode, it remains there until reset.
Dynamic State-Deflection [25] requires, instead, an additional key input verification step while in the functional mode. If the additional key input is incorrect, the FSM transitions to a black-hole state cluster which can no longer be left. However, because the additional key input is fixed over time, the scheme becomes more vulnerable to sequential SAT-based attacks [20].
Finally, instead of corrupting the circuit function immediately after reset, DESENC [19] counts the occurrences of a specific but rare event in the circuit. Once the counter reaches a threshold, the circuit enters the encrypted mode. This scheme is more resilient to sequential SAT-based attacks [26] because finding the key requires unrolling the circuit FSM a large number of times. However, the initial transparency window may still expose critical portions of the circuit functionality.
III. SANSCRYPT
We introduce design and implementation details for SANSCrypt, starting with the underlying threat model.
A. Threat Model
SANSCrypt assumes a threat model that is consistent with the previous literature on sequential logic encryption [6], [20], [22]. The goal of the attack is to access the correct circuit functionality, by either reconstructing the deobfuscated circuit or finding the correct key sequence. To achieve this goal, the attacker can leverage one or more of the following resources: (i) the encrypted netlist; (ii) a working circuit providing correct input-output pairs; (iii) knowledge of the encryption technique. In addition, we assume that the attacker has no access to the scan chain and cannot directly observe or change the state of the circuit.
B. Authentication Protocol
As shown in Fig. 1a, existing logic encryption techniques are mostly based on a single-authentication protocol, requiring users to be authenticated only once before using the correct circuit function. After authentication, the circuit remains functional unless it is powered off or reset. To attack the circuit, it is then sufficient to discover the correct key sequence that must be applied in the initial state. We adopt, instead, the authentication protocol in Fig. 1b, where the circuit can "jump" back from the functional mode to the encrypted mode. Once this back-jumping occurs, another round of authentication is required to resume normal operation. The back-jumping can be triggered multiple times and can involve a different key sequence for each re-authentication step. The hardness of attacking this protocol stems from both the increased number of key sequences to be produced and the uncertainty about the times at which each sequence should be applied. This adds a new temporal dimension to the decryption procedure, posing a significantly higher barrier to attackers.
C. Overview of the Encryption Scheme
SANSCrypt is a sequential logic encryption scheme that supports random back-jumping, as represented in Fig. 2. When the circuit is powered on or reset, it falls into the reset state E0 of the encrypted mode. To transition to the initial (or reset) state N0 of the functional mode, the user must apply the correct key sequence to the primary input ports at startup.
Once in the functional mode, the circuit can deliberately, but randomly, jump back, as denoted by the blue edges in Fig. 2, to a state s_bj in the encrypted mode, called the back-jumping state, after a designated number of clock cycles t_bj, called the back-jumping period. The user needs to apply another key sequence to resume normal operation, as shown by the red arrows. Both the back-jumping state s_bj and the back-jumping period t_bj are determined by a pseudo-random number generator (PRNG) embedded in the circuit. Therefore, when and where the back-jumping happens is unpredictable unless the attacker is able to break the PRNG or find its seed. The schematic of SANSCrypt is shown in Fig. 3 and consists of two additional blocks, a back-jumping module and an encryption finite state machine (ENC-FSM), besides the original circuit. We discuss each of these blocks in the following subsections.
D. Back-Jumping Module
The back-jumping module consists of an n-bit PRNG, an n-bit Counter, and a Back-Jumping Finite State Machine (BJ-FSM) which sends back-jumping commands to the rest of the circuit. As summarized in the flowchart in Fig. 4, when the circuit is in the encrypted mode, BJ-FSM checks whether authentication has occurred. If this is the case, BJ-FSM stores the current PRNG output as the back-jumping period t_bj and initializes the counter.
The counter increments its output at each clock cycle until it reaches t_bj. This event triggers BJ-FSM to sample the current PRNG output r again, which is generally different from t_bj, and use it to determine the back-jumping state s_bj = f(r). For example, if s_bj is an l-bit binary number, BJ-FSM can arbitrarily select l bits from r and assign their value to s_bj; if the first l bits of r are selected, then f(r) = r[0:l-1]. At the same time, BJ-FSM sends a back-jumping request to the other blocks of the circuit and returns to its initial state, where it keeps checking the authentication status of the circuit. On receiving the back-jumping request, the circuit jumps back to state s_bj in the encrypted mode and stays there unless re-authentication is performed. Any PRNG architecture can be selected in this scheme, based on the design budget and the desired security level. For example, linear PRNGs, such as linear feedback shift registers (LFSRs), provide higher speed and lower area overhead but tend to be more vulnerable than cipher-algorithm-based PRNGs, such as AES, which are, however, more expensive.
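The behavioral sketch below models the back-jumping logic in software: an LFSR stands in for the PRNG, and the BJ-FSM samples it once to set the back-jumping period t_bj and again, when the counter expires, to pick the back-jumping state s_bj = f(r) = r[0:l-1]. The tap positions, bit widths, and the choice to step the LFSR every clock cycle are illustrative modeling assumptions, not details of the actual design.

```python
def lfsr(state: int, n: int = 10, taps=(10, 7)) -> int:
    """One step of an n-bit Fibonacci LFSR; the tap set is illustrative."""
    fb = 0
    for t in taps:
        fb ^= (state >> (t - 1)) & 1
    return ((state << 1) | fb) & ((1 << n) - 1)

def back_jumping_schedule(seed: int, rounds: int, n: int = 10, l: int = 4):
    """Yield (t_bj, s_bj) pairs as the BJ-FSM would: sample the PRNG for the
    back-jumping period, let the counter run for t_bj cycles (the PRNG keeps
    free-running), then sample again and take the first l bits as s_bj."""
    state = seed
    for _ in range(rounds):
        state = lfsr(state, n)
        t_bj = state                  # back-jumping period
        for _ in range(t_bj):         # counter counts up to t_bj
            state = lfsr(state, n)
        s_bj = state >> (n - l)       # f(r) = r[0:l-1], the first l bits of r
        yield t_bj, s_bj

for t_bj, s_bj in back_jumping_schedule(seed=0b1010011010, rounds=3):
    print(f"jump after {t_bj} cycles to encrypted-mode state {s_bj:04b}")
```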
E. Encryption Finite State Machine (ENC-FSM)
The Encryption Finite State Machine (ENC-FSM) determines whether the user's key sequence is correct and, if it is not, takes actions to hide the functionality of the original circuit. The input of the ENC-FSM can be provided via the primary input ports, without the need for extra input ports for authentication. The output enc_out of the ENC-FSM, which is n bits long, together with a set of nodes in the original circuit netlist, is provided as input to a set of XOR gates to corrupt the circuit function, as in combinational logic encryption [3]. For example, in Fig. 5, a 3-bit array enc_out is connected to six nodes in the original circuit via XOR gates. In this paper, XOR gates are inserted at randomly selected nodes, but any other combinational logic encryption technique is also applicable. As a design parameter, we denote by node coverage the ratio between the number of inserted XOR gates and the total number of combinational logic gates in the circuit.
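A minimal software model of this XOR-based corruption is sketched below: each bit of enc_out is XORed into the nodes it drives, so the original node values pass through unchanged only when enc_out is all zeros (i.e., in the authenticated state). The netlist is reduced to a list of node values, and all indices and widths are illustrative.

```python
import random

def corrupt_nodes(node_values, enc_out_bits, tap_map):
    """XOR each enc_out bit into the netlist nodes assigned to it.
    node_values: 0/1 signal values at the nodes of the original netlist.
    enc_out_bits: ENC-FSM output bits (all zero only when authenticated).
    tap_map: for each enc_out bit, the list of node indices it drives."""
    out = list(node_values)
    for bit, nodes in zip(enc_out_bits, tap_map):
        for i in nodes:
            out[i] ^= bit
    return out

random.seed(0)
nodes = [random.randint(0, 1) for _ in range(20)]
# 3-bit enc_out, each bit wired to two distinct randomly chosen nodes,
# mirroring the six XOR insertions of Fig. 5 (node coverage 6/20 = 30%).
idx = random.sample(range(20), 6)
tap_map = [idx[2 * j:2 * j + 2] for j in range(3)]

print(corrupt_nodes(nodes, [0, 0, 0], tap_map) == nodes)  # True: pass-through
print(corrupt_nodes(nodes, [1, 0, 1], tap_map) == nodes)  # False: corrupted
```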
Only one state of the ENC-FSM, termed auth, is used in the functional mode. In state auth, all bits of enc_out are set to zero and the original circuit functionality is activated. In the other states, the value of enc_out changes based on the state, but at least one of its bits is set to one to guarantee that the final output is incorrect. A sample truth table for a 3-bit enc_out array is shown in Table I. Even in the encrypted mode, enc_out changes its value based on the state of the encryption FSM. This makes it difficult for signal analysis attacks, which aim to locate signals with low switching activity in the encrypted mode, to find enc_out and bypass the ENC-FSM. After a valid authentication, the circuit resumes its normal operation. Additional registers are therefore required in the ENC-FSM to store the circuit state before back-jumping, so that it can be restored after authentication.
IV. PERFORMANCE ANALYSIS
We analyze SANSCrypt's resilience against existing attacks and estimate its overhead.
A. Brute-Force Attack
Let us suppose that the number of primary input bits used as key inputs is i and that each re-authentication procedure requires c clock cycles to apply the key sequence. If the attacker has no preference in selecting the key sequence, she would need, on average, (2^(i·c) + 1)/2 ≈ 2^(i·c-1) attempts for each re-authentication procedure, which amounts to the same brute-force attack complexity as HARPOON. However, because the correct key sequence of each re-authentication procedure depends on the PRNG output, the number N_prng of possible PRNG output values also contributes to the attack effort. If each PRNG output corresponds to a unique key sequence that is independent of the other key sequences, the average attack effort is N_prng · 2^(i·c-1). For a 10-bit PRNG, i = 32, and c = 8, the average attack effort reaches 5.93 × 10^79.
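The quoted figure can be checked directly: with a 10-bit PRNG, i = 32 and c = 8, N_prng · 2^(i·c-1) = 2^10 · 2^255 = 2^265 ≈ 5.93 × 10^79.

```python
i, c, n_prng_bits = 32, 8, 10
effort = (1 << n_prng_bits) * (1 << (i * c - 1))  # N_prng * 2^(i*c - 1)
print(f"{effort:.2e}")  # 5.93e+79, matching the figure quoted above
```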
B. Sequential SAT-Based Attack
A SAT-based attack can be carried out on sequential encryption by unrolling the sequential portions of the circuit [20]. This attack can be remarkably successful, especially when the correct key is the same at each time (clock cycle) and the key input ports are different from the primary input ports. Like HARPOON, SANSCrypt is resilient to this SAT-based attack variant, since the correct keys are generally not the same at different clock cycles.
We therefore analyze the resilience of SANSCrypt via a modified version of the sequential SAT-based attack [22] that is appropriate for schemes such as HARPOON and SANSCrypt, as shown in Fig. 6. Let us first assume that the encryption scheme requires n clock cycles after reset to enter the functional mode. The attacker can then start the attack by unrolling the circuit (n+1) times. The first n copies of the circuit receive the keys at their primary input ports (K_a and K_b), while the primary input and output ports of the (n+1)th circuit replica can be used to read the circuit input and output signals after n cycles. If the SAT-based attack fails to find the correct key with (n+1) circuit replicas, as in Fig. 6, the circuit is unrolled one more time (see, e.g., [20]).
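As an illustration of the unrolled formulation, the sketch below composes a toy transition function n times so that the first n copies consume a candidate key sequence, and then checks the (n+1)th copy against an oracle; exhaustive search stands in for the SAT solver, and all functions are placeholders rather than an actual encrypted netlist.

```python
from itertools import product

def unrolled_matches(transition, output, oracle, n, key_bits, probes):
    """Search key sequences of length n (applied at the primary inputs of
    the first n unrolled copies); copy n+1 is checked against the oracle
    on a few probe inputs. Exhaustive search stands in for the SAT query."""
    for keys in product(range(1 << key_bits), repeat=n):
        state = 0                                  # reset state
        for k in keys:                             # copies 1..n consume keys
            state = transition(state, k)
        if all(output(state, x) == oracle(x) for x in probes):
            return keys
    return None

# Toy 2-bit example (placeholders): authentication succeeds iff the key
# sequence (3, 1) is applied over two cycles after reset.
transition = lambda s, k: {(0, 3): 1, (1, 1): 2}.get((s, k), 3)
output = lambda s, x: (x ^ 1) if s == 2 else 0     # correct only in state 2
oracle = lambda x: x ^ 1
print(unrolled_matches(transition, output, oracle, n=2, key_bits=2,
                       probes=[0, 1, 2, 3]))       # -> (3, 1)
```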
The attack above would still be ineffective on SANSCrypt, since it can retrieve the first key sequence but would fail to discover when the next back-jumping occurs and what the next key sequence would be. Even if the attacker knows when the next back-jumping occurs, the above SAT-based attack will fail due to the large number of circuit replicas needed to find all the key sequences, as empirically observed in Section V.
C. FSM Extraction and Structural Analysis
As discussed in Section II, a possible shortcoming of certain sequential encryption schemes is the clear boundary between the encrypted-mode and functional-mode FSMs. As shown in Fig. 3, SANSCrypt addresses this issue by providing more than one transition between the two FSMs.
An attacker may also try to locate and isolate the output of ENC-FSM by looking for low signal switching activities when the circuit is in the encrypted mode. SANSCrypt addresses this risk by expanding the output of ENC-FSM from one bit to an array. The value of each bit changes frequently based on the state of the encrypted mode FSM, which makes it difficult for attackers to find the output of ENC-FSM based on signal switching activities.
D. Cycle Delay Analysis
Due to the multiple back-jumping and authentication operations in SANSCrypt, additional clock cycles are required in which no other operation can be executed. Suppose that authentication requires t_a clock cycles and that the circuit stays in the functional mode for t_b clock cycles before the next back-jumping occurs, as shown in Fig. 7. The cycle delay overhead can then be computed as the ratio t_a/(t_a + t_b). Specifically, for an n-bit PRNG, the average t_b is equal to the average PRNG output value, i.e., 2^(n-1). To illustrate how the cycle delay overhead is influenced by the encryption, Fig. 8 shows the relation between the average cycle delay overhead and the PRNG bit length, with the clock cycles (t_a) required for (re-)authentication set to 8, 16, 64, and 128. When the PRNG bit length is small, the average cycle delay increases significantly with t_a. However, the cycle delay can be reduced by increasing the PRNG bit length; for example, the average cycle delay overhead becomes negligible for all four cases when the PRNG bit length is 11 or larger. A key manager, available to the trusted user, will be in charge of automatically applying the key sequences from a tamper-proof memory at the right times, as computed from a hard-coded replica of the PRNG.
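The trend in Fig. 8 can be reproduced numerically from the definition above: taking the average t_b as the mean PRNG output 2^(n-1), the average overhead t_a/(t_a + t_b) falls off quickly as the PRNG width n grows. Note that this ratio is reconstructed from the definitions of t_a and t_b given above, and the sketch is illustrative; the exact curves in Fig. 8 may differ.

```python
def avg_cycle_delay_overhead(t_a: int, n_bits: int) -> float:
    """Average fraction of cycles spent on re-authentication, taking the
    average functional interval t_b as the mean n-bit PRNG output 2^(n-1)."""
    t_b = 2 ** (n_bits - 1)
    return t_a / (t_a + t_b)

for t_a in (8, 16, 64, 128):
    row = [f"n={n}: {avg_cycle_delay_overhead(t_a, n):.1%}"
           for n in (5, 8, 11, 15)]
    print(f"t_a={t_a:3d} ->", ", ".join(row))
# The overhead shrinks roughly as t_a / 2^(n-1) once 2^(n-1) >> t_a.
```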
V. SIMULATION RESULTS
We first evaluate the effectiveness of SANSCrypt on seven ISCAS'89 sequential benchmark circuits of different sizes, as summarized in Table III. All experiments are executed on a Linux server with 48 2.1-GHz processor cores and 500-GB memory. We implement our technique on the selected circuits with different configurations and use a 45-nm Nangate OpenCellLibrary [27] to synthesize the encrypted netlists for area optimization, under a critical-path delay constraint that targets the same performance as the non-encrypted versions. For the purpose of illustration, we realize the PRNG using linear feedback shift registers (LFSRs) of different sizes, ranging from 5 to 15 bits. An LFSR provides an area-efficient implementation and has often been used in other logic encryption schemes in the literature [9], [28]. We choose a random 8-cycle-long key sequence as the correct key, and select 5%, 10%, 15%, and 20% as node coverage levels. Finally, we use the Hamming distance (HD) between the correct and corrupted output values as a metric for output corruptibility: if the HD is 0.5, the effort spent to identify the incorrect bits is maximal.
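For reference, the corruptibility metric reduces to a bit-count: the fraction of differing bits between the corrupted and golden output streams, with 0.5 the point of maximal ambiguity. The vectors below are random stand-ins, not simulation outputs.

```python
import random

def hamming_distance(a, b) -> float:
    """Fraction of differing bits between two equal-length bit streams."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

random.seed(2)
golden = [random.randint(0, 1) for _ in range(1000)]
corrupted = [bit ^ random.randint(0, 1) for bit in golden]  # random flips
print(f"HD = {hamming_distance(golden, corrupted):.3f}")  # close to 0.5
```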
We run functional simulations on all the encrypted circuits with the correct key sequences (case 1) and without the correct sequences (case 2), applying 1000 random input vectors. We then compare the circuit output with the golden output from the original netlist and calculate the HD between the two. Moreover, we demonstrate the additional robustness of SANSCrypt by simulating a scenario (case 3) in which the attacker assumes that the encryption is based on a single-authentication protocol and provides only the first correct key sequence upon reset. Fig. 9a-d show the average HD in these three cases. For all the circuits, the average HD is zero only in case 1, when all the correct key sequences are applied at the right clock cycles. Otherwise, in case 2 (orange) and case 3 (green), we observe a significant increase in the average HD. The average HD in case 3 is always smaller than in case 2 because, in case 3, the correct functionality is recovered for a short period of time, after which the circuit jumps back to the encrypted mode. The longer the overall runtime, the smaller the impact of the transparency window in which the circuit exhibits the correct functionality. We then apply the sequential SAT-based attack of Section IV to circuit s1238 with a 5-bit LFSR and 20% node coverage, under a stronger attack model in which the attacker knows when to apply the correct key sequences. Table IV shows the runtime to find the first set of 7 key sequences. The runtime remains exponential in the number of key sequences, which makes sequential SAT-based attacks impractical for large designs.
Finally, Table II reports the synthesized area, power, and delay overhead due to the implementation of our technique. The delay overhead is less than 1% in more than 70% of the circuits and never exceeds 5.8%. Except for s27 and s298, characterized by a small gate count, all the other circuits show average area and power overheads of 141.1% and 160.8%, respectively, which is expected due to the additional number of registers required in ENC-FSM to guarantee that the correct state is entered upon re-authentication. However, because critical modules in large SoCs may only account for a small portion of the area, this overhead becomes affordable under partial obfuscation. For example, we encrypted a portion of the state registers in s38584, the largest ISCAS'89 benchmark, using SANSCrypt. We then randomly inserted additional XOR gates to achieve the same HD as in the case of full encryption. Table V reports the overhead results after synthesis, when the ratio between the encrypted state registers and the total number of state registers decreases from 100% to 1%. Encrypting 10% of the registers costs only 33.4% area overhead while incurring a negative power overhead and a 4.2% delay overhead.
VI. CONCLUSION
We proposed SANSCrypt, a robust sequential logic encryption technique relying on a sporadic authentication protocol, in which re-authentications are carried out at pseudo-randomly selected time slots to significantly increase the attack effort. Future work includes optimizing the implementation to further reduce the overhead and hide any structural traces that may expose the correct key sequence. Further, we plan to investigate key manager architectures to guarantee reliable timing and operation in real-time applications. | 2020-10-13T01:00:44.358Z | 2020-10-05T00:00:00.000 | {
"year": 2020,
"sha1": "46a7d824a6c3a1ec01c45c6cef5e0d1fa976e216",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2010.05168",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "971c0a7f122884a44e6462cc304cc6abc06f1623",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
221078035 | pes2o/s2orc | v3-fos-license | Comparing a Mobile Phone Automated System With a Paper and Email Data Collection System: Substudy Within a Randomized Controlled Trial
Background: Traditional data collection methods using paper and email are increasingly being replaced by data collection using mobile phones, although there is limited evidence evaluating the impact of mobile phone technology as part of an automated research management system on data collection and health outcomes. Objective: The aim of this study is to compare a web-based mobile phone automated system (MPAS) with a more traditional delivery and data collection system combining paper and email data collection (PEDC) in a cohort of breastfeeding women. Methods: We conducted a substudy of a randomized controlled trial in Sydney, Australia, which included women with uncomplicated term births who intended to breastfeed. Women were recruited within 72 hours of giving birth. A quasi-randomized subset of women was recruited using the PEDC system, and the remainder were recruited using the MPAS. The outcomes assessed included the effectiveness of data collection, impact on study outcomes, response rate, acceptability, and cost analysis between the MPAS and PEDC methods. Results: Women were recruited between April 2015 and December 2016. The analysis included 555 women: 471 using the MPAS and 84 using the PEDC. There were no differences in clinical outcomes between the 2 groups. At the end of the 8-week treatment phase, the MPAS group showed an increased response rate compared with the PEDC group (56% vs 37%; P<.001), which was also seen at the 2-, 6-, and 12-month follow-ups. At the 2-month follow-up, the MPAS participants also showed an increased rate of self-reported treatment compliance (70% vs 56%; P<.001) and a higher recommendation rate for future use (95% vs 64%; P<.001) as compared with the PEDC group. The cost analysis between the 2 groups was comparable. Conclusions: MPAS is an effective and acceptable method for improving the overall management, treatment compliance, and methodological quality of clinical research to ensure the validity and reliability of findings.
Background
Participant engagement and response are vital aspects of any clinical research study. Many research studies are costly, labor intensive, and potentially compromised because of the difficulties associated with patient compliance, engagement, incomplete data collection, and inadequate follow-up [1][2][3]. The method and type of data collection system utilized to recruit participants and collect data throughout the study is important to ensure the quality, reliability, and validity of data collection. In addition, it must be cost-effective and acceptable to participants, funding organizations, and researchers [4][5][6].
Paper-based data collection in research studies is gradually being replaced or used in conjunction with electronic data collection systems [7], primarily in the form of emails containing links to web-based surveys. Comparison of these two methods has been well documented [8][9][10][11].
In recent years, mobile phone technology has been increasingly used to promote health-related behavioral change and self-management of care via the use of apps and automated SMS text messages. Studies have shown effective changes in psychological and physical symptoms [12][13][14] as well as specific pregnancy and breastfeeding outcomes [15,16] by sending individually tailored text messages to participants. However, a Cochrane review specifically looking at mobile phone apps as a method of data delivery for self-administered questionnaires found that none of the included trials in the review reported data accuracy or response rates [17]. Furthermore, a review of studies utilizing mobile phones for data collection showed that they were based on very small sample sizes, collected intermittent data (as opposed to daily), or had limited longitudinal data collection (maximum 9 months) [18][19][20][21]. There is also limited assessment of mobile phone technology as part of a web-based automated system, integrating randomization, SMS delivery, and electronic data collection into a streamlined data management system. Although previous studies have compared traditional paper-based data collection with data collection using mobile phones [22,23], there is limited evidence assessing the effectiveness of a combination of paper or email-based methods in comparison with mobile phones as part of an automated data collection management system. In addition, longitudinal data collection using mobile phone technology has not been assessed, particularly in maternal and infant health, despite adults of reproductive age currently being the largest users of mobile phones [24].
Objectives
The primary aims of this study were to compare a web-based research management system utilizing mobile phone technology with a traditional delivery and data collection system using a combination of paper- and email-based methods on clinical research outcomes and to assess the acceptability and effectiveness of use, including cost analysis.
Design
We conducted a prespecified substudy as part of the APProve (CAn Probiotics ImProve Breastfeeding Outcomes?) trial to compare a mobile phone automated system (MPAS) with a paper and email data collection (PEDC) system. APProve was a double-blind randomized controlled trial (RCT) evaluating the effectiveness of an oral probiotic versus a placebo for preventing mastitis in breastfeeding women. It was conducted between April 2015 and December 2016 in 3 maternity hospitals in Sydney, Australia. Detailed methods have been published previously [25]. Briefly, it involved the evaluation of a probiotic versus a placebo taken daily for 8 weeks for the prevention of mastitis, which was assessed using short daily and slightly longer weekly questionnaires during the first 8 weeks following birth and longer follow-up questionnaires at 2, 6, and 12 months.
The MPAS was a data delivery and collection system that combined treatment randomization, SMS delivery to participants, electronic data collection, and data management. It was developed by the study team with the aid of an eResearch (electronic research) company, which developed the system based on our prospective design specifications. The system integrated 2 established software services, SMS delivery and a web-based survey tool, which were then linked to a secure web-based data management system. The MPAS sent automated text messages to the participants' mobile phones with links to self-administered web-based surveys. Each survey link was embedded with the participant's unique identifier, enabling comparison across multiple surveys. A maximum of 2 automated reminders were integrated into the system if a participant did not respond after 3 days. The MPAS was pilot tested by 17 members of the research department, with feedback and suggestions integrated into the system before study commencement.
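A minimal sketch of the reminder rule described above is given below; the data shapes and function names are hypothetical illustrations, not the actual system, which was built on commercial SMS and survey services.

```python
from datetime import date

MAX_REMINDERS = 2        # at most 2 automated reminders, per the protocol
REMIND_AFTER_DAYS = 3    # remind when no response after 3 days

def due_reminders(surveys_sent, responses, today):
    """Return (participant, survey) pairs owed an automated reminder.

    surveys_sent: {(pid, survey_id): date the SMS link was sent}
    responses:    set of (pid, survey_id) already completed
    (Hypothetical data shapes for illustration only.)
    """
    due = []
    for key, sent in surveys_sent.items():
        if key in responses:
            continue
        n_reminders = (today - sent).days // REMIND_AFTER_DAYS
        if 1 <= n_reminders <= MAX_REMINDERS:
            due.append(key)
    return due

sent = {("p001", "week1"): date(2016, 3, 1)}
print(due_reminders(sent, set(), today=date(2016, 3, 5)))  # [('p001', 'week1')]
```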
The PEDC included a combination of an 8-week calendar diary provided to participants at the time of trial entry and emailed links to weekly and follow-up surveys. The calendar diaries were identified with the participant study number at the time of treatment randomization, and the start date was manually entered. The A4-size calendar was preserved with a waterproof coating, allowing for daily entries by pen. Participants were encouraged to hang the calendar in a prominent place at home. PEDC users were supplied with a stamped, addressed envelope to post the calendar back to the trial coordinating center at the end of the treatment phase.
Participants and Study Procedures
Of the 639 women randomized to the APProve trial, 539 women were allocated to the MPAS and 100 women to the PEDC. A quasi-randomization process was applied for PEDC recruitment, which was conducted on randomly preassigned days of the week and continued until 100 participants were recruited. Both groups of women were identified, approached, and consented to the study in the postnatal ward in the same way, but the treatment randomization process was slightly different.
For the women allocated to the MPAS group, a research assistant entered their details into the web-based data management system, which then automatically generated a unique participant identification number and treatment allocation. The randomization schedule was built into the system and generated using a computer random number generator with random block sizes. Randomization of participants using the PEDC was conducted using sealed, opaque envelopes, with the randomization schedule developed using a similar but separate process compared with the MPAS group.
Data Collection
Baseline sociodemographic, clinical, and birth characteristics collected in this study are shown in Table 1. All daily, weekly, and follow-up questionnaires were identical for the 2 groups. For the MPAS group, each study site was provided with an electronic tablet with internet connectivity to enable the research assistant to enter the participants' details, conduct treatment randomization, and enter baseline and hospital data directly into the web-based data management system. All research assistants were trained in the use of the MPAS and given individualized password-protected access to the website, which could be accessed by phone, tablet, or computer. Only deidentified data were entered into the database and linked to an individual study number generated automatically at randomization. The only paper-based data for this cohort included a signed patient information and consent form and a trial entry form containing the participants' contact details. Once randomized, the study number generated by the MPAS was written in the trial entry form to allow for reidentification, if required. An audit trail was integrated into the MPAS to log all SMS messages sent and surveys completed. Daily and weekly outcome data for the APProve trial for the first 8 weeks (56 days) following birth were collected via self-completed questionnaires using automated weblinks sent directly via SMS to the participant's mobile phone. Before the follow-up questionnaires at 2, 6, and 12 months (63, 180, and 360 days), participants were sent an automated link asking for their preferred method of receiving the questionnaires, with SMS, email, or post as options. On the basis of the response, the MPAS would either send the participant an SMS link to the relevant survey or alert the trial coordinator by an automated email of the preference for an emailed or a postal questionnaire.
For the PEDC participants, baseline and hospital data were collected on paper-based data forms and then entered into the web-based system at the trial coordinating center. Once randomized to their allocated treatment, participants were given a calendar diary by the research assistant to record daily outcomes for 8 weeks. Weekly outcome data for the first 8 weeks and follow-up questionnaires at 2, 6, and 12 months were collected by an emailed weblink to a web-based survey sent by the clinical trial coordinator (Figure 1).
Outcomes
Outcomes evaluating participant acceptability, treatment compliance, and effectiveness of data collection comparing the MPAS with the PEDC were assessed in the 2-month follow-up questionnaire. Data were collected on the ease of participation in the trial and the ease of remembering to take the study treatment every day (both rated from 0 [very difficult] to 5 [very easy]), self-reported compliance with taking the allocated treatment (compliance was defined as having taken the product for ≥42 of 56 days, semicompliance as having taken the product for 15-41 of 56 days, and noncompliance as having taken the product for ≤14 of 56 days), whether the method of data collection was helpful in reminding the participant to take the treatment (ranked from 0 [not helpful at all] to 5 [very helpful]), recommendation of the allocated method of data collection for future studies, and the preference for how the participant wanted to receive the follow-up questionnaires (SMS, email, or post). The effectiveness of data collection was defined as the frequency of completing the questionnaires at all time points.
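The compliance definition maps directly onto a threshold rule; a minimal sketch (labels follow the definitions above):

```python
def compliance_category(days_taken: int) -> str:
    """Classify self-reported compliance over the 56-day treatment phase:
    >=42 days compliant, 15-41 semicompliant, <=14 noncompliant."""
    if days_taken >= 42:
        return "compliant"
    if days_taken >= 15:
        return "semicompliant"
    return "noncompliant"

print([compliance_category(d) for d in (56, 42, 30, 14)])
# ['compliant', 'compliant', 'semicompliant', 'noncompliant']
```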
We also assessed whether the data collection method had any impact on the clinical trial outcomes. Clinical outcomes were collected during the daily, weekly, and 2-month surveys. They included mastitis, maternal infection, and breastfeeding status up to 2 months after birth. The mastitis outcome measure was based on self-reported symptoms related to breast infection or a clinical diagnosis of mastitis by a care provider [26].
Satisfaction with using their assigned method of data collection (MPAS or PEDC) was assessed by using open-ended free text questions to elicit written comments pertaining to what the participants liked the most and the least about their assigned method of data collection and what suggestions could be provided for future use. In addition, satisfaction with the method of data collection was elicited from the MPAS users and responses ranked from 0 (did not like at all) to 5 (really liked it). This response was subgrouped into 2 categories: satisfied (4-5) and less satisfied (0-3).
The cost analysis of utilizing the MPAS compared with the PEDC was also performed. Costs included those associated with the initial development and ongoing usage of each system and personnel time associated with trial participant survey collection and follow-up. A web-based time tracking report was generated weekly to determine the average time required for creating and sending emails and manual data entry from paper survey collection.
Statistical Analysis
Baseline sociodemographic, clinical, and birth characteristics were compared between the 2 groups. Categorical data were summarized using percentages, and the differences in the characteristics between the 2 groups were assessed using a chi-square test. Continuous outcomes with a normal distribution were summarized using mean and SD, and the characteristics between the 2 groups were compared using t tests. Data with a nonnormal distribution were summarized using medians, and the groups were compared using nonparametric Wilcoxon tests. Satisfaction with the MPAS was analyzed by maternal sociodemographic characteristics and treatment compliance. Written responses were thematically assessed by 2 authors and an external researcher, who each independently coded the data, followed by group discussion. Common themes and relevant responses were identified, and frequency was quantified. Analyses were conducted using SPSS version 24 (IBM SPSS Statistics, 2016 IBM Corporation), and P value <.05 was used for statistical significance.
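For illustration, the same battery of comparisons can be reproduced in SciPy rather than SPSS. The sketch below uses the compliance counts reported later in this paper plus simulated continuous scores, so the continuous results are illustrative, not the study's:

```python
import numpy as np
from scipy import stats

# Categorical outcome: chi-square test on a 2x2 contingency table of
# [compliant, not compliant] per group (counts from the 2-month data).
table = np.array([[331, 140],   # MPAS: 331/471 compliant
                  [47, 37]])    # PEDC: 47/84 compliant
chi2, p_cat, dof, _ = stats.chi2_contingency(table)

# Normally distributed continuous outcome: independent-samples t test
# (simulated scores standing in for the real questionnaire data).
rng = np.random.default_rng(0)
mpas, pedc = rng.normal(4.4, 1.0, 471), rng.normal(2.6, 1.0, 84)
t, p_t = stats.ttest_ind(mpas, pedc)

# Nonnormal continuous outcome: nonparametric rank-based test
# (Mann-Whitney U, the two-sample analogue of the Wilcoxon test).
u, p_u = stats.mannwhitneyu(mpas, pedc)

print(f"chi-square P={p_cat:.4f}, t test P={p_t:.3g}, rank test P={p_u:.3g}")
```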
Participant Characteristics
Of 620 women, 526 women were quasi-randomized to the MPAS group and 94 women to the PEDC group. There were no differences between the groups except that a higher percentage of women in the MPAS group gave birth to their first baby (P=.02; Table 1). After loss to follow-up of 10.5% (55/526) participants in the MPAS group and 11% (10/94) in the PEDC group, secondary outcomes were analyzed for 555 women. We found no difference in the trial outcomes between the 2 data collection groups (Table 2). There was also no difference in the ease of use between the MPAS and PEDC groups. However, a higher proportion of participants using the MPAS were compliant with taking the study treatment (331/471, 70.3% vs 47/84, 56%; P<.001), were more likely to rate their method of data collection as being a helpful reminder to record their symptoms (median 4.37 vs 2.63; P<.001), and were more likely to recommend their assigned method for future use (330/349, 94.6% vs 36/56, 64%; P<.001). There was little difference among the characteristics of the women who were lost to follow-up compared with those for whom we had follow-up data, except that at 2 months postpartum, the former were less likely to be tertiary educated (45/65, 69% vs 472/555, 85.0%; P=.001).
Effectiveness and Satisfaction
The frequency with which women completed the daily and weekly questionnaires was consistently higher among the MPAS users, with a 56% average response rate over the 8-week treatment period compared with 37% (P<.001) among the PEDC users (Figure 2). There was a gradual decrease in the MPAS daily response rate over the course of the treatment phase, from 70% in the first week to less than half the women completing the questionnaires by 8 weeks. Although the daily response rate from PEDC users was lower than MPAS users, there was a notable spike in the response rate among the PEDC users on the days the weekly questionnaires were sent by email (Figure 2). Response rates for the follow-up questionnaires showed a 12% higher rate of survey completion among the MPAS users at 2 months compared with the PEDC participants, with an 18% difference at 12 months (P<.05; Figure 2). Among the MPAS users, satisfaction was high with a mean score of 4.49 out of 5 (SD 1.0). There was no difference in satisfaction scores among maternal characteristics. There was a difference in satisfaction related to compliance, with participants most compliant with treatment being the most satisfied with the use of the MPAS (P<.001; Figure 3). Nearly half of the participants preferred to receive the questionnaires by either SMS (135/289, 46.7%) or email (139/289, 48.0%) at 2 months; however, the preference for SMS increased to 60% for both the 6- and 12-month questionnaires (142/241, 58.9% and 135/224, 60.2%, respectively). Very few women opted to receive questionnaires by post (<5%). Responses to open-ended questions in the 2-month questionnaires were received from 74.1% (349/471) MPAS participants and 67% (56/84) PEDC participants. The themes identified were related to the factors that the participants liked most and liked least about their method of data collection, as outlined in Table 3. Most of the MPAS participants stated that the MPAS was easy, convenient, quick, accessible, and efficient to use. In particular, many commented that web-based questionnaires were easy to complete while breastfeeding.
Overall, less than 5% (16/349) of the MPAS participants stated that it was difficult to remember to complete the survey every day, compared with 25% (14/56) PEDC participants. Approximately 1 in 5 participants in each group commented on the functionality of either the diary or the MPAS, such as difficulty with formatting, size restrictions, Wi-Fi accessibility, and inability to enter additional comments. Although 11 women in the MPAS group stated that they found the text messages intrusive, 3 participants stated that they liked the fact that this method was not intrusive. Suggestions for future use by the MPAS participants included allowing users to select the time of day to receive the SMS and to opt in or out of reminder messages, limiting the number of questions on the questionnaire to minimize scrolling, diversifying the content of each SMS for improved interest, and improving the functionality to allow the questionnaires to be completed later if interrupted. Many of the PEDC participants recommended the use of SMS or a web-based app for data collection (Textbox 1).
Textbox 1. Participants' comments about the mobile phone automated system compared with the paper and email data collection system.
Mobile phone automated system
• "I found using my phone to complete the surveys great as I could do it easily when feeding my daughter." • "It was great-something to look forward to everyday.It was easy and also a great reminder in case I had forgotten to take my daily Approve sachets." • "So easy to remember and to complete the daily survey.I often completed the survey while out and about." • "Most people have a smartphone on hand.Much easier than using a computer or a paper record.Ease of use-always with me.Could answer questions while breastfeeding my baby." • "Sometimes it took a while to upload the questions" • "Reminders were great but sometimes daily were a bit annoying" • "Weekly questionnaires bit lengthy" • "It would often change my response (touch feature too sensitive)" • "Hard to see if the survey was completed if forgotten to complete the previous ones" Paper and email data collection system • "I liked to be a part of this study but it was not that easy to remember it to take every day...I missed sometimes." • "Helped to keep on track.Encouraged me to have a morning routine that incorporated having breakfast at the similar time each morning." • "The calendar was quick and easy.Can't imagine also having to write in a diary on a daily basis." • "Now that everyone is on the phone maybe there could be a daily reminder on the participants' phone, creating an app or site so the data goes straight to the research office daily or weekly." • "Filling out the manual form is troublesome." • "Forgetting to fill in the daily diary even though it was clearly explained to me before I agreed to do the trial.I'm so sorry.I only found it the other day in a pile of paperwork.I do everything electronically." • "Keeping track and filling as was not doing it every day so it was hard to remember after 15-20 days for that period, sorry." • "The progress chart would be easier if online or an app so it could be filled in on a smartphone during feeds." • "Probably use a different stock as it could be hard to write on."
Cost Analysis
Cost analysis between the 2 groups showed a comparable per-person cost, with the MPAS costing on average Aus $10 (US $7.21) more (Tables 4 and 5).
Principal Findings
This study demonstrates that an MPAS is an effective and acceptable tool for improving study delivery and data collection within a randomized trial as compared with a more traditional system. We have shown that the mobile phone system improved treatment compliance and response rates, demonstrated greater user satisfaction, is comparable in cost to PEDC, and does not impact study outcomes.
Comparison With Prior Work
Our study supports previous studies which showed that SMS messaging could improve treatment adherence and was acceptable to participants [16,19,27]. Despite concerns about long-term attrition in previous studies [28], the MPAS results showed that even with a decrease in response rates over time, the response rates were consistently higher than the PEDC rates over the same period, possibly because of better engagement among the users. Although the response rate of the PEDC participants showed that 37% (35/94) of the participants returned a completed questionnaire, it is likely that some of the days may have been retrospectively completed, compromising the accuracy of the data. The peak completion rate of the PEDC questionnaires was on the day the weekly questionnaires were emailed to the participants, suggesting that emailed links are a more effective method of data collection compared with paper-based data collection, although they are more time consuming for the trial coordinator compared with automated SMS links. Despite no difference in clinical outcome measures between the 2 groups, the increased response rates to the daily surveys provided rich data regarding breastfeeding habits, confirming the feasibility of using an MPAS as a means of improving the reliability of outcome data in breastfeeding research [23].
The daily questionnaires of the MPAS appeared to have a secondary effect of improving treatment compliance by serving as a daily reminder, which in turn increased engagement with the system, resulting in a higher rate of satisfaction. Anecdotally, satisfaction among the research assistants was also high, with the majority saying that the MPAS was easy to use and less time-consuming for randomization and data entry as compared with paper forms. Moreover, the MPAS minimized the use of paper.
Despite previous research showing a 55% reduction in cost upon using electronic data collection compared with paper data collection [10], our study indicates that the cost per person is comparable between PEDC and MPAS. This is largely because of the differences in electronic data capture between the 2 studies, with the earlier study collecting, monitoring, and entering data directly into a web-based database, whereas the major expenditure in our study was the development of a research management system that integrated randomization, automated SMS, and data collection. It is important to note that once the trial infrastructure and data hosting were installed and initiated, there was potential to significantly scale up the number of participants and the duration of the study without an incremental increase in cost, whereas an increase in PEDC participants would constitute a supplemental increase in labor costs. An additional 18 PEDC participants in our study would have balanced the costs between the 2 groups. Furthermore, the scope for contact and engagement with participants with the MPAS is greater compared with paper and email methods of data collection. For example, the PEDC participants each received a minimum of 11 emails. Conversely, the MPAS participants received an average of 61 automated text messages, including welcome texts, daily SMS, and reminder messages. If the same number of texts were sent by email by a clinical trial coordinator, the cost would have increased by an additional Aus $200 (US $144.17). There is very little data to evaluate the use of SMS as a consolidated research management tool. We found many benefits of using the MPAS in the multicenter APProve trial, including a centralized system to manage randomization, data collection across all stages of the trial, automated reminders and alerts, reduced paper transfer of sensitive patient information between sites, reduced potential for transcription error [11,29,30], and improved reliability of daily data collection associated with reduced risk of recall bias [23]. Reducing the burden and time of data collection on the research assistant was significant, along with issues associated with patient confidentiality and storage of physical case report forms [23,29]. The advantage of integrating the MPAS via a web-based platform ensured access across mobile phone platforms and enabled accessibility to a large and diverse population, especially for those living in rural, remote, or disadvantaged areas or where mobility is restricted [31,32]. In addition, staff sick leave and absences were less of an issue because of the automated nature of the system, leading to increased flexibility of the research team, which is important when managing research studies on small budgets in small teams.
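The scaling argument above (a fixed development cost for the MPAS versus labor costs that grow with each PEDC participant) can be made concrete with a toy break-even calculation; the figures below are illustrative placeholders, not the study's cost data.

```python
def break_even(fixed_mpas, per_person_mpas, per_person_pedc):
    """Participants needed before total MPAS cost drops below PEDC.

    Solves fixed_mpas + n*per_person_mpas <= n*per_person_pedc.
    All figures are hypothetical placeholders.
    """
    margin = per_person_pedc - per_person_mpas
    if margin <= 0:
        return None  # PEDC never becomes more expensive overall
    return -(-fixed_mpas // margin)  # ceiling division

# Toy figures: Aus $15,000 system build, Aus $5/person to run,
# versus Aus $55/person of coordinator labor for paper and email.
print(break_even(15_000, 5, 55))  # 300 participants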
Strengths and Limitations
The main strength of our study was embedding the assessment of the MPAS versus PEDC as a substudy in an RCT, with quasi-randomization to treatment group showing little difference between study groups. Most studies comparing paper-based data collection and electronic data collection had very small sample sizes, 20 to 116 participants [20,33], whereas we were able to show an effective difference with a statistically robust sample size. Furthermore, daily data collection for 8 weeks and comparison of responses at 3 strategic time points over the course of 1 year was instrumental in the accurate assessment of outcomes and minimizing errors in recall bias [34]. The inclusion of data accuracy and response rates fills a gap in the literature as addressed by a relevant Cochrane review [17]. Furthermore, the method of data collection for both groups allowed for objectivity of responses without gratitude bias, as is often seen in questionnaires of a face-to-face nature [35,36].
One of the limitations of the study was the difference in sample size between the 2 groups. As this was a substudy of an RCT, it was not powered for this secondary outcome. Random sampling was performed to ensure that the MPAS did not adversely affect the primary outcome. Although baseline maternal characteristics show that more women in the MPAS group gave birth to their first baby, possibly because the paper diary appeared more overwhelming for first-time mothers, there were no differences between maternal health and breastfeeding outcomes. In addition, self-reported compliance can be perceived as subjective and prone to bias, but as compliance was measured by the same method in both groups, the bias would be nondifferential. There were also issues with the interface and usability for completing the questionnaires via the web for both MPAS and PEDC participants. However, we were able to resolve many of the issues and make slight modifications to the software over time. This did not negatively impact the response rates. A final limitation was that no assessment of participant time was included in the cost analysis. This was not included as it was not anticipated that there would have been a discernible difference in time cost between the 2 groups. Posting the diaries and logging on to the computer for the weekly questionnaires may have elicited more time from the PEDC participants, but this would have been negligible.
Conclusions
Despite the increasing growth of web-based clinical trial management systems, there has been little or no evaluation of these systems against traditional methods of trial management. Since the commencement of our trial, there have been improvements in the quality and availability of electronic data collection systems. For example, REDCap (Research Electronic Data Capture) is a secure web application for building and managing web-based surveys and databases, specifically for research studies and operations [37]. The system offers an easy-to-use and secure method of flexible yet robust data collection, which is free to researchers affiliated with universities. Using such a system would have decreased the costs associated with the development of the web-based survey tool we utilized, as well as eliminated many of the functionality issues we experienced, reducing future research costs.
Future research should focus on how to maximize the effect of mobile phone technology, such as implementing strategies to improve long-term engagement with participants by simplifying questionnaires, optimizing the number of text messages, and personalizing the content and timing of messages.
Although we evaluated the MPAS in a perinatal population, the use of mobile phone technology provides the opportunity to facilitate and improve the quality and effectiveness of clinical research studies; enhance patient interaction; and improve clinical research across a wide range of methodologies, disciplines, and health care settings. Integration and evaluation of mobile phone research management systems that are cost-effective, efficient, and acceptable to both researchers and patients is essential, given the increasing use of mobile phone technology [24] and the high costs of undertaking research. We have shown that the use of an integrated MPAS is an effective and acceptable method for improving the overall management, treatment compliance, and methodological quality of a randomized clinical trial to ensure validity and reliability of findings, in addition to being cost-effective.
Figure 1. Flow diagram comparing the mobile phone automated system with paper and email data collection. MPAS: mobile phone automated system; PEDC: paper and email data collection.
Figure 2. Effectiveness of data collection between the mobile phone automated system and the paper and email data collection. MPAS: mobile phone automated system; PEDC: paper and email data collection.
Figure 3. Treatment compliance and satisfaction for the mobile phone automated system (n=555).
Table 1.
Characteristics of participants using the mobile phone automated system compared with the paper and email data collection system.
PEDC: paper and email data collection. c Test statistics using Pearson chi-square test for categorical variables and 2-tailed, independent sample t test for continuous variables with their respective df are presented. College, university, or vocational training after high school. d N/A: not applicable.
Table 2.
Impact and acceptability of the mobile phone automated system compared with the paper and email data collection system. Test statistics using Pearson chi-square for categorical variables and 2-tailed, independent sample t test for continuous variables with their respective df are presented.
a MPAS: mobile phone automated system. b PEDC: paper and email data collection. d N/A: not applicable. e N=469. f N=468. g N=349. h N=56.
Table 3.
Qualitative analyses of the likes and dislikes of mobile phone automated system users compared with paper and email data collection system users.
a MPAS: mobile phone automated system. b PEDC: paper and email data collection.
Table 4.
Cost analysis for paper and email data collection. Cost, Aus $ (US $). a All costs are calculated in Australian dollars (Aus $1=US $0.72). b Labor is calculated at Aus $50 (US $36.04) per hour. c Emails are calculated at 5 min per email.
Table 5.
Cost analysis for mobile phone automated system. | 2019-09-16T03:33:53.725Z | 2020-06-13T00:00:00.000 | {
"year": 2020,
"sha1": "784107146d8e861b1aaf1443a2665c2d03b3c00e",
"oa_license": "CCBY",
"oa_url": "https://mhealth.jmir.org/2020/8/e15284/PDF",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a2246a61159d2a71f9c7111701a10d7376043693",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14537539 | pes2o/s2orc | v3-fos-license | Expression of members of the myf gene family in human rhabdomyosarcomas.
Northern analysis of tumour RNA has been used to examine the expression of members of the myf family of muscle determining genes (myf3, myf4, myf5 and myf6) in a series of 20 rhabdomyosarcomas. A 2.0 kb myf3 transcript was observed in 85% of tumours, a 1.8 kb myf4 transcript was detected in 70% of tumours and a 1.7 kb myf5 transcript was observed in 55% of tumours. Transcription of myf6 occurred in 28% of tumours, but there were several transcript sizes (1.2, 1.5, 2.0 and 3.5 kb) and in some individual tumours two or more transcripts were observed. Only two rhabdomyosarcomas, one classified as embryonal and one as pleomorphic, failed to exhibit transcription of members of the myf gene family. We were unable to detect transcription of myf genes in neuroblastomas, Wilms' tumours, hepatoblastomas, paediatric non-Hodgkin's lymphoma and leiomyosarcomas. When considered together these observations suggest that expression of myf genes could provide an extremely useful marker in the diagnosis of rhabdomyosarcoma.
The development of skeletal muscle is a complex process in which multipotent stem cells initially become committed to form mononuclear myoblasts. The myoblasts then fuse to form myotubes which in turn mature into striated muscle. Recent studies have demonstrated that the transition along this differentiation pathway may be determined, at least in part, by a family of trans-acting transcription factors that are directly involved in controlling the expression of muscle specific genes. The first genes found to encode muscle determining factors were the mouse MyoD1 gene (Davies et al., 1987) and the rat myogenin gene (Wright et al., 1989) which were both isolated by procedures involving subtractive cDNA hybridisation. Subsequently Braun et al. (1989b, 1990) isolated two related genes designated myf5 and myf6 from a human foetal muscle cDNA library and an additional two genes called myf3 and myf4 (Braun et al., 1989a) that are the human homologues of, respectively, MyoD1 and myogenin. Each of the four myf genes encodes a highly conserved basic-helix-loop-helix region that is believed to be responsible for the binding of myf proteins to enhancer regions in muscle specific genes. In addition, each gene can convert mouse fibroblasts into cells with myogenic characteristics and in analyses of normal tissue all four genes were expressed exclusively in striated muscle (Braun et al., 1989a,b, 1990). Rhabdomyosarcomas are tumours that show differentiation towards striated muscle. Conventionally, three main histological subtypes are recognised (Enzinger & Weiss, 1988). Most (70-80%) are embryonal rhabdomyosarcomas that frequently have the microscopic appearance of foetal muscle and vary in morphology from undifferentiated round cell tumours with few discernible myoblasts to well differentiated tumours containing a high proportion of myoblasts (Enzinger & Weiss, 1988). They usually arise in the first and early second decades of life and account for 6-7% of paediatric neoplasms. Alveolar rhabdomyosarcomas, which usually occur during the second and third decades, are composed of aggregates of poorly differentiated cells that are separated by bands of dense fibrous tissue. Finally, the rare pleomorphic rhabdomyosarcomas occur later in life and are characterised by the presence of haphazardly arranged bizarre cells. Diagnosis is often problematic, particularly for poorly differentiated embryonal tumours which can be difficult to distinguish from other classes of paediatric round cell tumours, such as neuroblastoma, hepatoblastoma and non-Hodgkin's lymphoma (Enzinger & Weiss, 1988). The final tissue diagnosis frequently requires the use of supplementary techniques such as electron microscopy and, in particular, the use of immunohistochemical reagents which detect, for example, muscle-associated intermediate filaments, contractile proteins and myoglobin. The most commonly used antibodies are those directed against desmin, myoglobin, fast myosin and sarcomeric actin. Their interpretation is sometimes difficult, particularly in poorly differentiated rhabdomyosarcomas where expression of muscle-associated proteins is limited (Schmidt et al., 1988; Carter et al., 1989; Dodd et al., 1989; Carter et al., 1990).
In order to determine whether the muscle determining genes myf3, myf4, myf5 and myf6 can be used to assist in the diagnosis of rhabdomyosarcoma we have, in the present study, examined the expression of these genes in a series of rhabdomyosarcomas that included representatives from all three histological categories.
Materials and methods
Tumours and cell lines
Fresh specimens of primary soft tissue sarcomas were obtained from the Royal Marsden Hospital, London and Surrey, St Thomas' Hospital, London, and The Hospital for Sick Children, Bristol, and stored at -70°C. The cell lines RD, A204, A673 and Hs729 were obtained from the American Type Culture Collection and maintained under conditions recommended by the supplier. The RMS cell line (Garvin et al., 1986) was kindly provided by Dr Julian Garvin.
Preparation of RNA
Total cellular RNA was prepared from cell lines as described by Feramisco et al. (1982). To prepare RNA from tumour material the tumour (up to 0.1 g) was frozen in liquid nitrogen and ground into a powder with a pestle and mortar. The powder was added to 0.3 ml lysis solution (140 mM NaCl, 2 mM MgCl2, 200 mM Tris-HCl, pH 8.5, 0.5%, v/v, Nonidet P40) containing 1.3 µg ml-1 of the RNAase inhibitor Aluminon (Aldrich). The mixture was immediately vortexed for 5 s then centrifuged for 30 s at 8,000 g. The supernatant was recovered and mixed with 0.5 ml of phenol and 0.35 ml of TSE (0.5%, w/v, SDS, 5 mM EDTA, 10 mM Tris-HCl, pH 8.5), vortexed and subjected to centrifugation for 1.5 min at 8,000 g. The aqueous phase was then extracted twice with 0.5 ml phenol and once with 0.5 ml chloroform. Finally, following the addition of 2.5 volumes of ethanol, the RNA was allowed to precipitate at -20°C for 15 min, pelleted by centrifugation, redissolved in 40 µl of water and stored at -20°C.

Northern analysis
Northern analysis was performed exactly as described previously (Stratton et al., 1990) except that the hybridisation membrane was washed at 65°C with 1 x SSC containing 0.5% (w/v) SDS. The following cDNA hybridisation probes were used: a 0.8 kb EcoRI-EcoRI myf3 fragment (Braun et al., 1989a); a 1.3 kb EcoRI-EcoRI myf4 fragment (Braun et al., 1989a); a 1.1 kb BamHI-BamHI myf5 fragment (Braun et al., 1989b); a 1.2 kb EcoRI-EcoRI myf6 fragment (Braun et al., 1990); and a 1.1 kb PstI-PstI fragment of glyceraldehyde-3-phosphate dehydrogenase cDNA (kindly provided by Louise Howe). Loading of RNA samples was assessed by staining RNA gels with ethidium bromide and by hybridisation of Northern blots to a glyceraldehyde-3-phosphate dehydrogenase (G3PD) cDNA probe.

(Table I footnotes: a Embryonal (E), alveolar (A), or pleomorphic (P); b expression of myf3, myf4, myf5 and myf6 was determined by Northern analysis as described in the Materials and methods and illustrated in Figure 1; the presence of vimentin, desmin, fast myosin and myoglobin was determined using antibodies as described by Carter et al. (1989, 1990); c these samples were taken from metastatic lymph node tumours.)

The tumour panel comprised 20 rhabdomyosarcomas together with representatives of several classes of other paediatric tumours (neuroblastoma, Wilms' tumour, hepatoblastoma and non-Hodgkin's lymphoma) and of other soft tissue tumours (leiomyosarcomas). Fifteen rhabdomyosarcomas had an embryonal histology and were predominantly from children under 15. Of the remaining five rhabdomyosarcomas, three had an alveolar histology while there were single cases of pleomorphic and mixed embryonal/alveolar tumours.

To detect transcription of myf genes in primary rhabdomyosarcomas, Northern blots of total cellular RNA were hybridised to 32P-labelled cDNA probes. In these experiments (Figure 1 and Table I) a 2.0 kb myf3 transcript was detected in 17/20 tumours, a 1.8 kb myf4 transcript was detected in 15/20 tumours, and a 1.7 kb myf5 transcript was found in 11/20 tumours. Transcription of myf6 was observed in 5/18 tumours but there were several different mRNA sizes (1.1, 1.5, 2.2 and 3.5 kb) and some tumours expressed more than one transcript. Thus STS259 contained both 1.5 and 3.5 kb transcripts while STS238 contained transcripts of 1.1, 1.5 and 2.2 kb (results not shown). Comparisons of the results obtained with the four myf probes failed to reveal consistent patterns of expression (Table I). Some rhabdomyosarcomas expressed all four myf genes. Other groups of tumours expressed (a) myf3, myf4 and myf5 but not myf6, (b) only myf3 and myf4, (c) only myf4 and myf5, and (d) only myf3 and myf5. Finally two tumours, one embryonal rhabdomyosarcoma (STS249), and one pleomorphic tumour (STS23) showed no evidence for transcription of myf genes.

Several rhabdomyosarcoma cell lines were also examined for expression of the four myf genes. RNA from two lines, RD and RMS, contained abundant myf3 and myf4 transcripts but failed to hybridise to myf5 and myf6 probes (Figure 1). In contrast, for the remaining three lines, A204,
A673 and Hs729 we found no evidence for myf gene expression. An interesting correlation was observed between the presence of myf gene transcripts and cell morphology. Thus the two cell lines (RD and RMS) expressing myf3 and myf4 contained a significant proportion of spindle shaped cells that had the appearance of myoblasts while the lines that did not express members of the myf gene family had an undifferentiated appearance (results not shown). Similar results were obtained by Hiti et al. (1989) who detected MyoD1 (myf3) in RD cells, but not in A204 and A673 cells, and noticed the same correlation between MyoD1 expression and cell morphology.
To determine whether myf genes are expressed in other classes of paediatric tumours we have examined three Wilms' tumours, two neuroblastomas, two hepatoblastomas, and three non-Hodgkin's lymphomas and, as an additional control, we analysed two smooth muscle tumours (leiomyosarcomas). Transcription of myf genes was not detected in these tumours (results not shown).
Since each member of the myf gene family contains a short conserved basic-helix-loop-helix region (Braun et al., 1989a, 1990), the possibility arose that particular myf gene probes might cross hybridise to transcripts from other family members. We believe that this is unlikely since, as described above, comparisons of the levels of transcripts observed with each of the four myf gene probes revealed many distinct patterns of hybridisation. In addition, the sizes of several of the major myf6 transcripts (1.2, 1.4 and 3.5 kb) were quite distinct from those observed for myf3, myf4 and myf5 (1.7-2.0 kb). It could also be suggested that the signal resulted from contamination with normal striated muscle. Again we believe that this is unlikely because (a) care was taken to remove normal tissue before the samples were stored, (b) many of the tumours were from sites which did not contain striated muscle and (c) when compared on the same Northern blot the signal observed for tumour RNA was usually much more intense than that observed for RNA from striated muscle (result not shown).
There is some evidence that the expression of myf5 is correlated with the stage of muscle differentiation. Thus levels of myf5 transcripts were high in early foetal skeletal muscle but dropped considerably in adult muscle (Braun et al., 1989a,b). We were therefore interested to see whether the level of expression of the myf genes correlated with the degree of differentiation of rhabdomyosarcomas, which can be assessed by examining the immunophenotype defined by antibodies that detect muscle associated epitopes such as desmin, fast myosin and myoglobin (Carter et al., 1990). Myoglobin and fast myosin are usually associated with well differentiated elements that often reveal morphological features of differentiation towards striated muscle in conventionally stained sections. By comparison desmin is expressed in a broader spectrum of rhabdomyosarcomas. Unfortunately for the present fairly small groups of rhabdomyosarcomas we failed to find any correlation between myf gene expression and degree of differentiation as determined by immunohistochemical analysis (Table I).
Discussion
Northern analysis of tumour RNA has been used to demonstrate that the myf3 gene is expressed in a high proportion of primary rhabdomyosarcomas. These results are in agreement with studies carried out in other laboratories. Using a mouse MyoD1 cDNA probe, Hiti et al. (1989) detected transcripts in four out of five primary embryonal rhabdomyosarcomas, and two out of three rhabdomyosarcomas growing as explants in vitro. Similarly in studies on fresh rhabdomyosarcomas Scrable et al. (1989) detected MyoD1-related transcripts in five out of five alveolar tumours and eight out of eight embryonal tumours. We have now extended these analyses to other members of the myf gene family. Our results show that myf3 was expressed in the majority of rhabdomyosarcomas (17/20), usually together with myf4. By comparison the myf5 and myf6 genes, although yielding abundant transcripts in some rhabdomyosarcomas, were expressed in a lower proportion of tumours: 11/20 for myf5 and 6/18 for myf6.
Members of the myf gene family are apparently expressed quite infrequently in other classes of tumour. Both Hiti et al. (1989) and Scrable et al. (1989) failed to detect MyoD1-related transcripts in other groups of paediatric tumours and in soft tissue tumours. Furthermore, in the present study we failed to detect transcription of the four myf genes in Wilms' tumour, neuroblastoma, hepatoblastoma, paediatric non-Hodgkin's lymphoma and leiomyosarcoma. Since myf gene expression appears to be restricted to rhabdomyosarcoma, it is possible that the expression of these genes may prove useful in the diagnosis of rhabdomyosarcoma. In this regard myf3 and myf4, which are both expressed in a high proportion of rhabdomyosarcomas, may be particularly useful.
One embryonal tumour (STS249) and one pleomorphic tumour (STS23) failed to show myf gene expression. However both tumours expressed desmin and myoglobin (Table I) and the diagnoses were considered to be sound. It is conceivable that the absence of myf gene expression is simply a reflection of the insensitivity of Northern analysis when compared, for example, to the immunohistochemical methods that were used to detect desmin and myoglobin. A major advantage of immunohistochemical methods is that they can be used to detect expression of proteins in small pieces of tumour. Indeed, if expression of myf genes is to become widely accepted as a marker in the diagnosis of rhabdomyosarcomas, it will be necessary to produce antibodies for use in routine immunohistochemical studies which ideally could be used to examine formalin fixed tissue. In conclusion we have demonstrated that each member of the gene family myf3, myf4, myf5 and myf6, is expressed in rhabdomyosarcomas. In addition, since the great majority of rhabdomyosarcomas express one or more of these genes and their expression was not detected in other classes of paediatric tumour, they could prove extremely useful in the diagnosis of rhabdomyosarcomas. However, if this method is to be adapted for routine use in histopathology laboratories, antibodies that recognise the myf proteins will be required. Indeed it is probable that the production of these antibodies should represent a major objective of future studies. | 2014-10-01T00:00:00.000Z | 1991-12-01T00:00:00.000 | {
"year": 1991,
"sha1": "dba4d1859fa4599166756990f65b4a0125ca9604",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc1977834?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "dba4d1859fa4599166756990f65b4a0125ca9604",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
229469359 | pes2o/s2orc | v3-fos-license | Study on Smart Construction Materials and Practices Using Glass Fiber in Reinforced Concrete
Glass-fiber reinforced concrete (GFRC) is a composite material made of a cementitious matrix of cement, sand, water and admixtures, in which glass fibers of short length are dispersed. It has been widely used in the construction industry for non-structural components such as façade panels, piping and channels. GFRC provides numerous advantages, for example being lightweight and fire resistant, with good appearance and strength. Trial tests are conducted for concrete with glass fiber and without glass fiber. The differences in compressive strength and flexural strength have been found in this investigation by using cubes of varying sizes. The various applications and advantages of GFRC shown in the study, the techno-economic comparison with other material types, the test results, and also the economic calculations presented demonstrate the potential of GFRC as an alternative construction material.
Introduction
Glass Fiber Reinforced Concrete (GFRC or GRC) is concrete reinforced with fiber. In addition to sand, GFRC incorporates a hydraulic binder, such as cement or lime, together with glass fibers. Glass fibers were first developed and used in Russia for reinforcing concrete and cement. Unfortunately, the highly alkaline Portland cement matrix attacked them, and alkali-resistant glass fibers were therefore developed along those lines.
In addition, GFRC is a type of concrete that uses fine sand, cement, polymer (normally an acrylic polymer), water, various admixtures and alkali-resistant glass fibers. Many mix designs can be accessed freely on various websites, but all are similar in their ingredient proportions. Glass fiber reinforced cementitious composites were primarily developed to build thin sheet elements, with a paste or mortar matrix, and 0.5%, 1.0% or 1.5% fiber content. Different uses and applications were considered, either by making reinforcing bars with continuous glass filaments consolidated and impregnated with plastics, or by making comparable small, rigid units impregnated with epoxy. The fibers are dispersed into the mix during blending. Glass fibers are produced as molten glass is drawn through the base of a heated platinum tank, or bushing, in the form of filaments; 204 filaments are drawn at the same time. They harden as they cool on leaving the heated tank and are gathered on a drum into a strand comprising the 204 filaments. The filaments are coated with a sizing which protects them against weathering and abrasion effects, as well as binding them together in the strand [2].

Plain concrete is brittle and fails under tensile stress. The use of fibers in concrete has greatly increased both its compressive as well as tensile strength. The concrete blends employed alkali resistant glass fibers. Reinforcing and prestressing technology, in the form of high tensile steel wires, has managed to overcome concrete's inability to carry tension; however, the toughness and cracking tolerance have not been enhanced. Gaurav Tuli and Ishan Garg [2] have observed that plain concrete possesses very low tensile strength, weak ductility and minimal cracking resistance. The strain properties as well as the resistance to cracking, ductility, flexural strength and toughness are improved when fibers are added to concrete in certain percentages. Various works have been done to improve the flexural strength of concrete, using various forms of additives and admixtures. The addition of glass fibers provides concrete with resistance to tension [3]. Concrete is one of the most commonly used construction materials, often produced from locally available materials. The compressive strength, tensile strength and split tensile strength are significantly enhanced when these fibers are added to concrete. In this analysis, glass fiber concrete measurements were conducted by adding fiber at 0.5 percent, 1 percent and 1.5 percent of cement as an admixture. Varma [4] studied glass fiber reinforced concrete as one of the most versatile construction materials available to architects and engineers. Mainly made of cement, sand and special alkali resistant (AR) glass fibres, GRC is a thin, high-strength concrete with many building applications. The workability of M20, M30, M40 and M60 grades of concrete was estimated in terms of compaction factor for the addition of 0.03% of glass fiber. It was observed that with the addition of glass fibers, a compaction factor of 0.93 to 0.97 was maintained for almost all grades of concrete.
Principal aim of Study
In this experimental study, the objectives are as follows:
• Study the mix design aspects of GRC.
• Understanding the applications of GRC.
• Comparing GRC with different materials, for example, stone, aluminum, wood, glass, steel, marble and rock.
• Performing laboratory tests, for example compressive, tensile and flexural tests, utilizing glass fiber in the concrete pour.
Properties of Glass Fiber Reinforced Concrete
While the fiber carries the load, the matrix provides stiffness and durability. These two constituents, when combined, are able to cope with high loads. The composite also resists permanent deformation and chemical attack. Glass fibers can raise the tensile strength several-fold and the impact resistance by around 100 times. Cyclic loading experiments on glass fiber concrete have been used to compare the fatigue behaviour of glass fiber reinforced cement (GFRC) with that of steel fiber reinforced cement (SFRC).
Experimental Work
In this experimentation, an attempt has been made to determine the strength of plain concrete and of the fiber-modified concrete for the M20 grade. This examination determines the properties of the concrete materials and the concrete quality. The mix design was carried out for M20 grade concrete per IS 10262-2009, including the water content. The materials were mixed by a machine mixing process.
Specification of Specimen
The number of specimens cast was as per the details below. The cubes measure 150 × 150 × 150 mm; the cylinders have a diameter of 150 mm and a height of 300 mm; and the prisms measure 100 × 100 × 500 mm.
Compressive Strength
Compressive strength is the ability of a given material or structural part to sustain loads that tend to reduce its size. The experimental test for the compressive strength of the cubes was carried out in a compression testing machine as per IS 516: 1964. The cubes were loaded at a rate of 140 kg/cm² per minute and the maximum loads were recorded.
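As a sketch of the calculation implied above (the formula is not stated in the source, and the symbols here are assumptions), the cube compressive strength follows from the recorded maximum load:

$$f_c = \frac{P_{\max}}{A}$$

where \(P_{\max}\) is the maximum load at failure and \(A = 150 \times 150\ \mathrm{mm^2}\) is the loaded face of the cube.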
Flexural strength
Flexural strength is defined as the ability of a beam or slab to withstand or resist failure in bending. It is measured by loading unreinforced concrete beams with a span three times the depth. The flexural strength is expressed as the "Modulus of Rupture" (MR) in N/mm². The MR is calculated as follows.
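The expression itself is missing from the extracted text. Assuming the usual IS 516 prism test with the load applied through third points (the symbols are assumptions, not from the source), the standard form is:

$$MR = \frac{P\,l}{b\,d^{2}}$$

where P is the failure load, l the span, b the width, and d the depth of the prism; this form applies when fracture occurs within the middle third of the span.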
Applications
The following are the applications of GFRC in the construction works:
Uses
• GFRC can outlive steel reinforced concrete and is extremely durable. It is also reliable and safe.
• Design freedom.
• It is made in a mold, so GFRC can take on any shape and texture along with color.
• Requires low maintenance.
• Resistant to fire and climate.
• Economical and cost-effective.
Conclusions
The following are the conclusions drawn from the investigation on the addition of glass fiber to concrete. With a 0.5 percent addition of fiber, the increase in compressive strength is 13 percent, the increase in flexural strength is 42 percent, and the increase in split tensile strength is 20 percent over plain concrete. With a 1 percent addition of fiber, the increase in compressive strength is 35 percent, the increase in flexural strength is 75 percent, and the increase in split tensile strength is 37 percent. Reinforcing with glass fiber therefore contributes greatly to upgrading the compressive strength of concrete, with an increase of up to 1.78 times that of plain concrete. From the test results, it is found that the glass fiber concrete has high flexural strength. After heating the concrete at 300 °C for 2 hours, there is a reduction in compressive strength; this result is shown by the fire-resistance test. Without the addition of fiber, the reduction in compressive strength is 32 percent of the original strength; with a 0.5% addition of fiber it is 25 percent; and with a 1 percent addition of fiber the compressive strength reduces by only 10 percent. This experimental examination shows a higher resistance of fiber reinforced concrete to fire when compared with ordinary concrete; glass-fiber concrete thus has superior fire-resistant qualities. It is far superior to ordinary concrete and is also higher in flexural strength. In particular, it is intended to strengthen and offer stable support to whatever element it is cast into, whether in concrete or GFRC. | 2020-11-26T09:04:12.337Z | 2020-11-21T00:00:00.000 | {
"year": 2020,
"sha1": "6cc417e85e9aea125767cfa894066f45fe295805",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/955/1/012033",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3b5421a889608d94335fe9138ca42e4e92b4649e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
250161621 | pes2o/s2orc | v3-fos-license | An Application of Axiomatic Design to Improve Productivity in the Circular Economy Context—The Salt Production Example
: Sustainability and a circular economy (CE) are crucial for the development of society. The CE approach should start by designing new products or processes or retrofitting existing ones to achieve the best efficiency and extend their life cycle. Designs that enable CE require the guidance of a design theory. Axiomatic design (AD) theory allows for the classification of designs and achieving the targets if appropriate requirements are adopted. This paper aimed to show that sustainability and productivity can be made compatible by ensuring functional independence, as defined in AD, and using the circular economy concept. The paper presents how a salt washing machine could be improved concerning its performance. The analysis of the existing design showed fewer design parameters than functional requirements. A viable enhancement was the addition of one design parameter, which made it possible to control the separation and washing independently. The resulting machine retrofitting increased the production rate by 20% to 30%; the productivity and the quality of the final product were also improved. The washing process now used less water and energy. Moreover, the brine feeding system was also redesigned, so that the brine was now reused, the land use was reduced as was the operating time, and the operators now worked in a more friendly environment. The industrial case study presented in this paper is an example of innovative engineering design that fits design science research (DSR), with the generation of new knowledge. The objective of this design solution was to increase the efficiency of the entire process and consequently increase productivity and sustainability.
Introduction
The application herewith presented is the overall upgrading of an already existing salt washing process to make it consistent with the circular economy (CE). The washing process works with salt-saturated brine, obtained by the evaporation of seawater, and includes a salt washing machine, whose specific improvement consists of conforming it to the AD's independence axiom, as described in a previous paper [1]. Furthermore, this paper describes the matching of the upgraded washing machine with the brine recycling subsystem, which also conforms to the AD's independence axiom and makes the whole washing process much more sustainable, because it becomes closer to the circular economy concept.
The circular economy became a relevant societal issue in 2002 after the publication of the book Cradle to Cradle: Remaking the Way We Make Things [2]. According to the Ellen MacArthur Foundation, "a circular economy is a global economic model that decouples economic growth and development from the consumption of finite resources. It is restorative by design and aims to keep products, components, and materials at their highest utility and value, at all times" [3].
We are facing a paradigm shift: investing in the design, so that there is no waste, instead of investing in the use of waste. Keeping material cycles clean is the objective of the circular economy. It implies preserving the function and value of the products, components, and materials at the highest possible level as well as extending their lifespan.
The transition from a linear to circular economy implies changing the concept of what a product is. The product becomes a functionality/performance source rather than an added value item, and the business model now incorporates the end-of-life of the product [4].
In this context, products and their manufacturing processes typify the development of the circular economy. Product design determines the feasibility of assembly and manufacturability in a linear economy. Additionally, product design and the manufacturing process design determine the potential to achieve circularity. This paper presents an interpretation of the productivity of manufacturing systems from the circularity point of view. After a brief note about the inclusion of design in the circularity endeavor, the design science research of vom Brocke et al. [5] and the AD principles are presented to support this approach. Next, the retrofitting of a salt washing process was conducted by using the AD principles. The process was also improved so that the new solution reused the brine that became contaminated after the washing process. Consequently, it reduced the use of resources, namely, seawater, land area, and time. Finally, the paper presents the results and conclusions of the contribution of axiomatic design principles to move toward a circular economy through the growth of productivity, if this is attained by ensuring functional independence. The example presented in this paper shows that engineering design based on a design theory can solve technical problems and create scientific knowledge related to the circular economy.
The Perspective of Manufacturing Systems under the Circular Economy Point of View
The design of production systems involves the production process and production management. The former concerns the manufacturing technologies and materials flow and handling, and the production management focuses on the flow of information and signals. One must plan these two areas to perform according to the specified needs [6]. Additionally, good production management requires that the selected conceptual solutions for the whole process result from a set of independent functional requirements, ensuring the robustness of the process.
On the other hand, any manufacturing system aims to maximize productivity [6], as defined by Equation (1).
$$\text{Productivity} = \frac{\text{Total added value} - \text{Production costs}}{\text{Total investment}} \tag{1}$$

Productivity increases by increasing the total added value or by decreasing the production costs or the investment. Regarding the business operations, the added value of a product (or process or service) is created by its functions, performance, suitability, quality, and price [1]. The added value from Equation (1) results from the activities developed in several areas, which relate to the marketing and sales, product development, manufacturing, assembly, quality, and logistics.
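For illustration only, with assumed figures: an annual added value of 1.0 M€, production costs of 0.6 M€, and a total investment of 2.0 M€ give

$$\text{Productivity} = \frac{1.0 - 0.6}{2.0} = 0.2\ \text{per year},$$

so raising the added value, or cutting the production costs or the investment, raises the ratio.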
Regarding the manufacturing processes, two modes allow for the increase in the added value: (1) a better use of available time, which means obtaining more value through the rise in the produced quantities per time unit; and (2) an enhanced quality of the manufacturing operations, which enables getting the specified functions right the first time [6].
Integrating life cycle costs by adding the production costs, total investment, and environmental and societal costs leads to sustainability. A circular economy (CE) is a crucial strategy for sustainability. The added value must be reached via functional performance regarding the contribution to the environment and society. The current model of valuing technical features and added value to the economy does not relate to sustainability. Technological solutions that imply the rationalization of the use of natural resources and allow for the substitution of components to increase the life of manufacturing systems meet the purposes of a CE. This paper shows that productivity and circularity are not antagonistic by using the example of an industrial process.
A Brief Perspective of Product Design towards the Circular Economy
The current literature essentially presents two approaches of product design toward a circular economy.
The first is a significant number of design guidelines based on Design for X (design for disassembly, modularity, maintainability, etc.). The guidelines are technical recommendations for decision-making through the different design steps [7,8]. They denote a set of wishes such as long-life products, easy recycling, product-oriented services, and materials and energy sustainability.
The second approach relies on several indicators (usually designated by c-indicators) that quantify the evolution toward circularity during the design and development process. It is accepted that c-indicators are important to assist and monitor the CE transition [9]. The objective is to consider all possible iterations of the circular economy processes.
Despite the many design guidelines and c-indicators, there is a lack of theories that can support decision-making throughout the design process toward circularity [10]. This paper was based on the following assumption: uncoupled or decoupled designs (according to axiomatic design theory) are good designs in terms of efficiency and, consequently, in terms of productivity and sustainability. This assumption is based on empirical observations of the effect of independence. Although the general validity of the assumption has not been demonstrated at this time, the increase in efficiency has been observed in many real cases, as expected from AD's independence axiom.
Bearing in mind product design as a key thought toward a circular economy, one has to look at engineering design as a scientific subject, since it follows an approach other than "routine design". This allows for the creation of scientific knowledge from the product, from the approach, or from both.
The Basics of Design Science Research
Design science research is an approach to support the solution of problems through the development of innovative artifacts, which leads to new design knowledge and to the understanding of the problems [5]. This approach allows the questions presented by the sciences of the artificial to be answered, as expressed by Herbert Simon, when he said "... we introduce 'synthesis' as well as 'artifice,' we enter the realm of engineering. For 'synthetic' is often used in the broader sense of 'designed' or 'composed.' We speak of engineering as concerned with 'synthesis,' while science is concerned with 'analysis.' Synthetic or artificial objects and more specifically prospective artificial objects having desired properties are the central objective of engineering activity and skill. The engineer, and more generally the designer, is concerned with how things ought to be, how they ought to be in order to attain goals, and to function." [11] (p. 4).
There are some models concerning DSR, generally presented for application in information systems [12], which are appropriate for engineering design in general, and even to improve engineering education research [13]. However, "In case knowledge is already available to solve a problem identified, this knowledge can be applied following 'routine design,' which does not constitute DSR" [5]. By 'routine design', one means that the conceptualization of the design is developed using codes or algorithms.
As mentioned in [5], the design science research model proposed by Peffers et al. [14] is the most widely referred. This model is composed of six steps called activities: Activity 1: Problem identification and motivation, where the specific research problem is defined and the chosen solution is justified. Activity 2: Definition of the solution objectives that come from the problem definition as well as the previous knowledge about what is possible and feasible. Activity 3: Design and development where the artifact is created. Activity 4: Demonstration, where the use of the artifact is shown as a means to solve one or more instances of the problem.
Activity 5: Evaluation, where how well the artifact fits the chosen solution is observed and measured.
Activity 6: Communication of the problem and its importance, of the artifact and its utility and novelty, of the design accuracy as well as of its effectiveness to researchers and other relevant audiences such as practicing professionals, when appropriate.
The Basics of Axiomatic Design
In the axiomatic design (AD) terminology, the design is made along with the four design domains: the customer, the functional, the physical, and the process domains [15] (p. 10).
These domains are shown in Figure 1 and, according to Gummus et al. [16], their contents can be described as follows:
"Customer Domain": Contains the Customer Needs (CNs) (i.e., the attributes that the customer seeks in the product or in the system that must be designed).
"Functional Domain": Contains the Functional Requirements (FRs) of the design object. In a good design, they are the minimum set of independent requirements that completely describe the functional essentials of the design solution. In any new design, the FRs should be defined in a solution-neutral manner. Reverse engineering, however, has no theoretical foundation, because one cannot infer the FRs by just looking at the DPs. In other words, "FRs can only be guessed" [17] (p. 205).
"Physical Domain": Contains the Design Parameters (DPs) of the design solution. The DPs are the elements of the design solution that are chosen to satisfy the specified FRs.
"Process Domain": Contains the Process Variables (PVs) that characterize the production process of the design solution (i.e., the variables that allow for the specified DPs to be attained).
For each pair of adjacent domains in Figure 1, the left one represents "What is required to achieve", or the goals, while the right domain focuses on "How to achieve the goals", that is, the design solution. This is conducted by mapping between the goals and the way to achieve them.
The existing constraints to this mapping are the bounds for the acceptable solutions and are classified as "input" and "system" constraints [15] (p. 14). The input constraints are defined at the start of the design process, and the system constraints are found during the design process.
The conventional product design process starts by selecting the higher level FRs and DPs and qualitatively evaluating the probability of success of such a solution. This procedure allows for unworkable solutions to be discarded before trying any more detailed development. "Because the final design cannot be better than the set of FRs that it was created to satisfy" [18] (p. 26), then seeking a proper set of FRs is the first step to finding a good solution.
The following sections present some high-level solutions with their qualitative appreciations. A unique concept in AD is the hierarchical decomposition by zigzagging between contiguous domains, as depicted by the red arrows in Figure 1. This decomposition advances in a top-bottom way, beginning at the system level, and continuing to levels of more detail, until enough detail is achieved so that all of the stakeholders are satisfied with the resulting design object.
Therefore, the decomposition defines the father/child relationships between the FRs belonging to different levels of the decomposition.
The fundamental hypothesis of AD is that two axioms govern good design practice: -The independence axiom (the first axiom): Maintain the independence of functional requirements. The first axiom means that adjusting each DP should just affect one FR.
-The minimum information axiom (the second axiom): Minimize the information content of the design.
The second axiom's purpose is to help find the alternative design solution with the minimum information content, which is the one with the highest probability of achieving the FRs.
The mapping between the FRs and DPs is denoted by Equation (2), the "design equation", where {FR} is the "FR vector", {DP} is the "DP vector", and [A] is the "design matrix" (DM):

$$\{FR\} = [A]\{DP\} \tag{2}$$

According to the independence axiom, the number of DPs should equal the number of FRs; Theorem 4 of AD states "In an ideal design, the number of DPs is equal to the number of FRs and the FRs are always maintained independent of each other" [19] (p. 45).
In the example of Equation (3), each FR is affected by just one DP. This means that every FR can be achieved independently from the others. In Equation (4), DP 1 should be tuned first to achieve FR 1 , but this impacts on all of the other FRs. Next, DP 2 could be tuned to attain FR 2 , but this also affects FR 3 . Finally, DP 3 could be tuned to achieve FR 3 . Therefore, choosing the right order to tune the DPs allows for the FRs to be achieved as in the case of Equation (3) (i.e., as if they were independent).
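Equations (3) and (4) themselves did not survive the text extraction. From the surrounding description, they have the standard axiomatic design form, with X marking a nonzero entry (a reconstruction, not the paper's exact notation):

$$\begin{Bmatrix} FR_1 \\ FR_2 \\ FR_3 \end{Bmatrix} = \begin{bmatrix} X & 0 & 0 \\ 0 & X & 0 \\ 0 & 0 & X \end{bmatrix} \begin{Bmatrix} DP_1 \\ DP_2 \\ DP_3 \end{Bmatrix} \tag{3}$$

$$\begin{Bmatrix} FR_1 \\ FR_2 \\ FR_3 \end{Bmatrix} = \begin{bmatrix} X & 0 & 0 \\ X & X & 0 \\ X & X & X \end{bmatrix} \begin{Bmatrix} DP_1 \\ DP_2 \\ DP_3 \end{Bmatrix} \tag{4}$$

Equation (3) is an uncoupled (diagonal) design; Equation (4) is a decoupled (lower-triangular) design, where tuning the DPs in the order DP1, DP2, DP3 achieves the FRs as if they were independent.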
Any DM that cannot turn into a diagonal or triangular matrix just by changing the positions of rows and/or columns corresponds to a "coupled" design and should be avoided, as per Theorem 4.
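To make the diagonal/triangular test concrete, the following is a minimal sketch (my own illustration, not code from the paper; the function name and the brute-force approach are assumptions) that classifies a small design matrix:

```python
# Classify a design matrix as uncoupled, decoupled, or coupled, following
# the axiomatic design definitions above. Brute-force permutation search,
# so it is only practical for small matrices (the 3x3 and 4x4 cases here).
import itertools

import numpy as np


def classify_design_matrix(A, tol=1e-9):
    """Return 'uncoupled', 'decoupled', or 'coupled' for design matrix A."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    if n != m:
        return "coupled"  # Theorem 4: an ideal design has as many DPs as FRs
    nz = np.abs(A) > tol  # pattern of nonzero FR-DP couplings
    if not (nz & ~np.eye(n, dtype=bool)).any():
        return "uncoupled"  # diagonal matrix, Eq. (3) style
    # Decoupled if some reordering of rows (FRs) and columns (DPs)
    # yields a lower-triangular pattern, Eq. (4) style.
    for rows in itertools.permutations(range(n)):
        for cols in itertools.permutations(range(n)):
            if not np.triu(nz[np.ix_(rows, cols)], k=1).any():
                return "decoupled"
    return "coupled"


print(classify_design_matrix([[1, 0], [0, 1]]))  # uncoupled
print(classify_design_matrix([[1, 1], [1, 0]]))  # decoupled (swap the rows)
print(classify_design_matrix([[1, 1], [1, 1]]))  # coupled
```

Checking only the lower-triangular pattern is enough, since any upper-triangular arrangement becomes lower-triangular when both permutations are reversed.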
Since the design process does not necessarily lead to a unique solution, the information axiom should be used to compare the alternative solutions and select the one with the highest probability of achieving the FRs [20].
According to AD, coupled designs must be avoided or redesigned so that they become decoupled or uncoupled designs. Hence, a question arises: how do we turn coupled designs into decoupled or uncoupled designs? Different approaches can be applied. For example, if the number of DPs is less than the number of FRs, one should make them equal. In fact, "When a design is coupled because of a larger number of FRs than DPs (i.e., m > n), it may be decoupled by the addition of new DPs so as to make the number of FRs and DPs equal to each other if a subset of the design matrix containing n × n elements constitutes a triangular matrix" (Theorem 2) [19] (p. 45). This was the approach applied in the example presented in this work.
The DM allows for the couplings to be identified and the DPs that cause the couplings must be changed. The design ranges should be made as large as possible, since increasing the design ranges can turn a coupled design into a decoupled or uncoupled one, as per Theorem 20: "If the design ranges of uncoupled or decoupled designs are tightened, they may become coupled designs. Conversely, if the design ranges of some coupled designs are relaxed, the designs may become either uncoupled or decoupled." [19] (p. 47).
Additionally, "When a given set of FRs is changed by the addition of a new FR, by substitution of one of the FRs with a new one, or by the selection of a completely different set of FRs, the design solution given by the original DPs cannot satisfy the new set of FRs. Consequently, a new design solution must be sought" (Theorem 5) [19] (p. 45). Therefore, new solutions cannot be attained by the simple adjustment on DPs, and a complete reevaluation of the design is required. Theorem 5 focuses on a widespread design mistake. During the design process, new ideas come along. The simple question of "why not?" when trying to make the artifact perform an extra task is costly and baffling. Solving "why not?" by adding a new DP may turn the design into a coupled design.
The following considerations apply to the next sections. Regarding the application of AD to assist the CE, two approaches are feasible: (i) defining FRs that express the basics of the CE and sustainability; and (ii) adopting Cs related to sustainability and the CE in designs where some FRs express economic and technical questions. This paper used the second approach, since it is more suitable for the revamping of existing systems.
Based on the analysis of design matrices, one can infer that uncoupled and decoupled designs lead to increased efficiency. This happens because they prevent the losses that might come from couplings. These couplings do not exist in uncoupled designs and are easy to avoid in decoupled designs by choosing the appropriate order to adjust the DPs, as in the case of Equation (4).
The Industrial Salt Washing Process
The industrial process of sea salt harvesting in traditional crystallizer ponds collects a mixture of salt crystals and clay. The clay particles come from the bottom of the ponds. Figure 2 shows some of the aspects of the industrial sea salt harvesting process.
Building crystalline structures warrants no clay being inside the salt crystals, and one needs to separate the clay particulates from the crystals. The typical way to separate salt from the clay is to dilute the mixture through agitation in a liquid phase that does not dissolve the salt particulate. The liquid phase commonly used is an aqueous solution saturated with salt, called brine. Brine cannot dissolve more salt, thus balancing the salt that will likely dissolve and the salt that recrystallizes. Therefore, slow speed-controlled precipitation enables separation.
Figure 3 depicts the salt washing process through stirring: quick stirring enables the mixture to be diluted, while slow stirring separates the salt from the clay particulates. Salt washing traditionally uses machines with a tank where the salt with clay is diluted in salt-saturated brine. A screw conveyor shakes the mixture to disperse the clay and salt crystals in the brine. The rotation of the screw conveyor also splits the salt from the clay through separate drainage: the salt is removed from the washing tank through the underflow outlet at the higher side of the tank, while the brine with clay leaves the tank through an overflow outlet on the lower side. Some examples of existing salt washing machines are depicted in Figure 4.
Axiomatic Design Analysis
The adopted strategy was to understand the actual process and guess the FRs. There is empirical evidence that the efficiency of systems increases with the independence of the functionalities. According to AD, one must carefully define the design's main goals at the onset of the process. The design process can proceed only after clearly stating those goals [21]. Figure 5 shows the primary customer need: to separate the clay from the valuable washed salt.
From the Customer Needs, the high-level function at the first hierarchical level is:
FR1 - Wash salt
Corresponding in the physical domain to:
DP1 - Salt washing machine
On the second level of decomposition, it is necessary to accept the mixture of salt with clay (FR11), dilute the mixture (FR12), extract the clay (FR13), and segregate the salt (FR14). Figure 6 schematically shows the ideation of the FRs.
Figure 7 represents the solutions (DPs) to achieve the above-mentioned functions (FRs) in the physical domain. Each FR requires a DP to be fulfilled. In the machine under analysis, the DPs are the washing tank (DP11), brine (DP12), and screw conveyor (DP13). The screw conveyor should allow for FR13 and FR14 to be achieved.
The input Cs to improve the salt process toward sustainability are as follows:
Cs1 - Use a certain area of land.
Cs2 - Limit the use of energy.
Cs3 - Restrict the work on weather conditions.
As a system constraint, Cs4 - No salt should dissolve in the process.
Table 1 pairs the FRs with the available DPs (FR11 Accept mixture (salt with clay) - DP11 Washing tank; FR12 Dilute mixture - DP12 Brine; FR13 Extract clay - DP13 Screw conveyor; FR14 Segregate salt - no dedicated DP) and shows that the design violates Theorem 1, as the number of DPs is less than the number of FRs. Figure 8 shows the FRs and DPs as they occur along the salt washing machine. Figure 9 displays a traditional salt washing machine.
In the existing washing machine, the same DP, the rotating shaft, implements two FRs. These two functional requirements exhibit a coupling: fast stirring of the mixture is required to extract the clay, while slow stirring allows the salt to be segregated. Therefore, in Equation (5), the design matrix for the second hierarchical level has more FRs than DPs, so the design matrix is not square:

$$\begin{Bmatrix} FR_{11} \\ FR_{12} \\ FR_{13} \\ FR_{14} \end{Bmatrix} = \begin{bmatrix} X & 0 & 0 \\ 0 & X & 0 \\ 0 & 0 & X \\ 0 & 0 & X \end{bmatrix} \begin{Bmatrix} DP_{11} \\ DP_{12} \\ DP_{13} \end{Bmatrix} \tag{5}$$
The third-level FRs are fulfilled independently and are not relevant to the following discussion.
As one can see, the traditional salt washing machines were coupled, since the number of DPs was smaller than the number of FRs at the second level. Their functional requirements were never fully satisfied: one cannot sufficiently increase the shaft speed to warrant effective clay extraction, and the shaft speed cannot be as slow as required without affecting the salt segregation. This coupling impedes the best use of the machine. To produce a certain quantity of washed salt, the amount of brine used and the time taken are larger than they would be if the functional requirements of the machine were independent.
Hence, the design team decided to retrofit the machine by searching for a solution with independent FR 13 and FR 14 .
The Retrofitting of the Washing Machine-Based on the Independence of Functions
The new solution divides the screw conveyor into two sections with independent rotation movement: a high-speed conveyor in the extraction section of the tank and a low-speed conveyor in the segregation section. Figure 10 shows the design parameters for a two-screw conveyor salt washing machine. At the second hierarchical level, the design parameters of the new design were as follows:
• DP11 - Washing tank;
• DP12 - Brine;
• DP13 - Screw conveyor 1;
• DP14 - Screw conveyor 2.
The salt and the clay find their way from conveyor Section 1 to conveyor Section 2 by themselves. The hierarchical zigzag decomposition is depicted in Figure 11 (the zigzag path of the decomposition of a split screw salt washing machine, adapted from [1]). With the new DP, as shown in Equation (6), the design matrix became square, and the design became decoupled.
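The matrix of Equation (6) also did not survive extraction. One plausible form consistent with the text (a square, decoupled matrix in which the new DP14 handles salt segregation; the off-diagonal X is an assumption) is:

$$\begin{Bmatrix} FR_{11} \\ FR_{12} \\ FR_{13} \\ FR_{14} \end{Bmatrix} = \begin{bmatrix} X & 0 & 0 & 0 \\ 0 & X & 0 & 0 \\ 0 & 0 & X & 0 \\ 0 & 0 & X & X \end{bmatrix} \begin{Bmatrix} DP_{11} \\ DP_{12} \\ DP_{13} \\ DP_{14} \end{Bmatrix} \tag{6}$$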
The Process of Brine Recycling
As explained earlier, the salt washing process uses stirred salt-saturated brine to remove the clay from the surface of the salt crystals.
The mass of washed salt depends on the used amount of salt-saturated brine. The retrofitted washing machine required more brine than the original one due to its higher production rate.
The brine is produced from the seawater that enters large and shallow ponds to evaporate and increase the salt concentration. The free surface of those crystallizer ponds is huge, so they ensure a substantial solar energy absorption to grant a large evaporation rate. Figure 12 shows the process of producing brine by evaporation in two ponds with increasing salt concentration.
The brine concentrating process is time-consuming. It requires a large land area, and the evaporation rate also depends on the weather conditions. Salt begins precipitating once the brine attains the point of salt saturation, since the evaporation does not stop. The air temperature, humidity, and air velocity near the free surface of the ponds are of paramount importance. Additionally, the occurrence of rain over the ponds decreases the salt concentration in the brine, making salt harvesting a typical summertime activity. The lack of control over the weather conditions and the larger amounts of brine required by the improved washing machine dictated the review of the processes related to brine utilization.
The evaporation rate can be improved in several ways. A conceivable way is to blow the free surface of the brine to increase the air velocity; for this purpose, fans could be used. In this case, the top FR and DP would be:
FR0 - Increase the air velocity near the water surface.
DP0 - Fans.
However, the energy consumption would be very high given the large size of the crystallizer ponds, and the probability of success would still depend on the weather conditions. This design was discarded because it violates Cs2.
Another possible solution would be to heat the brine to speed up the water evaporation using solar thermal panels. In this case:
FR0 - Increase the salted water temperature.
DP0 - Solar thermal panels.
Although this solution seems more feasible than the former, the vast land area required to deploy the solar panels made the design team look for other alternatives because it violates Cs 1 .
The chosen alternative was to reuse the brine that becomes soiled with the washing process, after removing by decanting most of the clay and other solid contamination. Next, the still salt-saturated brine feeds the salt washing machine again.
The dirty salt-saturated brine enters decanter tanks that occupy a small area but are much deeper than the crystallizer ponds. The brine is transferred from one tank to another, so that clay sediments and other impurities are deposited at the bottom of the tanks. The transfer should run against the direction of the prevailing winds, so that the dirty foam that forms at the free surface of the brine is retained. Decanting is a much faster process than evaporation for obtaining salt-saturated brine that is clean enough to use in the salt washing machines. Figure 13 represents the brine recycling process by decanting. The recycled brine is not completely free of clay, and the deposited clay needs to be removed periodically from the bottom of the decanter tanks.
The salt washing process can be made more independent of the weather conditions because the washing machine and the small area decanter tanks can be installed under a large roof. Consequently, salt washing can be conducted at any time, including in the winter, thus ensuring Cs3, and the workers assigned to the process can work in a much more friendly environment. During the winter, they would be protected from the rain, and the sun would not hit them directly during the summer.
Figure 14 shows the "brine cycle", from the brine intake of the washing machine to the gathering of clay and other solid impurities at the decanter tanks. Then, the recycled brine returns to the washing machine to feed the process again. The recycled brine remains salt-saturated and still contains some clay, but high purity is not essential. Anyway, a small amount of clean or "virgin" brine is used to finish the washing process, as shown in the figure.
Despite the mutual feeding, the operation of the two processes is completely independent. Washing and decantation are independent because they can run with any proportion of recycled brine and virgin brine; therefore, there is no feedback between the two processes.
The zigzagging related to the salt production system, with the brine production subsystem included at the third level of decomposition, is depicted in Figure 15.
Results and Discussion
Equation (8) shows the design equation of the process up to the third level of decomposition. The third level corresponds to the decomposition of FR11 and FR12. The equation shows that the design is a decoupled design.
The new design has decoupled functions that allow for the speeds of the two screw conveyors to be consecutively adjusted. The new solution improved the separation and segregation functions, thus increasing the production by 20% to 30%. The solution improved the final quality of the product. The whiteness of the salt allowed us to evaluate the final perceived quality of the salt: a whiter salt has fewer impurities than a darker salt. Moreover, the same amount of washed salt required less water and energy, following the CE objectives. Additionally, the new solution is socially more acceptable, since it reduces the amount of hard work to produce the same quantity of washed salt, thus increasing the life cycle and performance of the process.
The production is mostly mechanized. In the day-to-day work, three people are needed; during salt collection, about ten. The new production reduced the time of exposure of the personnel to non-mechanized activities under weather conditions by 30%.
Therefore, retrofitting is a relevant point in the CE approach.
The proposed solution added one DP to the existing machine to control the separation and the washing functions in an uncoupled way. The final configuration of the new salt washing machine encompassed the division of the screw conveyor into two conveyors with independent rotation speeds, allowing for a higher rotation speed in the clay extraction section and a lower velocity in the salt segregation section. Moreover, it included a brine recycling process that avoids using virgin brine.
The salt flat used to produce virgin brine was reduced from 68,200 m² to 19,900 m². This released about 48,000 m², representing 70% of the original salt flat area. The 48,000 m² of land freed for other activities is about 10% of the total plant area (483,000 m²).
FR 13 and FR 14 were now decoupled and achieved by sequentially adjusting the speeds of the two screw conveyors. The new solution improved the separation and segregation functions while increasing the washing speed by 20% to 30%. The process productivity and the final quality of the product were also improved. Finally, the same amount of washed salt was obtained more efficiently, with less water, land area, and energy.
In terms of DSR, one can see that the activities were accomplished: Activity 1 corresponds to the definition of the AD's Customer Needs (the increment of productivity and sustainability of the salt washing process).
Activity 2 corresponds to the definition of the AD's functional requirements and of their independence, which was conceptualized by the inclusion of a new design parameter.
Activity 3 corresponds to the proposed engineering solution that must obey the AD's independence axiom. This was accomplished by adding the new DP14, as shown in Equation (6). Activities 4 and 5 correspond, in practice, to evaluating that the process was operational and that its objectives were achieved.
Activity 6 was expressed in a previous paper [1] and in the current paper, with the agreement of the owner of the industrial system.
Conclusions
Sustainability concerns the economy, environment, and society. The circular economy is a crucial strategy for sustainability. This paper proposes that sustainability should start at the design stage. This standpoint applies to the redesign or retrofit of products or processes. Axiomatic design (AD) allows for a design to be ruled by applying axioms, especially the independence axiom. This paper presents an example of improving a salt washing process regarding sustainability. The improvement in the salt production took place in two steps: retrofitting the washing machine and adding the brine recycling subsystem. The revised design solution increased the salt production rate and reduced the resources needed, namely, saltwater, land area, and time.
The traditional washing machines used the same DP (the screw conveyor) to attain two functional requirements, so the number of DPs was smaller than the number of FRs. As per AD's Theorem 1, the design was coupled, which is a poor design. The analysis of the design matrix in Equation (5) revealed this condition, which usually indicates the poor performance of the corresponding design solution.
The new design accomplished a decoupled design solution with a much better performance. The new DP was a second screw conveyor. Moreover, the new design recycled the dirty brine by decantation.
The new design solution reached the following benefits: -Improved productivity; -Reduced consumption of brine, water, and energy by about 30%; -Increased productivity by between 20% and 30%; -Increased perceived quality level of the final product; -Reduced weather dependency of the process; -Reduced land area use; -Improved quality of the working environment.
Axiomatic design allows for the analysis and identification of the origin of the problem, thus also allowing a better solution to be found that increased the productivity and improved the environmental sustainability, based on the AD's independence axiom. This case was developed without the use of codes or algorithms, so it is not a "routine design".
Finally, this paper contained some empirical evidence (as expected from the application of AD theory) that decoupled designs are good solutions regarding efficiency, productivity, and sustainability. Since the development of this case study agrees with the DSR model, it has scientific value in the context of design science and the sciences of the artificial. | 2022-07-01T15:13:16.426Z | 2022-06-28T00:00:00.000 | {
"year": 2022,
"sha1": "6d77d960d5f3730bb7ed6d219f2e55778b3cc773",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/14/13/7864/pdf?version=1656411021",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "75484e51586be59dba8c5345bb883085b4020982",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
261168651 | pes2o/s2orc | v3-fos-license | Addressing mood and fatigue in return-to-work programmes after stroke: a systematic review
Introduction Return-to-work is a key rehabilitation goal for many working aged stroke survivors, promoting an overall improvement of quality of life, social integration, and emotional wellbeing. Conversely, the failure to return-to-work contributes to a loss of identity, lowered self-esteem, social isolation, poorer quality of life and health outcomes. Return-to-work programmes have largely focused on physical and vocational rehabilitation, while neglecting to include mood and fatigue management. This is despite the knowledge that stroke results in changes in physical, cognitive, and emotional functioning, which all impact one’s ability to return to work. The purpose of this systematic review is to conduct a comprehensive and up-to-date search of randomised controlled trials (RCTs) of return-to-work programmes after stroke. The focus is especially on examining components of mood and fatigue if they were included, and to also report on the screening tools used to measure mood and fatigue. Method Searches were performed using 7 electronic databases for RCTs published in English from inception to 4 January 2023. A narrative synthesis of intervention design and outcomes was provided. Results The search yielded 5 RCTs that satisfied the selection criteria (n = 626). Three studies included components of mood and fatigue management in the intervention, of which 2 studies found a higher percentage of subjects in the intervention group returning to work compared to those in the control group. The remaining 2 studies which did not include components of mood and fatigue management did not find any significant differences in return-to-work rates between the intervention and control groups. Screening tools to assess mood or fatigue were included in 3 studies. Conclusion Overall, the findings demonstrated that mood and fatigue are poorly addressed in rehabilitation programmes aimed at improving return-to-work after stroke, despite being a significant predictor of return-to-work. There is limited and inconsistent use of mood and fatigue screening tools. The findings were generally able to provide guidance and recommendations in the development of a stroke rehabilitation programme for return-to-work, highlighting the need to include components addressing and measuring psychological support and fatigue management.
Introduction
Stroke is defined as "rapidly developing clinical signs of focal (or global) disturbance of cerebral function, lasting more than 24 h or leading to death, with no apparent cause other than that of vascular origin" (1). It is the second-leading cause of death and the third-leading cause of death and disability combined (as measured by disability-adjusted life-years), with rising trends and an overall global public health burden (2).
Approximately 10% of all strokes occur in individuals aged below 50 years (3,4). The hospitalisation rates of acute ischemic stroke among those aged 25 to 44 have increased considerably from 2000 to 2010 by 44% in the United States, despite an overall decline (5). Stroke incidence has been increasing in young adults in developing countries due to: improvements in stroke detection, an increase in vascular risk factors (e.g., alcohol consumption, smoking, hypercholesterolemia, obesity) in young adults, and potentially environmental factors (e.g., air pollution) (4).
As younger adults are responsible for supporting family and generating income, a key rehabilitation goal is their ability to return to work. Stroke survivors who return to paid work have shown improved psychosocial outcomes (6), subjective wellbeing, and life satisfaction (7,8). Conversely, the failure to return to work following stroke contributes not only to a loss of identity, lowered self-esteem, quality of life and poorer health outcomes for younger stroke survivors, but also increases socioeconomic burdens arising from the loss of work productivity (9,10).
Return-to-work programmes have largely focused on physical and vocational rehabilitation, while neglecting to include mood (e.g., depression, anxiety, stress) and fatigue management. This is despite the knowledge that stroke results in changes in physical, cognitive, and emotional functioning, which all impact one's ability to return to work. A review of the literature on return-to-work after stroke found that the rehabilitation process involves multiple predictors for its success, including physical factors (stroke severity, functional disability), social factors (ethnicity, income, gender, occupation) and cognitive/emotional factors (psychiatric disorders, fatigue, cognitive functioning) (11). A study found that psychiatric morbidity was a significant determinant of return to paid work after stroke; hence, the authors recommended appropriate management of the emotional consequences of stroke, suggesting that this would optimise recovery and enable successful return-to-work in working aged stroke survivors (12). Fatigue, which is closely related to mood, has also been found to be a significant barrier to return-to-work after stroke (13,14).
Research has shown the predictive effect of mood and fatigue on return-to-work after stroke, yet there is no published research examining mood and fatigue components in return-to-work intervention programmes after stroke.Prior systematic reviews on return-to-work programmes after stroke have largely found studies of poor quality, heterogeneity in methodology and limitations of inadequate search, highlighting the need to examine high quality randomised controlled trials (RCTs) (8,15).Hence, this study sought to examine solely RCTs to address this.The aim of this systematic review is to conduct a comprehensive and up-to-date search of RCTs of return-to-work programmes after stroke.The focus is especially on examining components of mood and fatigue, if they were included.
Methods
This review is registered with PROSPERO (registration number CRD42023388567). It is reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement (16).
Eligibility criteria
All studies yielded in response to the search terms were screened against the inclusion and exclusion criteria. The inclusion criteria were: RCTs published from inception to 4 January 2023; studies published in English; a population of stroke survivors (16-85 years old); and return-to-work (including paid work, unpaid work, volunteering, housework) as one of the primary outcomes of rehabilitation. Eligible studies included interventions of an RCT design, of any type and duration, against an active or passive control group; examples include cognitive training/rehabilitation, digital interventions (computerised, application-based), and vocational rehabilitation. Studies were not eligible if they included participants of other diagnostic groups or with mixed etiologies (e.g., traumatic brain injury/stroke mix), or if the interventions were not sufficiently detailed (e.g., description of intervention, specific components, dosage and frequency of sessions). Qualitative studies, previous systematic reviews and meta-analyses were excluded.
Information sources
Searches were conducted using electronic databases (Medline, PubMed, Embase, PsycInfo, Scopus, Web of Science, Cochrane Central Register of Controlled Trials) and manual search. The search terms used to identify relevant articles included "stroke," "cerebrovascular accident," "cerebral infarction," "brain attack" or "apoplexy"; "return to work," "employment" or "job"; "rehabilitation," "training," "programme," "intervention" or "protocol"; and "randomized controlled trial," "controlled clinical trial," "randomized," "trial," "groups" or "double blind." We combined the search terms using the Boolean operators "AND" and "OR." A manual search of the reference lists of relevant studies was also undertaken to identify studies that were overlooked in the electronic search.
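As an illustration of how these blocks of synonyms combine, the short Python sketch below assembles a generic Boolean query string from the term groups listed above; the exact field tags and syntax differ per database and are not reproduced here.

```python
# Sketch: assemble a generic Boolean search string from the term groups above.
# Database-specific field tags (e.g., MeSH terms, ti/ab filters) are omitted.
population = ["stroke", "cerebrovascular accident", "cerebral infarction",
              "brain attack", "apoplexy"]
outcome = ["return to work", "employment", "job"]
intervention = ["rehabilitation", "training", "programme", "intervention", "protocol"]
design = ["randomized controlled trial", "controlled clinical trial",
          "randomized", "trial", "groups", "double blind"]

def or_block(terms):
    # Quote multi-word phrases and join synonyms with OR
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

query = " AND ".join(or_block(b) for b in (population, outcome, intervention, design))
print(query)
```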
Search strategy
Full search strategies for all the databases are included in Supplementary Appendix A. The search was separately conducted and compared by 2 authors (N.Y.C.C and Z.Z.J.K).
Selection process
After the search and removal of duplicates, studies were screened using the eligibility criteria specified above. Titles that were irrelevant were excluded, and the abstracts and full texts of the remaining articles were then assessed against the eligibility criteria.
Data collection process
Relevant data were extracted and recorded in tables to illustrate the characteristics of the included studies (Table 1). Data extracted included characteristics of participants (age, gender, time since onset of stroke), mood and fatigue components in the intervention, mood/fatigue measures, and outcomes of the intervention.
Assessment of risk of bias in included studies
The selected studies were assessed for risk of bias using the revised version of the Cochrane Collaboration's tool for assessing risk of bias in randomised trials (RoB 2) (17). The effect of interest was the effect of assignment to the interventions at baseline, estimated by intention-to-treat analysis. The outcome domain of interest was participants' return-to-work. The domains in RoB 2 covered the following biases: the randomisation process, deviations from intended interventions, missing outcome data, outcome measurement, and selection of the reported results. The risk of bias assessment was performed independently by N.Y.C.C and Z.Z.J.K. Disagreement was resolved by discussion and reaching consensus. Signalling questions were used to determine the level of bias (high risk, some concerns, or low risk) assigned to each domain.
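To make the three-level scheme concrete, the sketch below encodes a simplified reading of how domain-level judgements roll up into an overall judgement; the full RoB 2 algorithm also uses signalling questions and can escalate multiple "some concerns" domains to "high," which is omitted here.

```python
DOMAINS = [
    "randomisation process",
    "deviations from intended interventions",
    "missing outcome data",
    "measurement of the outcome",
    "selection of the reported result",
]

def overall_risk(judgements):
    """Combine domain judgements ('low' | 'some concerns' | 'high')
    into an overall judgement: the most severe level present."""
    levels = set(judgements.values())
    if "high" in levels:
        return "high"
    if "some concerns" in levels:
        return "some concerns"
    return "low"

# Example: one domain raising some concerns, as described for Ghoshchi et al. (20)
example = dict.fromkeys(DOMAINS, "low")
example["deviations from intended interventions"] = "some concerns"
print(overall_risk(example))  # -> 'some concerns'
```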
Study selection
The search yielded a total of 1,654 articles. After removing duplicates, 1,297 articles were screened using titles and abstracts, of which 1,289 were discarded. The full text of the remaining 8 articles was reviewed and 3 articles were excluded, for reasons such as inclusion of a diagnostic group besides stroke (i.e., traumatic brain injury) and return-to-work being a secondary objective. Five articles were included in the final review (see Figure 1).
Study characteristics
Table 1 provides the characteristics of the included studies, which were all randomised controlled trials on stroke rehabilitation. Four of the five studies were recently published, between 2020 and 2022 (18-21). The total number of subjects included in these 5 studies is 626, with sample sizes ranging from 46 to 376. Only 1 study had a sample size larger than 100 (18), while the other 4 studies had fewer than 100 subjects (19-22). The mean age ranged between 44 and 66.7. Cain et al. (18) and Ntsiea et al. (22) defined "work" as paid formal employment, while Radford et al. (21) included paid work, unpaid (voluntary) work and full-time education; Ghoshchi et al. (20) and Mead et al. (19) did not provide definitions of "work." The studies generally included information on the diagnosis of stroke, age, gender ratio and employment at the time of stroke. Most studies provided additional information such as follow-up period and time since onset of stroke.
Risk of bias in included studies
Two papers were sub-studies of large randomised controlled trials, whose primary results and methodology were published in separate papers (18, 21). The original protocols were retrieved to assess risk of bias.

Table 2 depicts the risk of bias assessment. Three studies were judged to be at low risk of bias for all domains (18, 19, 22). The other two studies were assessed as raising some concerns. In Ghoshchi et al. (20), no information was provided regarding allocation concealment, participants' awareness of their assigned intervention, or whether there were any deviations from the intended intervention. However, given that all participants who were randomised were included in the analyses, the overall risk of bias was judged to raise some concerns. In Radford et al. (21), outcome data were missing for 13% of participants, with more control participants than intervention participants having missing data. Grant et al. (25) acknowledged that the results may be biased towards the intervention group, because more was known about the vocational status of the intervention group than that of the control group, raising high risk of bias in the domain of missing outcome data. However, as this was a feasibility RCT which aimed to evaluate the parameters (e.g., assessing willingness of participants to be randomised, measuring acceptability of the intervention) needed to deliver the stroke-specific vocational rehabilitation, the study's overall risk of bias was maintained at some concerns.
Intervention features
Descriptions of the return-to-work interventions were obtained. In two studies (18, 21), information was obtained from the original papers, which included details of the intervention protocol (23-25).
In the largest study (n = 376), Cain et al. (18) aimed to describe the characteristics of younger working-aged stroke individuals and identify the factors associated with return-to-work at 12 months post-stroke, by comparing early mobility-based rehabilitation to usual care. The intervention had three main components: (a) beginning within 24 h of stroke onset, (b) a focus on out-of-bed activity (i.e., sitting, standing, walking), and (c) at least three additional out-of-bed sessions compared to usual care. Trained physiotherapy and nursing staff assisted subjects to continue out-of-bed activity at a frequency set by a detailed intervention protocol, with the frequency adjusted according to the individual's recovery rate. The intervention duration was 14 days or until discharge from stroke-unit care, whichever was sooner. The control group received usual care, which was at the discretion of individual sites.
Ghoshchi et al. (20) aimed to assess return to work and quality of life after stroke, utilising technological treatment in their intervention.

Mead et al. (19) focused on addressing post-stroke fatigue. The intervention sessions included: introduction and psychoeducation on fatigue, goal setting and activity planning, progress assessment and goal modification, cognitive restructuring, dealing with setbacks and barriers, and making future plans. The focus was on encouraging participants to overcome fears of physical activity, increase physical activity using diary monitoring and activity scheduling, achieve balance between activity and rest, and address unhelpful thoughts related to fatigue and low mood. The intervention took place over a period of 12 weeks and comprised 6 phone calls of an hour each, followed by a booster call 2-4 weeks later. The control group received a leaflet from the national stroke association about post-stroke fatigue.
Ntsiea et al. (22) conducted a workplace intervention programme. The intervention was tailored to individuals' functional ability and workplace challenges. The intervention started with a work skill assessment for formulating individual treatment plans. Thereafter, sessions took place at the workplace, and included: (a) separate interviews with the subject and employer to identify perceived barriers to and motivators of return-to-work; (b) working on identified barriers and discussing a plan for reasonable workplace accommodations (including vocational counselling, coaching, emotional support, workplace adaptation, coping techniques, fatigue management); and (c) monitoring progress of the intervention programme and making adjustments as required. The intervention lasted 6 weeks, with sessions taking place once a week for 1 hour per session, except for work skill assessment sessions, which took a minimum of 4 hours. The control group received usual care, which included general activities provided by physiotherapists and occupational therapists to improve impairments and limitations in preparation for the return home.

Radford et al. (21) conducted an early stroke specific vocational rehabilitation (ESSVR) intervention. Individuals received a mean of 10 sessions, with sessions lasting approximately an hour. ESSVR included assessment of the individual, job analysis, provision of information, education of cognitive and executive functioning skills, advice and psychological support, goal setting, workplace assessment, and liaison (with family members, employer, other professionals and services). Psychological support was provided to participants, family members and employers to assist with adjustment following the stroke, work preparation, and the return-to-work process. This involved asking how participants felt during sessions, listening to their concerns, and providing encouragement and positive reinforcement as they tried to regain skills and confidence. Work preparation was individualised, including discussion of work options, simulations, and interventions (e.g., fatigue management). The control group received usual stroke rehabilitation provided by primary and secondary care, community, and social services, which included rehabilitation for activities of daily living.
Outcomes
Across most studies, the participants were employed at the time of their stroke (18, 20-22). Work status at the respective follow-up time points was reported in all 5 studies. Table 1 summarises the outcomes of the interventions.
Three studies included components of mood and fatigue management in the intervention (19, 21, 22), of which 2 studies found a higher percentage of subjects in the intervention group returning to work compared to those in the control group. In Ntsiea et al. (22), 60% of subjects in the intervention group had returned to work at 6 months follow-up, compared to 20% in the control group, a statistically significant difference (p < 0.001). Furthermore, subjects in the intervention group had 5.2 times higher odds of returning to work at 6 months follow-up than those in the control group (95% CI [1.80-15.0], p = 0.002) (22). In addition, fatigue was found to be one of the main reasons for subjects not returning to work (22). In Radford et al. (21), descriptive statistics showed a higher percentage of subjects in the intervention group (37.5%) reporting full-time work at 12 months follow-up, compared to subjects in the control group (11.8%). Of those who returned to work by 3 months post-stroke and sustained this until 12 months (n = 12), 8 subjects were from the intervention group, compared to 4 from the control group.
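To make the reported effect size concrete, the sketch below computes an unadjusted odds ratio with a Wald 95% confidence interval from a 2x2 table; the counts are hypothetical, chosen only to mimic the 60% vs 20% return-to-work proportions, so the result differs from the (possibly adjusted) OR of 5.2 reported by Ntsiea et al. (22).

```python
import math

# Hypothetical 2x2 table (returned / not returned) -- NOT the study data
a, b = 24, 16   # intervention group
c, d = 8, 32    # control group

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Wald method
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```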
The remaining 2 studies which did not include components of mood and fatigue management did not find any significant differences in return-to-work rates between the intervention and control groups (18,20).
Mood measures were only included in 3 studies (18, 19, 21), namely the Irritability Depression Anxiety Scale, Hospital Anxiety and Depression Scale, Patient Health Questionnaire-9, and Generalised Anxiety Disorder-7. A fatigue measure, namely the Fatigue Assessment Scale, was only included in 1 study (19). Although Ntsiea and colleagues' workplace intervention programme featured elements of mood and fatigue management, it did not include any mood or fatigue measures for screening or measurement of outcomes (22).
Discussion
Return-to-work is an important outcome and rehabilitation goal for many working aged stroke survivors, promoting an overall improvement of quality of life, wellbeing and life satisfaction. Stroke survivors should be well-supported in their reintegration into working life. It is evident that alongside the neurological and physical effects of a stroke, survivors also experience emotional and cognitive changes. These elements have to be addressed in a comprehensive return-to-work rehabilitation programme.
Return-to-work programmes have largely focused on physical and vocational rehabilitation, while neglecting to include mood and fatigue management. Research has shown the predictive effect of mood and fatigue on return-to-work after stroke (26-31), yet there is no published research examining mood and fatigue components in return-to-work intervention programmes after stroke. This systematic review comprised a comprehensive and up-to-date search of return-to-work programmes after stroke, specifically examining components of mood and fatigue management. The review concentrated on randomised controlled trials, addressing limitations of previous systematic reviews (8, 15). The included studies were also relatively recent, published between 2015 and 2022.
Depression affects a third of stroke survivors up to 15 years post-stroke and can continue to be present long after the stroke has settled (26). Depressive symptoms have been shown to have a predictive effect on return-to-work after stroke (27-29). Possible explanations for the increased prevalence of depression post-stroke include: depression being a risk factor for stroke; depression and stroke having common risk factors; depression being a psychological reaction to stroke or to outcomes of stroke (e.g., cognitive impairment, physical disability); and stroke having a direct pathophysiological effect on the brain that leads to neurochemical imbalances (26).
Post-stroke fatigue occurs in around half of stroke patients and can persist for over a year after the stroke (30). It is found to be worsened by stress and physical exercise and alleviated by rest (30). Risk factors for post-stroke fatigue include age, being female, being single, cognitive impairment, disability, posterior stroke, inactivity, being overweight, alcohol use, sleep apnea, and psychiatric issues (e.g., depression, anxiety) (30). A qualitative study with stroke survivors found that fatigue had a devastating influence on their ability to return-to-work (31).
Of the 3 studies which included mood and fatigue management in their intervention programmes (19, 21, 22), 2 found positive effects in their outcome measures (21, 22). However, both were underpowered, with small sample sizes. There were several similarities between the return-to-work rehabilitation programmes of Ntsiea et al. (22) and Radford et al. (21): (a) the programme was tailored to the individual's needs and work demands; (b) a focus on workplace preparation and skills training; (c) work site visits were conducted and employers were involved in discussion of the return-to-work plan; and (d) provision of fatigue management and psychological support. The remaining studies did not find any positive effects in their outcome measures. Although psychological support was not included in their intervention programme, Cain et al. (18) found fewer depressive traits to be statistically predictive of return-to-work post-stroke. These findings suggest that mood and fatigue management may be one of the elements that make a rehabilitation programme successful in promoting return-to-work after stroke; however, larger scale studies are still needed to support this finding. In this review, mood and/or fatigue measures were included in 3 studies (18, 19, 21). It is important to measure levels of mood symptoms and fatigue pre- and post-intervention after stroke for several reasons: (a) to screen for mood and fatigue symptoms that may hinder engagement in the intervention; (b) to examine whether mood and fatigue symptoms predict return-to-work outcomes; and (c) to examine whether mood and fatigue symptoms improved with the intervention. Studies should also assess physical recovery during the intervention process, as this is likely to influence mood and/or fatigue outcomes.
A systematic review of the psychometric properties and clinical utility of mood screening tools for stroke survivors examined 27 screening tools to identify the most suitable for clinical practice (32). The review identified the observer-rated Stroke Aphasic Depression Questionnaire - Hospital version as having met both psychometric and clinical utility criteria for screening of post-stroke depression. Self-rating scales identified were the Patient Health Questionnaire-9 and the Geriatric Depression Scale 15-item to screen for depression, and the Hospital Anxiety and Depression Scale to identify anxiety (32). It is prudent to include more than a single mood measure, to screen for both depression and anxiety.
With regards to assessing fatigue, Mead et al. (19) recommended the Fatigue Assessment Scale for use in clinical research. The Fatigue Assessment Scale is a short 10-item self-report scale evaluating symptoms of chronic fatigue, with a high internal consistency of 0.90 (33). It has been used in many diseases including stroke, and is the only fatigue measure with a cut-off score for stroke patients (≥24 indicating post-stroke fatigue) (34).
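A minimal sketch of how this cut-off could be applied in practice is shown below; note that the published FAS reverse-scores two of its ten items before summing, a detail omitted here for brevity.

```python
def fas_total(item_scores):
    """Sum the 10 FAS items (each typically scored 1-5).
    Reverse-scoring of the two positively-worded items is omitted here."""
    if len(item_scores) != 10:
        raise ValueError("the FAS has exactly 10 items")
    return sum(item_scores)

def post_stroke_fatigue(total):
    # Cut-off of >= 24 indicating post-stroke fatigue, as cited above (34)
    return total >= 24

print(post_stroke_fatigue(fas_total([3, 2, 3, 2, 3, 2, 3, 2, 3, 2])))  # 25 -> True
```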
While return-to-work has been established as an important rehabilitation goal after stroke with significant benefits, the limited number of RCTs yielded by this systematic search highlights a dearth of high quality research investigating return-to-work interventions post-stroke. It is encouraging that RCTs are gradually emerging in this research field, as seen in this review, where 4 of the 5 included studies were published recently. Still, more research is needed to understand the effect of mood and fatigue on return-to-work after stroke, and to guide the necessary components of a stroke rehabilitation programme for return-to-work.
Limitations
The limitations of this review include the exclusion of non-English studies and the small number of participants in 4 of the included studies. Future RCTs involving larger sample sizes are required. Two studies were also assessed as raising some concerns in the risk of bias assessment (20, 21), with Radford et al. (21) being a feasibility randomised controlled trial that was more descriptive in nature, with limited statistical testing. It was also generally difficult to compare the rehabilitation programmes and outcomes between the studies, given substantial heterogeneity in study designs, definitions of work, lengths of follow-up, and outcome measures.
Conclusion
Overall, the findings of this systematic review demonstrated that mood and fatigue are poorly addressed in rehabilitation programmes aimed at improving return-to-work after stroke, despite being significant predictors of return-to-work. There is limited and inconsistent use of mood and fatigue screening tools. The findings were generally able to provide guidance and recommendations for the development of a stroke rehabilitation programme for return-to-work, including being customised to individual needs, involving work site visits and employers, and using screening tools. Given the prevalence of mood dysfunction and fatigue post-stroke, it is imperative to include components addressing and measuring psychological support and fatigue management in all post-stroke rehabilitation programmes for improving return-to-work outcomes, to ensure that they are comprehensive and holistic.
FIGURE 1
PRISMA flow diagram (15).
TABLE 1
Characteristics of reviewed studies. Outcome notes from the table include: no differences in return-to-work between groups; no statistically significant differences between groups in six-month Fatigue Assessment Scale, Patient Health Questionnaire-9, and Generalised Anxiety Disorder-7 scores, with the small negative mean differences indicating that the intervention was slightly better than the control; no significant difference between the intervention and control group in the number of subjects who returned to work (p = 0.406); and regression analyses finding that the Modified Barthel Index score at follow-up significantly influenced return-to-work (OR = 7.5, 95% CI [2.04-27.59], p = 0.002).
TABLE 2
Risk of bias assessment. Low risk of bias; some concerns of bias; high risk of bias. For papers which were sub-studies, the original papers were retrieved and reviewed to evaluate the risk of bias. # Information was obtained from the original study protocol paper (23). ^ Information was obtained from the original study protocol papers (24, 25). | 2023-08-26T15:52:55.969Z | 2023-08-22T00:00:00.000 | {
"year": 2023,
"sha1": "3344382d59ddbda66cd94b1d46d778e058c82183",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2023.1145705/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "11dfbb95de6ecdd09c09b75a46c47c2182f4ea8f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
221841743 | pes2o/s2orc | v3-fos-license | A predictable conserved DNA base composition signature defines human core DNA replication origins
DNA replication initiates from multiple genomic locations called replication origins. In metazoa, DNA sequence elements involved in origin specification remain elusive. Here, we examine pluripotent, primary, differentiating, and immortalized human cells, and demonstrate that a class of origins, termed core origins, is shared by different cell types and host ~80% of all DNA replication initiation events in any cell population. We detect a shared G-rich DNA sequence signature that coincides with most core origins in both human and mouse genomes. Transcription and G-rich elements can independently associate with replication origin activity. Computational algorithms show that core origins can be predicted, based solely on DNA sequence patterns but not on consensus motifs. Our results demonstrate that, despite an attributed stochasticity, core origins are chosen from a limited pool of genomic regions. Immortalization through oncogenic gene expression, but not normal cellular differentiation, results in increased stochastic firing from heterochromatin and decreased origin density at TAD borders.
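The figure legends below repeatedly compare observed origin overlaps against the chance overlap of shuffled control regions using a Chi-square Goodness-of-Fit test (run in R in the original work). An equivalent sketch in Python, with purely hypothetical counts, is:

```python
from scipy.stats import chisquare

# Hypothetical counts: overlaps among n origins vs the shuffled-control expectation
n_origins = 10000
observed_overlap = 4200
expected_overlap = 1500   # mean overlap of randomly shuffled control regions

obs = [observed_overlap, n_origins - observed_overlap]
exp = [expected_overlap, n_origins - expected_overlap]
stat, p = chisquare(f_obs=obs, f_exp=exp)
print(f"chi-square = {stat:.1f}, p = {p:.3g}")
```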
Higher activity origins display higher ubiquity across replicates and cell types.
(a) Euler diagrams showing the fraction of origins shared by three immortalized cell lines.
(b) Black dots show the percentage of origins in each quantile that overlap origins detected in a previous SNS-seq study [1]. Grey dots represent the expected chance overlaps of randomly shuffled control genomic regions of equal size and number as our origins. P-values were obtained by Chi-square Goodness-of-Fit test using observed and expected values for overlap.
(c) As in (b), for regions identified by INI-seq [2]. Red dots depict the percentage of early-firing origins identified by INI-seq [2], an in vitro method that identifies the earliest-firing origins.
(d) As in (b), for OK-seq regions [3].
(e) Tightly clustered core origins are more likely to be identified by the alternative origin mapping method OK-seq [3]. Bar plot showing the percentage of tightly clustered core origins (in black) that overlap with DNA replication initiation zones identified by OK-seq. Dotted bars represent the expected chance overlap of randomly shuffled control genomic regions of equal size and number as the OK-seq regions. P-values were obtained by Chi-square Goodness-of-Fit test using observed and expected values for overlap.
(f) Core origins overlap with binding sites of the pre-RC components ORC1 and ORC2. The graph shows the percentage of origins in each quantile that overlap with regions bound by ORC1 or ORC2 (red) or ORC2 (blue) within ±2 kb. Paler coloured dots represent the expected chance overlap of randomly shuffled control genomic regions of equal size and number as our origins.
(g) ORC2 binding sites that occupy larger genomic regions are more likely to be associated with DNA replication origins. The pie chart represents the percentage of ORC2-bound sites in the genome that intersect a core or a stochastic origin (within ±2 kb). The left panel represents ORC2-bound regions longer than 1 kb, and the right panel represents ORC2-bound regions longer than 2 kb. P-values were obtained using the Chi-square Goodness-of-Fit test in R with observed and expected overlap values.
Table 2. The y-axis is in arbitrary units, representing the importance assigned to each variable by each algorithm.
(d) Schematic summary of the hematopoietic cell (HC) differentiation protocol. HC (CD34+) were isolated from three independent human cord blood donors and expanded in three independent cultures for 6-7 days. Then, erythropoietin (+EPO) was added to the culture medium (Day 0) for 6 days, and cells were harvested at day 0, day 3 and day 6 for SNS-seq and RNA-seq analysis.
(e) Origins with increased activity after erythrocyte differentiation (day 6) are in genomic regions that host genes related to erythrocyte differentiation. GREAT analysis was performed on the genomic coordinates of the origins that were significantly upregulated upon EPO treatment (day 0 vs day 6). Origin regions were associated with genes using the single-gene (SG) rule of GREAT. Only one category was statistically significant at binomial p < 0.05, and it is plotted here.
(b) Origin G-rich sequence-specificity is lost upon immortalization. In immortalized cells, origins that are down-regulated (black bars) in comparison to the parental cell line (HMEC) tend to overlap with CpGi (left panel) or G4 (right panel) elements. In contrast, origins upregulated upon immortalization (in white bars) have less than expected overlaps with CpGi or G4 elements. For reference, the dotted line shows the percentage of all origins that overlap with a CpGi (left panels) or G4 (right panels) are shown.
(c) Same as in (b), but for core origins that are up- or down-regulated upon immortalization. For reference, the dotted line shows the percentage of core origins that overlap with a CpGi (left panels) or G4 (right panels).
(d) Mouse core (left panel) and stochastic (right panel) origin density across topologically associating domains (TADs) of mouse embryonic stem cells [6]. Origin density along TAD domains (blue) or equal-size control regions (grey) was computed as follows: TADs were divided into 100 equal bins (slices) and the origin density in each bin was calculated as the number of origins per Mb. The p-value was calculated using the non-parametric Wilcoxon test in R.
(e) Core origin density across TADs (determined in hESC H1) that are active in hESC H9 (left panel), HC (middle panel) or HMEC (right panel). Origin density along TADs was computed as in (d).
(f) Core origins coincide with putative regulatory elements. Plot shows the overlap of origins (Q1-Q10) with human genome regions that have putative regulatory functions (as defined by ReMap, >10 peaks). | 2020-09-23T13:06:08.155Z | 2020-09-21T00:00:00.000 | {
"year": 2020,
"sha1": "36e53a23b131d55042cd462628a5899a6e27db23",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-020-18527-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3ce9df90adeb445a5d74912ca764b61197f2453a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
17384973 | pes2o/s2orc | v3-fos-license | Matched asymptotic solutions for the steady banded flow of the diffusive Johnson-Segalman model in various geometries
We present analytic solutions for steady flow of the Johnson-Segalman (JS) model with a diffusion term in various geometries and under controlled strain rate conditions, using matched asymptotic expansions. The diffusion term represents a singular perturbation that lifts the continuous degeneracy of stable, banded, steady states present in the absence of diffusion. We show that the stable steady flow solutions in Poiseuille and cylindrical Couette geometries always have two bands. For Couette flow and small curvature, two different banded solutions are possible, differing by the spatial sequence of the two bands.
Introduction
Many experimental results confirm the possibility of shear banding, i.e. the separation of bands of different shear rates and apparent viscosities in the flow of various systems of wormlike micelles (aqueous solutions of surfactants and salt [1][2][3][4][5] or organic solvent solutions of metallic complexes [6]), or lyotropic liquid crystals [7]. This type of behaviour can be explained as being the result of a constitutive instability, suitably described by non-monotonic flow curves such as those arising from the Johnson-Segalman (JS) [8] or Doi-Edwards models [9]. It is possible that similar constitutive instabilities are responsible for spurt and extrudate distortions of polymer melts [10].
The steady banded flow solutions of the JS model were previously studied in planar [11,12], Poiseuille [13-17], and cylindrical Couette geometries [18]. In all these geometries banded flow solutions have a continuous degeneracy [16,17,19], resulting from the indeterminacy of the positions of the interfaces and of the number of bands. The stress value can be kinetically selected [20,12,18,16,17], and it may differ from the top jump prediction of Refs. [13,14]; moreover, the selected value depends on the flow history and on the imposed shear rate [19]. This is in contrast to experiments on wormlike micelles that show a well defined stress plateau in the flow curves: the total stress is history independent and does not change with the shear rate. Furthermore, the evolution of the stress during transient flow may be rather long, suggesting the slow migration of the interface between bands to an equilibrium position [2].
Recently [19,21,22] we showed that, by supplementing the JS model with a stress diffusion term, one can lift the degeneracy of the steady flow and also account for the slow migration of the interface. The origin of this diffusion term can be justified in terms of the Brownian movement of polymer chains in an inhomogeneous stress field and can be deduced from the Fokker-Planck equation for a system of dumbbells [23]. There is no calculation, at present, for such a term in the wormlike micelle system. Similar results have been obtained by Refs. [24,25] by introducing phenomenological diffusion terms in toy constitutive models. Although non-local terms are not important in homogeneous flow, in banded flow a realistic description of interfaces as strongly inhomogeneous regions requires such terms. This necessity has been noted by many authors [23,[26][27][28].
Recent experiments [29] on wormlike micelles in pipe (Poiseuille) and cylindrical Couette flow geometries, or in cone and plate [30], using NMR microscopy, found shear rate profiles with two or three bands. Birefringence measurements in the Couette geometry [31,32] suggest the existence of only two bands, although the relation between birefringence and shear rate is hard to establish because it depends on microstructure. Most experiments on shear banding are performed in geometries imposing an inhomogeneous total stress (Couette, pipe flow, cone and plate). Typically a stress quantity (e.g. torque, pressure drop, or shear stress) is measured as a function of a rate of flow quantity (e.g. mean strain rate, flux through a pipe, wall speed), and an engineering "flow curve" is constructed. Under inhomogeneous flow conditions the flow curves are not simply the local constitutive relation of the fluid. It is therefore important to assess theoretically how many bands can occur and what the flow curves are for a given constitutive model in various geometries.
We present elsewhere numerical computations of flow curves in Couette flow [19] for the JS-d model. Here we show how matched asymptotics techniques can be used to obtain analytic results on the number of bands, interface width, and flow curves in various geometries. The application of these techniques to the JS-d model represents only an example, and the same method can be easily adapted to other constitutive models.
The structure of the paper is the following. In Section 2 we introduce the flow equations in various geometries. The matched asymptotic solutions of these equations are presented in Section 3. In Section 4 we discuss how the inner layer solution controls the history independence and stability of the flow. We apply our results in Section 5 to predict the shapes of flow curves and the interface width. In Section 6 we discuss the results and suggest possible extensions.
2 General equations
The dynamics of the diffusive Johnson-Segalman fluid is described by two equations. The first is the momentum balance:

ρ (∂t + v·∇) v = ∇·T, (2.1)

where ρ is the fluid density, v is the velocity field, and T = −p I + 2ηD + Σ is the total stress tensor, where the pressure p is determined by incompressibility, Σ is the viscoelastic stress carried by the polymer strands, η is the "solvent viscosity" and D is the symmetric part of the velocity gradient. The second equation is a constitutive relation for the polymer stress Σ, which we take to be the JS constitutive model [8] with an added diffusion term [21,19]:

Σ̊ = (2µ/τ) D − Σ/τ + D ∇²Σ, (2.2)

where µ is the "polymer" viscosity, τ is the relaxation time, and D (in the last term) is the diffusion coefficient.

The time evolution of Σ is governed by the Gordon-Schowalter (GS) time derivative [33],

Σ̊ ≡ (∂t + v·∇) Σ − (aD + Ω)·Σ − Σ·(aD − Ω), (2.3)

where a is the "slip parameter" and Ω is the antisymmetric part of the velocity gradient. For a = 1 there is no slip and the GS derivative becomes the upper convective derivative. For a = 0 (total slip) the fluid stress is not transmitted to the polymer strands and these can only be oriented by the flow: the GS derivative becomes the corotational derivative. In this work we consider |a| < 1.
We study the case of a fixed average shear rate, which represents a global constraint on the velocity gradient. We discuss several shear geometries: planar shear, slit and pipe Poiseuille flow, and cylindrical Couette flow between concentric cylinders. We consider parallel stream lines in all geometries and leave the possibility of secondary flow for further investigation.
Planar shear and Poiseuille flow
Parallel stream line flow corresponds to v = v(y)e x , for planar shear between plates at positions y = 0, L and for Poiseuille flow through a rectangular slit of infinite length and walls at positions y = ±L.
We introduce dimensionless variables, ŷ = y/L, t̂ = t/τ, D̂ = Dτ/L², with shear rates, velocities and stresses rescaled using τ√(1−a²) and µ (γ̂ = τ√(1−a²)γ̇, Ŝ = τ√(1−a²)Σxy/µ, and v̂, V̂ correspondingly); V = v(0) is the velocity at y = 0 (the maximum absolute value), having opposite sign to the shear rate and stress. With these definitions V̂ has the same sign as the reduced shear rate and stress. In terms of these quantities the momentum balance reads

α ∂t̂ v̂ = ∂ŷ (Ŝ + ǫγ̂) − f. (2.4)

The dimensionless pressure gradient f vanishes in planar shear, ǫ = η/µ is the retardation parameter (viscosity ratio), and α = ρL²/(µτ) is the ratio of the Reynolds and the Weissenberg numbers. In shear-banding experiments (e.g. for wormlike micelles) α ∼ 10⁻⁴−10⁻³, and therefore in this case one can neglect inertia and replace Eq. (2.4) by its creeping flow limit:

∂ŷ (Ŝ + ǫγ̂) = f. (2.5)

The solutions of Eq. (2.5) are:

ǫγ̂ + Ŝ = σ̂ = const (planar shear), (2.6a)
ǫγ̂ + Ŝ = σ̂(ŷ) = f ŷ (slit flow), (2.6b)

where σ̂ := ǫγ̂ + Ŝ is the total stress.
The constitutive equations read

∂t̂ Ŝ = D̂ ∂ŷ² Ŝ + γ̂ Ŵ − Ŝ, (2.7a)
∂t̂ Ŵ = D̂ ∂ŷ² Ŵ − γ̂ Ŝ + 1 − Ŵ, (2.7b)

where Ŵ = 1 + (τ/2µ)[(1+a)Σyy − (1−a)Σxx] is the rescaled combination of normal stresses coupled to Ŝ by the slip dynamics. The shear rate condition is:

V̂ = ∫₀¹ γ̂(ŷ) dŷ. (2.8)
Cylindrical Couette flow
In the Couette geometry with concentric cylinders at radii R₁ < R₂ and assuming circular stream lines we have v = v(r)e_θ. The momentum balance in cylindrical coordinates reads

α ∂t̂ v̂ = (1/r̂²) ∂r̂ [r̂² (Ŝ + ǫγ̂)],

where γ̂ = ∂r̂v̂ − v̂/r̂ is the shear rate and α = ρR₁²/(µτ) has the same significance as in planar shear. Creeping flow corresponds to:

ǫγ̂ + Ŝ = σ̂(r̂) = Γ̂/r̂². (2.9)

The constitutive equations (2.10) have the same reaction terms as in planar shear, now for three independent stress components, with the diffusion terms built on the Laplacian ∆r̂ = ∂r̂² + (1/r̂)∂r̂ and with additional curvature couplings. The shear rate condition is:

V̂ = ∫₁^{1+p} γ̂(r̂) dr̂/r̂, (2.11)

where p = (R₂ − R₁)/R₁ measures the curvature of the cell. All the above variables are dimensionless and have the same sign, with the following definitions: r̂ = r/R₁, t̂ = t/τ, D̂ = Dτ/R₁², shear rates and stresses rescaled using τ√(1−a²) and µ as before, and Γ̂ = τ√(1−a²) Γ/(2πµR₁²). Γ is the torque per unit length applied at the inner cylinder, and has the same sign as V, the velocity of the inner cylinder, being opposite to the shear rate and stress. With these definitions Γ̂ and V̂ have the same sign as the shear rate and stress.
Pipe flow
Like for Couette flow we use cylindrical coordinates, with the z axis parallel to the stream lines, v = v(r)e_z. The pipe is an infinite length cylinder of radius R. The momentum balance reads

α ∂t̂ v̂ = (1/r̂) ∂r̂ [r̂ (Ŝ + ǫγ̂)] − 2f,

where γ̂ = ∂r̂v̂ is the shear rate. Creeping flow corresponds to a total stress that is linear in r̂, like for the slit geometry:

ǫγ̂ + Ŝ = σ̂(r̂) = f r̂. (2.12)

The shear rate condition is:

V̂ = ∫₀¹ γ̂(r̂) dr̂, (2.13)

where V is the maximum velocity, on the axis. The constitutive equations (2.14) have the same structure as in the Couette geometry. We have used the following rescalings: D̂ = Dτ/R² and r̂ = r/R, with the remaining variables rescaled as before.
3 Matched asymptotic solution for the steady flow
The flow is described by the system of nonlinear, parabolic partial differential equations of the reaction-diffusion type, Eqs. (2.7), (2.10), or (2.14). The nonlinear reaction terms are due to the polymer stress relaxation. The nonaffine deformation (slip) is essential for the nonlinearity, and all rescalings are possible for |a| < 1.
We consider small D̂, for which the diffusion terms represent a small perturbation to the steady flow equations. Nevertheless, this perturbation is singular and the solution can not be represented as a uniformly convergent power series in D̂. The usual technique applying to this situation is asymptotic matching [34]. The solution is divided into an inner layer solution around the interface, where the diffusion terms are important, and outer layer solutions farther from the interface than its width (which scales like D̂^{1/2}), where the diffusion terms are exponentially small and can be neglected.
The outer solutions obey the algebraic system obtained from Eqs. (2.7), (2.10) or (2.14) by dropping the time and diffusion terms (Eqs. (3.1)); in planar variables,

Ŝ = γ̂/(1 + γ̂²), Ŵ = 1/(1 + γ̂²), σ̂ = ǫγ̂ + Ŝ, (3.1)

whose multivalued solutions γ̂±(σ̂) define the local constitutive curve discussed in Refs. [13,14]. The dependence of σ_bottom and σ_top on ǫ can be obtained from Eqs. (3.1), and is represented in Fig. 1b.
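The window (σ_bottom(ǫ), σ_top(ǫ)) can be computed directly from the flow curve σ̂(γ̂) = ǫγ̂ + γ̂/(1 + γ̂²) implied by Eqs. (3.1); the short sketch below locates the two extrema and makes the ǫ < 1/8 criterion explicit.

```python
import numpy as np

def sigma(gd, eps):
    """Homogeneous flow curve implied by Eqs. (3.1): total stress vs shear rate."""
    return eps * gd + gd / (1.0 + gd**2)

def plateau_window(eps):
    """Return (sigma_top, sigma_bottom); the extrema exist only for eps < 1/8."""
    if eps >= 0.125:
        raise ValueError("the local constitutive curve is monotonic for eps >= 1/8")
    # d(sigma)/d(gd) = 0  <=>  eps*u**2 + (2*eps - 1)*u + (eps + 1) = 0, u = gd**2;
    # the discriminant is 1 - 8*eps, which is the origin of the eps < 1/8 criterion
    u1, u2 = np.roots([eps, 2.0 * eps - 1.0, eps + 1.0])
    g_max, g_min = sorted(np.sqrt(np.real([u1, u2])))   # local max first, then min
    return sigma(g_max, eps), sigma(g_min, eps)

print(plateau_window(0.05))   # approximately (0.55, 0.44)
```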
Inner layer solution
In any of the Eqs. (2.6) and (2.7), or Eqs. (2.9) and (2.10), or Eqs. (2.12) and (2.14), we impose stationarity and change variables to the stretched coordinate

r̃ = (r̂ − r̂*)/D̂^{1/2},

where r̂* is the position of the interface. The stress is expanded about the interface position, σ̂ = σ̂* + D̂^{1/2} r̃ ∂r̂σ̂ + O(D̂), where σ̂* = σ̂(r̂*) is the value of the stress at the interface. By neglecting terms of order D̂^{1/2} and higher we obtain, for all studied geometries, the inner layer equations (3.3): second-order ordinary differential equations in r̃ for the stress components, with the reaction terms of the constitutive model, the curvature terms dropped, and the total stress frozen at σ̂*. For pipe flow we may also use X̂in = Ŵin, which together with Eq. (3.3c) implies (Σ̂θθ)in = 0.
We refer here only to single interface solutions. More complex solutions with an arbitrary number of interfaces can be treated in the same way. There are two types of single interface solutions, differing by the sequence of bands. Let us refer to the solution with the high shear rate band at the left of the interface (towards smaller r̂) as (+−), and to the solution with the high shear rate band at the right of the interface (towards larger r̂) as (−+).
The inner and outer solutions should match at the interface (Prandtl matching principle, Ref. [34]), leading for the sequence (+−) to

(Ŝin, Ŵin) → (Ŝ₊(σ̂*), Ŵ₊(σ̂*)) for r̃ → −∞, (Ŝin, Ŵin) → (Ŝ₋(σ̂*), Ŵ₋(σ̂*)) for r̃ → +∞, (3.4a)

or for the sequence (−+) to

(Ŝin, Ŵin) → (Ŝ₋(σ̂*), Ŵ₋(σ̂*)) for r̃ → −∞, (Ŝin, Ŵin) → (Ŝ₊(σ̂*), Ŵ₊(σ̂*)) for r̃ → +∞. (3.4b)
The flow curve expresses the relation between measurable quantities: velocity of the inner cylinderV and torqueΓ in the Couette geometry, maximum velocityV and pressure gradient f in Poiseuille flow, gap velocityV and shear stressσ in planar shear.
In order to obtain the flow curves we use the shear rate conditions, Eqs. (2.8), (2.11), (2.13), and the relations (2.6b), (2.9), (2.12) between the total stressσ and the positionr. In the banded regime, the shape of the flow curve depends on the number of interfaces and on the sequence of bands. We shall show in the next section that in the presence of stress diffusion steady flow always corresponds or can be reduced by symmetry to a single interface.
In Poiseuille flow, using Eqs. (2.6b), (2.8) or (2.12), (2.13), a single interface flow curve with the high shear rate band near the wall (right of the interface) is described by

V̂ = ∫₀^{r̂*} γ̂₋(f r̂) dr̂ + ∫_{r̂*}^{1} γ̂₊(f r̂) dr̂, σ̂* = f r̂*, (5.1)

where σ̂* = σ̂(r̂*) is the value of the total stress at the position of the interface. The inverse sequence of bands is prohibited in Poiseuille flow, because the high shear rate band can not exist at r̂ = 0, where σ̂ = 0.
In Couette flow, using Eqs. (2.9), (2.11), a single interface flow curve with the high shear rate band near the inner cylinder (left of the interface) is given by

V̂ = ∫₁^{r̂*} γ̂₊(Γ̂/r̂²) dr̂/r̂ + ∫_{r̂*}^{1+p} γ̂₋(Γ̂/r̂²) dr̂/r̂, r̂* = (Γ̂/σ̂*)^{1/2}. (5.3)

The flow curve for the inverse sequence of bands (high shear rate band near the outer cylinder) is given by

V̂ = ∫₁^{r̂*} γ̂₋(Γ̂/r̂²) dr̂/r̂ + ∫_{r̂*}^{1+p} γ̂₊(Γ̂/r̂²) dr̂/r̂. (5.4)

Planar flow represents a special case because the total stress is constant throughout the gap (Eq. (2.6a)) and thus

V̂ = (1 − ν) γ̂₋(σ̂) + ν γ̂₊(σ̂),

where ν is the proportion of high shear rate band in the flow. As we shall argue below, the flow curves above are well defined because the value σ̂* of the shear stress at the interface is a geometry-independent constant.
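Under the same assumptions as before, the banded branch (5.3) can be evaluated by numerical quadrature; the sketch below computes the gap velocity for one illustrative torque (the values of ǫ, p, σ̂_sel and the root brackets are illustrative choices, not values from the text).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

eps, p = 0.05, 0.05           # retardation parameter and curvature (illustrative)
sigma_sel = 0.50              # illustrative selected stress

def sigma_of_gd(gd):
    # homogeneous flow curve implied by Eqs. (3.1)
    return eps * gd + gd / (1.0 + gd**2)

def gd_branch(s, lo, hi):
    """Invert the homogeneous flow curve on one branch at total stress s."""
    return brentq(lambda g: sigma_of_gd(g) - s, lo, hi)

def couette_V(Gamma):
    """Gap velocity on the (+-) banded branch at torque Gamma, via Eq. (5.3)."""
    r_star = np.sqrt(Gamma / sigma_sel)        # interface where sigma = sigma_sel
    r_star = min(max(r_star, 1.0), 1.0 + p)    # keep the interface inside the gap
    high = quad(lambda r: gd_branch(Gamma / r**2, 4.2, 200.0) / r, 1.0, r_star)[0]
    low = quad(lambda r: gd_branch(Gamma / r**2, 1e-9, 1.12) / r, r_star, 1.0 + p)[0]
    return high + low

print(couette_V(0.505))   # torque slightly above sigma_sel: a two-banded state
```

Because σ̂ = Γ̂/r̂² decreases across the gap, increasing Γ̂ moves r̂* outwards and grows the high shear rate band, tracing out the steep "plateau" discussed below.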
4 Uniqueness and stability of the steady flow
An existence conjecture for the inner layer solution
The inner layer solution controls the existence and stability of steady banded flows. Because we are dealing with singular perturbations, this control can be performed even by a very thin interface (very small D̂). Before stating an important property of the inner layer solution, we note that steady banded flow is a particular case of a moving interface solution r̂*(t̂). We may look for matched asymptotic solutions in this case as well, and one may show, by using the change of variable r̃ = (r̂ − r̂*(t̂))/D̂^{1/2}, that the moving interface inner layer solution should obey the travelling-wave form of Eqs. (3.3), with an advective term c ∂r̃ added to each equation, where

c = D̂^{-1/2} dr̂*/dt̂

is the rescaled velocity of the interface.
Conjecture. There is a unique value σ̂* = σ̂_sel ∈ (σ_bottom(ǫ), σ_top(ǫ)) of the total shear stress at the position of the interface, such that a steady inner layer solution for the banded flow of the JS-d model (Eqs. (3.3)) obeying the matching conditions (3.4) exists; moreover, the interface velocity is a smooth function c(σ̂*) that vanishes only at σ̂* = σ̂_sel, with dc/dσ̂* > 0 for the sequence (+−).

Remark 2. A stationary inner layer solution which obeys the Prandtl matching principle represents a heteroclinic orbit of the 4D dynamical system (Ŝ, ∂r̃Ŝ, Ŵ, ∂r̃Ŵ), connecting the hyperbolic fixed points (Ŝ₊(σ̂_sel), 0, Ŵ₊(σ̂_sel), 0) and (Ŝ₋(σ̂_sel), 0, Ŵ₋(σ̂_sel), 0). As discussed in [19,21], and as is valid rather generally in these cases, the value of the parameter σ̂* = σ̂_sel allowing this solution is isolated (actually unique in the interval (σ_bottom(ǫ), σ_top(ǫ))). The condition on the sign of the derivative dc(σ̂*)/dσ̂* means that an increase of the stress at the interface above σ̂_sel produces a movement of the interface that decreases the size of the low shear rate band. Conversely, a decrease of the stress at the interface below σ̂_sel produces a displacement of the interface that increases the size of the low shear rate band. The same property can be found for other reaction-diffusion systems (e.g. the FitzHugh-Nagumo model of nerve conduction in biophysics) and has been occasionally referred to as the "dominance principle" [35,36].
Remark 3. We have numerically checked this conjecture for various values of the parameter ǫ of the JS-d model (see next Section). All of these features are easy to prove for toy models that lead to integrable dynamical systems [22].
Numerical test of the conjecture
In order to test the conjecture for the JS-d model, and to determine the relation between c and σ̂* for different values of the unique parameter ǫ of the model in reduced variables, we have numerically integrated the system of partial differential equations (4.4): the planar constitutive equations (2.7), written in a frame co-moving with the interface at velocity c (i.e. with advective terms c ∂r̂ added), with an imposed total stress profile σ(r̂) that varies linearly across a computational gap of width 2L; the profile is conveniently chosen to scan the interval [σ_bottom(ǫ), σ_top(ǫ)]. Provided that the length L is taken much larger than the interface width, i.e. √β ≪ L, the stationary solution is an interface in a position corresponding to a value of the stress equal to σ* = σ*(c). β can be interpreted as the diffusion coefficient and σ(r) as the total stress inside the gap of width 2L, although the applicability of the method does not rely on its connection to a concrete physical problem, but on its formal analogy to Eqs. (2.5) and (2.7) in the asymptotic (L/√β → ∞) limit.
In this way we determine σ_sel(ǫ) = σ*(c = 0; ǫ) and dc/dσ*(σ_sel; ǫ) = [dσ*/dc]⁻¹(c = 0; ǫ). The first dependence will be used in calculating the flow curves in Section 5, while the sign of the latter constitutes a proof of the dominance principle.
In order to determine precisely the position of the stationary interface, and hence the selected stress, we have solved Eqs. (4.4) until a stationary profile was reached and read off the stress at the interface position. This algorithm is more convenient than the one used in [21], which considers a constant stress σ throughout the gap (planar shear) and tunes "by hand" the value of σ to obtain a stationary interface. Here the tuning of the stress is automatically obtained while integrating Eqs. (4.4). The selected value of the stress is compared with σ_bottom and σ_top in Fig. 1b and agrees with the result of the method in [21].
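A minimal, self-contained sketch of such a relaxation computation for the stationary case (c = 0) is given below, using the planar system (2.7) with an imposed linear stress ramp; the grid, time step, ramp bounds and initial condition are illustrative choices.

```python
import numpy as np

eps, beta = 0.05, 1e-4           # retardation parameter and diffusion coefficient
N, L = 2000, 1.0                 # grid points and half-width of the gap
r = np.linspace(-L, L, N)
dr = r[1] - r[0]
dt = 0.2 * dr**2 / beta          # explicit Euler step within the diffusive limit

# Linear total-stress ramp kept inside (sigma_bottom, sigma_top) for eps = 0.05,
# so that both homogeneous branches exist everywhere in the gap
sigma = 0.495 + 0.055 * r / L

# Initial condition: low shear rate band where the stress is low, high where high
gd0 = np.where(r < 0.0, 0.5, 5.0)
S = sigma - eps * gd0            # polymer stress consistent with the total stress
W = 1.0 / (1.0 + gd0**2)

def lap(f):
    """1D Laplacian with zero-flux (Neumann) boundaries."""
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dr**2
    g[0], g[-1] = g[1], g[-2]
    return g

for _ in range(200_000):
    gd = (sigma - S) / eps       # local shear rate from the imposed total stress
    S_new = S + dt * (beta * lap(S) - S + gd * W)
    W_new = W + dt * (beta * lap(W) + 1.0 - W - gd * S)
    S, W = S_new, W_new

gd = (sigma - S) / eps
i_star = int(np.argmax(np.abs(np.gradient(gd))))   # steepest point = interface
print("sigma_sel ~", sigma[i_star])
```

Adding the advective term c ∂r̂ to both equations and repeating the integration for several values of c yields the function σ*(c), and hence dc/dσ*.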
In order to test the dominance principle we have calculated dc/dσ*. Changing r̂ → −r̂ and c → −c leaves Eqs. (4.4) invariant. Thus if c₊₋(σ*) and c₋₊(σ*) are the velocities of the interface with the high shear rate band at its left and right, respectively, then c₋₊(σ*) = −c₊₋(σ*). Hence, our check that dc₊₋/dσ* > 0, as shown in Fig. 2c, is a sufficient test of the dominance principle.
Number of bands and stability of banded flow
The numerically tested conjecture implies that the interface is stationary at a position inside the gap where the value of the stress is σ̂_sel. This position is unique (r̂_sel = (Γ̂/σ̂_sel)^{1/2} for Couette flow, r̂_sel = σ̂_sel/f for pipe flow) because in steady flow the total stress depends monotonically on the radius. Therefore, in Couette and pipe geometries banded flow has only one interface separating two bands. For Poiseuille slit flow, the dependence of the total stress on position is symmetric with respect to the middle of the slit, so there are two interfaces at r̂_sel = ±σ̂_sel/f. For both Poiseuille slit and pipe flow the sequence of bands is unique, with a central low shear rate band and outer high shear rate bands (or an annular band in pipe flow). In Couette flow, two sequences of bands are possible under certain conditions that we shall discuss in the next section. There are no restrictions on the sequence and order of the bands in planar flow because the total stress is constant throughout the gap; interfaces can exist anywhere, and any number and sequence of bands is possible, provided that the bands are much thicker than the interface width.
The stability of banded steady flow in presence of diffusion terms was tested in the Couette geometry by numerically evolving the dynamical equations [19] for both normal (high shear rate at the inner cylinder) and inverted sequences of bands. We have also proved (Appendix A) the stability of different types of banded solutions in Poiseuille and Couette flows with respect to perturbations of the position of the interface. A linear stability analysis for 1D perturbations around a banded profile, as well as the stability with respect to surface waves in 2D, involves lengthy functional analysis and will not be discussed here. Linear stability of banded flow for 1D perturbations in slit Poiseuille geometry was shown in Ref. [24] for a simplified model with only one order parameter. For the JS model (without diffusion) linear stability of the banded flow with any finite number of bands was proven in [15] for 1D perturbations in Poiseuille geometry, and possible instabilities induced by surface waves were discussed in [11].
Although linearly stable, the inverted sequence of bands is metastable, because the interfaces separating a nucleus of low shear rate band within the high shear rate band find themselves in a region of stress lower than σ̂_sel and, according to the dominance principle, acquire non-zero velocities oriented such that the nucleus grows [22]. The same is true for a nucleus of high shear rate band in the region where the total stress is higher than σ̂_sel.

5 Flow curves

In Poiseuille and Couette geometries, any banded steady flow profile can be conveniently represented as a segment on the local constitutive curve γ̂(σ̂), the solution of Eqs. (3.1) for variable stress σ̂ (Fig. 3). Using the relation between r̂ and σ̂ (Eqs. (2.6b), (2.9), (2.12)) one can map a steady state γ̂(r̂) onto a segment of the local constitutive curve γ̂(σ̂) for stresses 0 < σ̂ < f (Poiseuille), or Γ̂/(1+p)² < σ̂ < Γ̂ (Couette). The outer layer solutions lie on the local constitutive curve, while the inner layer solution is a horizontal segment at σ̂ = σ̂_sel, corresponding to a small-width interface.
Planar shear
The equation of the low shear rate branch is V̂ = γ̂₋(σ̂), with 0 < σ̂ < σ_top(ǫ), and the high shear rate branch can be described as V̂ = γ̂₊(σ̂), with σ_bottom(ǫ) < σ̂. The banded flow corresponds to the plateau σ̂ = σ̂_sel. The homogeneous branches of the flow curve are on the local constitutive curve represented in Fig. 1.
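A sketch of the resulting composite flow curve, again using the homogeneous curve implied by Eqs. (3.1) (the value of σ̂_sel and the root brackets are illustrative):

```python
from scipy.optimize import brentq

eps = 0.05
sigma_sel = 0.50   # illustrative selected stress (cf. Fig. 1b)

def sigma_of_gd(gd):
    return eps * gd + gd / (1.0 + gd**2)

# Band shear rates at the selected stress, on the two stable branches
gd_minus = brentq(lambda g: sigma_of_gd(g) - sigma_sel, 1e-9, 1.12)
gd_plus = brentq(lambda g: sigma_of_gd(g) - sigma_sel, 4.2, 100.0)

def planar_flow_curve(V):
    """Return (total stress, high shear rate band fraction nu) for mean rate V."""
    if V <= gd_minus:
        return sigma_of_gd(V), 0.0        # homogeneous low shear rate branch
    if V >= gd_plus:
        return sigma_of_gd(V), 1.0        # homogeneous high shear rate branch
    nu = (V - gd_minus) / (gd_plus - gd_minus)   # lever rule on the plateau
    return sigma_sel, nu

for V in (0.3, 2.0, 8.0):
    print(V, planar_flow_curve(V))
```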
Poiseuille flow
The low shear rate branch can be obtained from Eq. (5.1) by setting r̂* = 1. The existence condition for the banded branch (Eq. (5.7)) is automatically fulfilled, because it becomes equivalent to σ_bottom(ǫ) < σ_sel(ǫ) < σ_top(ǫ), which is true for ǫ < 1/8 (actually for all values of ǫ for which the local constitutive curve is non-monotonic, Fig. 1). Eq. (5.5) is not fulfilled (thus the inverted sequence of bands is forbidden), because it becomes equivalent to f r̂ > σ_bottom, ∀r̂ < r̂*_sel, which is obviously not true for r̂ = 0. The banded branch continues until f = ∞ and r̂* = 0. The low shear rate band is continuously squeezed until it disappears in the limit f = ∞. Thus, the flow curve has no high shear rate branch, because the high shear rate band occupies the entire volume only at f = ∞. The flow curve in this geometry is represented in Fig. 4a for different retardation parameters ǫ.
Cylindrical Couette flow
In Couette flow Eq. (5.5) is always fulfilled and the normal sequence (+−) of bands is allowed for all shear rates along the plateau (see Fig. 3b-d). Nevertheless, the inverted sequence (−+) is not allowed for the values of Γ̂, p, ǫ for which the total stress at the inner cylinder is higher than σ_top, or the total stress at the outer cylinder is lower than σ_bottom (see Fig. 5). Eq. (A.8a) implies that the branch (+−) has a positive slope ∂Γ̂/∂V̂ > 0, while Eq. (A.8b) implies that the branch (−+) has a negative slope ∂Γ̂/∂V̂ < 0. The latter does not imply a mechanical instability, because any mechanical drift is constrained by the imposed mean shear rate. Furthermore, the stability of this sequence of bands is shown by our numerical simulations [19] and by the stability analysis with respect to the position of the interface (Appendix A). As suggested in [16,17], the stability of banded flow is not directly connected to the slope of the flow curve, but rather to the slope of the local constitutive curve of the homogeneous bands; if the latter is negative for at least one band (intermediate branch in Fig. 1a), then the flow is unstable. Fig. 4b corresponds to values of the parameters (p, ǫ) chosen in Region I of Fig. 5, with both sequences of bands allowed. Fig. 4c corresponds to parameters chosen in Region II (the inverted band sequence is forbidden at higher strain rates), while Fig. 4d corresponds to parameters chosen in Region IV (the shear rates allowing the existence of the inverted sequence are limited both from above and from below). Interestingly, the branch corresponding to the inverted band sequence no longer crosses the normal band sequence branch for p = 0.11, ǫ = 0.05 in Fig. 4d. Note that the "plateau" in the banded region is steeper (larger dΓ̂/dV̂) for a more highly curved geometry (larger p). The various dangling segments corresponding to different branches of the flow curves may be the origin of hysteresis phenomena. Some of these phenomena were described in a previous paper [19] using numerical integration of the dynamical equations. We showed there that some segments of the flow curve can be reached only by special experimental scenarios, in agreement with the results of this work. For instance, starting from rest in Fig. 4b, when ǫ = 1/25, the system can first reach the branch (+−) within the interval of gap velocities V̂₁ < V̂ < V̂₂ and the branch (−+) within the interval V̂₂ < V̂ < V̂₃ (Fig. 4b). In order to reach other parts of the banded branches one must, for example, prepare the system with a given band sequence and then adiabatically change the value of the gap velocity in order to scan the entire length of the branch. This is possible for p < 0.1 for both band sequences, but for p = 0.11 in Fig. 4d all start-up preparations of the flow end with the sequence (+−), and it is impossible to reach the isolated branch (−+). Numerically one could start from rest with a smaller value of p, reaching the (−+) branch, and then change p adiabatically, but this is not a conceivable experimental procedure.
Interface Width
Linearising Eqs. (3.3) at r̃ = ±∞ we obtain linear systems of the form

d²/dr̃² (δŜin, δŴin)ᵀ = M± (δŜin, δŴin)ᵀ,

where δŜin, δŴin are small deviations of the stresses with respect to their asymptotic steady values on the high shear rate (+) and low shear rate (−) sides, and M± are the corresponding matrices of the linearised reaction terms. Asymptotically, the interface profile can be approximated by a combination of exponentials e^{χr̃}, and the widths of the interface towards the high shear rate band, w₊, and towards the low shear rate band, w₋, are the characteristic decay lengths of the slowest of these exponentials:

w± = D̂^{1/2} / min_{i=1,2} Re χ±_i,

where i = 1, 2 correspond to the two eigenvalues χ₋_{1,2} of M₋ and to the two eigenvalues χ₊_{1,2} of M₊ governing decay away from the interface (see Fig. 6a). The dependence of the interface width on the retardation parameter ǫ is shown in Fig. 6b. The interface is asymmetric (thicker in the low shear rate band).
The non-vanishing imaginary parts of χ₊_{1,2} imply damped oscillations of the interface profile towards the high shear rate band. The wavelength of these oscillations is the inverse of |Im χ₊_i|, and because |Im χ₊_i| < min(Re χ₊_i) (Fig. 6a) the width of the interface is small compared to this wavelength; the overdamped oscillations are therefore hardly noticeable in Fig. 2a, which shows the interface profile.
As seen in Fig. 6a, a bifurcation occurs at low polymer viscosity. For ǫ > 0.1, the eigenvalues χ₊_{1,2} are no longer complex conjugate (Re χ₊₁ ≠ Re χ₊₂). This bifurcation produces a discontinuity of the first derivative of w₊(ǫ) at ǫ = 0.1. At the same time the imaginary parts of χ₊_{1,2} vanish. This particularity of the JS model was also noticed in [14]. At ǫ = 1/8, which is the limit of existence of banded solutions (above ǫ = 1/8 the local constitutive curve is monotonic), χ₋₁ and χ₊₁ vanish, and therefore both widths w± diverge (Fig. 6b). The interface width scales like √(Dτ), which is typically of the same order as the polymer chain size [22]. The prefactor can be rather large for small polymer viscosity (it diverges at the critical point ǫ = 1/8 like (1/8 − ǫ)^{−1/2}) and it is geometry independent. Thus, the width of a thin, steady interface should be the same in planar shear as in Poiseuille and Couette flows, for any curvature. Ref. [30] determined the width of the interface in experiments on micelles in the cone and plate geometry, and reported an increase in width by a factor of 3 when the cone-plate angle changes from 4° to 7°. Although we did not investigate this particular geometry here, we expect that, rather generally, the width of a thin, steady interface depends only on the constitutive model and not on the geometry. The widths reported in [30] are much larger than the polymer chain size, and are therefore most probably produced by a different mechanism. A possible explanation is the dynamical broadening due to travelling surface waves in two-layer flows [37,38].
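Under the reduced planar system (2.7), these decay lengths can be evaluated explicitly; the sketch below linearises the inner equations about the two homogeneous states at an illustrative selected stress and extracts the spatial exponents, reproducing the qualitative picture above (a complex-conjugate pair, hence damped oscillations with |Im χ| < Re χ, on the high shear rate side only, and a wider tail on the low shear rate side).

```python
import numpy as np
from scipy.optimize import brentq

eps = 0.05
sigma_sel = 0.50   # illustrative value of the selected stress

def sigma_of_gd(gd):
    return eps * gd + gd / (1.0 + gd**2)

def width_and_exponents(gd):
    """Decay length (in units of sqrt(D)) of the interface tail towards the
    homogeneous band with shear rate gd, from the linearised inner equations."""
    S, W = gd / (1.0 + gd**2), 1.0 / (1.0 + gd**2)
    # delta X'' = M delta X, with M = -(Jacobian of the reaction terms of (2.7))
    M = np.array([[1.0 + W / eps, -gd],
                  [gd - S / eps, 1.0]])
    chi = np.sqrt(np.linalg.eigvals(M).astype(complex))   # spatial exponents
    return 1.0 / np.min(chi.real), chi

gd_minus = brentq(lambda g: sigma_of_gd(g) - sigma_sel, 1e-9, 1.12)   # low branch
gd_plus = brentq(lambda g: sigma_of_gd(g) - sigma_sel, 4.2, 100.0)    # high branch

w_minus, chi_minus = width_and_exponents(gd_minus)
w_plus, chi_plus = width_and_exponents(gd_plus)
print("w- =", round(w_minus, 2), "exponents:", chi_minus)   # real, slower decay
print("w+ =", round(w_plus, 2), "exponents:", chi_plus)     # complex pair
```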
Conclusions
We have discussed the steady banded flow of the Johnson-Segalman model, considering 1D shear rate profiles in Poiseuille, cylindrical Couette and planar shear geometries, under controlled shear rate conditions. The introduction of a non-local diffusion term in the JS local constitutive relation lifts the continuous degeneracy and the history dependence of the steady flow solution for Poiseuille and cylindrical Couette geometries, and generally for any geometry that imposes a non-uniform total shear stress. In order to analyse these phenomena we used matched asymptotic techniques which are common in the field of reaction-diffusion systems describing processes like combustion or nerve conduction, but which, to our knowledge, are new in the field of creeping flow of complex fluids. The use of the JS model in this paper should be considered only as an example of application of these methods in order to investigate the consequences of diffusion terms on banded flow.
We showed that in Poiseuille and cylindrical Couette geometries steady banded flow has two bands, which seems to be the case in birefringence experiments [31,32]. NMR visualisation of more than two bands, as reported in Ref. [29] for the Couette geometry, could be due to slowly decaying (or pinned) transients. Note that the speed of a moving interface scales as D^(1/2). Another possibility, not taken into account here but in need of further investigation, is that concentration effects may stabilise banded flows with more than two bands. Thus, a depletion layer at the inner cylinder locally increases the selected stress and may equilibrate a second interface closer to the inner cylinder than the first equilibrium position, which corresponds to a higher concentration and a lower selected stress.
In the absence of diffusion, steady banded flow can form any number of bands. The same arbitrariness of the number of bands was shown in steady Poiseuille flow [16,17]. In the numerical simulations of Ref. [25] no selection was noticed, except for the one imposed by the mesh size. Recently, we showed numerically [19] that even in Couette flow, solving the flow in the absence of diffusion is an ill-posed problem, unstable to noise in the initial conditions. The lack of robustness of the flow increases when the curvature decreases, in the sense that larger and larger regions inside the gap may split into multiple bands under perturbations of the initial conditions. Refs. [12,39] found a unique, single-interface solution even in planar shear of the JS model without a diffusion term, although this interface has a finite width, which suggests diffusion terms intrinsic to the finite element scheme. With diffusion, the system of dynamical equations becomes parabolic, leading to compactness of the global attractors and to stability with respect to initial conditions [24].
In certain conditions in cylindrical Couette flow (low curvature and high polymer viscosity), both normal and inverted band sequences are allowed at the same shear rate. Nevertheless, the inverted sequence is metastable and can be reached from rest only on the second half of the plateau [19]. Although this branch has not been observed experimentally, its existence and linear stability are proven rigorously, and it may have consequences for the kinetics of shear banding within a restricted domain of control parameters. An interesting feature shown by experimental flow curves and reproduced by our theoretical model is the presence of a selected stress plateau, reminiscent of the isotherms of first order phase transitions such as liquid-gas coexistence or the decomposition of binary mixtures. In thermodynamic transitions the plateau is fixed by a Maxwell-type construction, justified by the equality of the chemical potentials of the coexisting phases. In the model we present here the equilibrium of "phases" is a condition for the existence of a stationary interface, and relies on the balance between stress diffusion across the interface and non-linear stress relaxation. The retardation parameter plays the rôle of temperature, and its critical value ε = 1/8 is analogous to an equilibrium critical point. The extremities of the plateau for varying ε describe a curve analogous to a binodal for phase separation, the critical point being the top of the binodal (Fig. 1a). As in first order transitions, the width of the interface separating the two bands diverges at the critical point. Whether the studied phenomena have more than a formal connection to thermodynamics remains an open question.
Apart from settling some basic questions concerning how the presence of diffusion terms affects measurable flow curves, we hope that the results presented here will stimulate more accurate and diversified experiments, eventually searching for various unusual features such as inverted band sequences and the special kinetic pathways involved. This could be a test of the theoretical assumptions but could also increase the operational range in view of possible applications.
The partial derivative occurring in Eq. (A.1) can be obtained from Eq. (A.2). Using Eqs. (A.2), (3.5) and (2.6b) for Poiseuille flow, the stability condition can be expressed in terms of a function P. According to the tested conjecture, an increase of the total stress will displace the interface towards the middle of the slit or pipe (dc/dσ < 0), so using Eq. (A.4) the stability condition (A.1) for plug flow in Poiseuille geometry reads P(f, σ_sel) < 1. Using the fact that γ_-(σ) is an increasing function of σ, it follows from Eq. (3.5) that V(σ_sel, σ_sel) < γ_-(σ_sel) < γ_+(σ_sel), and therefore P(σ_sel, σ_sel) < 1. Furthermore, P(f, σ_sel) is a monotonically decreasing function of f, provided that γ_+(σ) is increasing with σ. Because the lowest allowed value of f for banded flow is f = σ_sel, the stability criterion is always satisfied.
A similar stability analysis can be performed in the case of cylindrical Couette flow. Using Eqs. (A.2), (3.6), (3.7) and (2.9) for Couette flow we obtain analogous stability conditions for the two band sequences, where (+−) refers to the high shear rate band near the inner cylinder and (−+) refers to the high shear rate band near the outer cylinder.
To conclude, banded flow is stable with respect to displacements of the interface in the Poiseuille slit and pipe geometries. In cylindrical Couette flow, both sequences of bands are stable with respect to displacements of the interface, provided that their existence is allowed by the further criteria discussed in Appendix B.
In this region the sequence (+−) is allowed for torques within I_Γ^s, while the sequence (−+) is forbidden for all values of the torque, because I_{−+} = ∅.
These five regions are shown in Fig. 5. Region III is very narrow and practically represents the frontier between Regions I and IV. In general, for given Γ and ε the inverted band sequence is forbidden for sufficiently highly curved geometries (large enough p). A less formal approach to obtaining the same results is shown in Fig. 3. Fig. 3c corresponds to the case of large curvature (Region IV), in which the value of the stress does not allow the existence of the high shear rate band at the outer cylinder, thus forbidding the sequence (−+), because Γ/(1 + p)^2 < σ_bottom. For small curvature (Fig. 3b) the values of the stress at the inner and outer cylinders allow the existence of both high and low shear rate bands, so both sequences of bands can exist. The possibility of an inverted band sequence was also found in Ref. [40] for a two-fluid model of shear banding in polymer blends.
"year": 1999,
"sha1": "f73a239c05b3e056edc580f18c9c24fd6ac58c01",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9911064",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f73a239c05b3e056edc580f18c9c24fd6ac58c01",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
117516700 | pes2o/s2orc | v3-fos-license | Evaluation of RC building strengthened with column jacketing method with consideration of soft-story
The assessment and strengthening of existing reinforced concrete buildings in terms of seismic response have been highly important for the past decades, especially for buildings with a soft-story mechanism, which are found to be more vulnerable. Several strengthening techniques have been proposed, including fibre-reinforced polymer (FRP) and steel jacketing of RC column elements. This paper aims to evaluate the effectiveness of two strengthening techniques under severe earthquake excitations for buildings with varying degrees of soft story. A five-story RC building is investigated, and non-linear dynamic analysis is performed to capture the inelastic responses of the structure. Four sets of ground motion are selected and matched with the target response spectrum for the Aceh earthquake. The seismic responses of the considered models and strengthening techniques are compared.
Introduction
Recently reported earthquake damage in Indonesia [1] has shown a high percentage of structural failures associated with the soft-story mechanism, especially in reinforced concrete buildings. The soft-story mechanism occurs when the stiffness of one story is much less than the stiffness of the adjacent story. The affected structures were designed without adequate seismic vulnerability assessment. Rajeev [2] reported the effect of soft-story parameters on the seismic fragility of structures: structures with a higher soft-story level were found to be more vulnerable in moderate- to high-intensity earthquakes. Therefore, strengthening techniques to overcome the challenge of soft-story buildings have been proposed and developed over the past decades. The most common retrofitting techniques are FRP and steel jacketing, which are efficient in terms of workability and implementation. Evaluation of each strengthening method is critical to identify the most efficient technique for retrofitting a building.
The application of FRP and steel jacketing methods and finite element modelling techniques for reinforced concrete columns has been reported [3,4]. Recently, Seifi et al. [4] proposed a physical model for FRP-confined concrete columns calibrated with experimental data. Moreover, a steel-jacketed concrete column model was proposed by Campione et al. [5]. A recent study conducted by Furtado et al. [6] reported on the efficiency of strengthening techniques in soft-story buildings and showed that column jacketing is effective in reducing seismic responses in such buildings. However, the application of strengthened column models in buildings with varying soft-story levels has not yet been evaluated. The premise of this study is to illustrate the efficiency of retrofitting techniques in reinforced concrete buildings with different soft-story parameters.
Building model
A five-story reinforced concrete building is employed to illustrate the effect of the strengthening techniques under different soft-story conditions. Figure 1 depicts the building geometry and the detailing of the main elements. The building is designed to resist a 20 kN/m gravity load and seismic load based on SNI-1726-2012, the Indonesian earthquake code, with the building located in Aceh. The concrete has a compressive strength of 25 MPa, and the yield strength of the reinforcement steel is 420 MPa.
Impact of soft-story
A soft story exists when the lateral force-resisting stiffness of any story is less than 70% of the stiffness of the adjacent story, which induces a localized concentration of drift. Since the soft story is quantified by the story lateral stiffness, the soft-story parameter used in this paper is the relative height of the first-story columns with respect to the column height of the adjacent floor, as defined in equation (1), where K1 and K2 are the lateral stiffnesses of the first and second stories, while L1 and L2 define the column heights of the first and second stories, respectively. In this study, the parameter SS (soft story) is chosen as 40%, 60%, 80% and 100% to illustrate the sensitivity of the structural responses to this parameter. The story height for each value of SS is summarized in table 1.
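Equation (1) itself did not survive extraction, but its ingredients are stated above. As a rough illustration of the underlying check, the following Python sketch uses the common shear-frame approximation K = 12EI/L^3 for column lateral stiffness and assumes SS = L2/L1; the formula choice and all numerical values are assumptions, not the paper's data:

```python
def column_lateral_stiffness(E, I, L, n_columns=1):
    """Shear-frame approximation: fixed-fixed column, K = 12*E*I / L**3."""
    return n_columns * 12.0 * E * I / L**3

def is_soft_story(K1, K2, threshold=0.70):
    """A story is 'soft' if its lateral stiffness is below 70% of the story above."""
    return K1 < threshold * K2

# Illustrative values only: 25 MPa concrete, 400 x 400 mm columns, 4 columns/story.
E = 4700 * 25 ** 0.5 * 1e6        # ACI-type modulus estimate, in Pa (assumption)
I = 0.4 ** 4 / 12                 # second moment of area, m^4
L2 = 3.0                          # upper-story column height, m (assumption)
K2 = column_lateral_stiffness(E, I, L2, n_columns=4)
for L1 in (3.0, 3.75, 5.0, 7.5):  # gives SS = L2/L1 = 100%, 80%, 60%, 40%
    K1 = column_lateral_stiffness(E, I, L1, n_columns=4)
    print(f"L1 = {L1:.2f} m, SS = {L2 / L1:.0%}, soft story: {is_soft_story(K1, K2)}")
```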
Element model
The building model is constructed in the OpenSees finite element software [7]. The Concrete01 material is used to model the confined and unconfined concrete properties based on the model of Mander et al. [8], and the Steel02 material, based on the Giuffre-Menegotto-Pinto model, is used for the reinforcement bars. The beam and column elements are constructed with flexibility-based fibre-section elements with 5 Gauss-Lobatto integration points. A section aggregator is used to combine the shear, axial and flexural responses; the method of including shear can be found in [9]. Furthermore, the P-Delta effect and the Corotational transformation are included to compute second-order effects for the columns and beams, respectively. Stiffness-proportional damping is used, calibrated to 5% for the first elastic mode. For this study, the base supports are assumed to be fixed.
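A minimal OpenSeesPy sketch of the modelling choices just described (a single column, in 2D for brevity); all material parameters, section dimensions and fibre discretizations below are illustrative assumptions, not the paper's actual inputs:

```python
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 3)

# Concrete01 for confined/unconfined concrete (Mander-based parameters assumed);
# units: N, mm, MPa.
ops.uniaxialMaterial('Concrete01', 1, -28.0, -0.004, -22.0, -0.014)  # confined core
ops.uniaxialMaterial('Concrete01', 2, -25.0, -0.002, 0.0, -0.006)    # unconfined cover
# Steel02: Giuffre-Menegotto-Pinto steel, fy = 420 MPa.
ops.uniaxialMaterial('Steel02', 3, 420.0, 200000.0, 0.01, 18.0, 0.925, 0.15)
# Elastic shear response to be aggregated with the fibre section (stiffness assumed).
ops.uniaxialMaterial('Elastic', 4, 1.0e6)

# Fibre section for an assumed 400 x 400 mm column; the cover patch is laid over
# the full section for brevity (a rigorous model would exclude the core region).
ops.section('Fiber', 10)
ops.patch('rect', 1, 10, 10, -170.0, -170.0, 170.0, 170.0)   # confined core
ops.patch('rect', 2, 12, 12, -200.0, -200.0, 200.0, 200.0)   # cover (simplified)
ops.layer('straight', 3, 4, 314.0, -160.0, -160.0, -160.0, 160.0)  # one bar layer
ops.section('Aggregator', 11, 4, 'Vy', '-section', 10)       # add shear response

# Force-based element with 5 Gauss-Lobatto points and a PDelta transform.
ops.geomTransf('PDelta', 1)
ops.beamIntegration('Lobatto', 1, 11, 5)
ops.node(1, 0.0, 0.0)
ops.node(2, 0.0, 3000.0)
ops.fix(1, 1, 1, 1)                                          # fixed base support
ops.element('forceBeamColumn', 1, 1, 2, 1, 1)
```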
Strengthening technique and modelling
In this study, the soft-story columns are located at the first story, where failure is most likely to occur. Therefore, only the first-story columns are strengthened, in order to observe the effect of the retrofitting methods.
FRP jacketing
The base columns are strengthened with 3 layers of MasterBrace FIB 450/50 carbon fibre sheet, with a fabric thickness of 0.255 mm, a fabric width of 500 mm, a tensile strength of 4900 MPa and a tensile elastic modulus of 230 GPa, combined with a near-surface-mounted (NSM) system of 13 mm diameter reinforcement bars. The ConfinedConcrete01 material available in OpenSees [7] is used to represent the effect of FRP confinement on the RC column and is applied in the fibre-section element; the application of this method can be found in [4]. The experiments conducted in [4] showed that the failure mode of the FRP-jacketed column was governed by buckling of the bars. Therefore, the ReinforcingSteel material is assigned to the reinforcement bars to incorporate the buckling effect based on the Dhakal and Maekawa model, together with rupture behavior.
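The ReinforcingSteel material and its '-DMBuck' (Dhakal-Maekawa buckling) option are standard OpenSees features; a brief OpenSeesPy sketch of such a bar material follows, with all numerical values, including the slenderness ratio, being illustrative assumptions:

```python
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 3)

# ReinforcingSteel(fy, fu, Es, Esh, esh, eult) with Dhakal-Maekawa buckling:
# lsr = unsupported length / bar diameter; the trailing value is the
# buckling-mode amplification factor (assumed 1.0 here).
fy, fu = 420.0, 550.0          # MPa (ultimate strength assumed)
Es, Esh = 200000.0, 7000.0     # MPa (hardening modulus assumed)
esh, eult = 0.01, 0.09         # hardening/ultimate strains (assumed)
lsr = 100.0 / 13.0             # assumed 100 mm restraint spacing, 13 mm NSM bar
ops.uniaxialMaterial('ReinforcingSteel', 5, fy, fu, Es, Esh, esh, eult,
                     '-DMBuck', lsr, 1.0)
```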
Steel jacketing
The steel jacketing technique used in this study is based on the work of Rosario Montuori and Vincenzo Piluso [10]. The column is strengthened with angles and battens to increase its stiffness and confinement. Angles of size 150×150×12 mm are used, with battens of 15 mm width and 3 mm thickness and 250 MPa yield strength, applied 0.5 m from the column ends. Figure 2 shows the strengthened column used in this study. The physical modelling approach follows that proposed by Campione et al. [5]. The angle properties were modified from that work by defining the interface and contact between the steel angles and the reinforced concrete column. The maximum load that can be resisted by the steel angles is a function of the interface stress along the direction of the contact surface, as shown in equation (2), where na is the number of steel angles, nla is the width of each side of the angles, l0 is the total length of the angles, c0 is the cohesive strength, μ is the friction coefficient [5] and fle,max is the maximum confinement pressure determined based on [9]. The yield strength of the angles is then defined by the force-over-total-area relationship given in equation (3).
The steel-jacketed concrete column is modelled using a fibre-section element, with the angles represented by the Steel02 material with the properties described previously.
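Equations (2) and (3) were lost in extraction. The sketch below therefore only illustrates the stated force-over-total-area step of equation (3), taking the maximum interface force of equation (2) as an input; the Coulomb-type cohesion-plus-friction estimate used to produce that input is our assumption, not the paper's exact expression:

```python
def interface_force_estimate(n_a, leg_width, l0, c0, mu, fle_max):
    """Assumed Coulomb-type stand-in for Eq. (2): cohesion plus friction,
    integrated over the contact area of the two legs of each angle."""
    contact_area = n_a * 2.0 * leg_width * l0
    return contact_area * (c0 + mu * fle_max)

def equivalent_yield_strength(F_max, n_a, angle_area):
    """Eq. (3) as described in the text: force over the total angle area."""
    return F_max / (n_a * angle_area)

# Illustrative values (N, mm, MPa); 150 x 150 x 12 angles as in the paper.
angle_area = (150.0 + 150.0 - 12.0) * 12.0   # approximate L-section area, mm^2
F_max = interface_force_estimate(n_a=4, leg_width=150.0, l0=2500.0,
                                 c0=0.3, mu=0.5, fle_max=1.0)
print(f"equivalent yield strength = "
      f"{equivalent_yield_strength(F_max, 4, angle_area):.1f} MPa")
```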
Earthquake ground motions
Four ground motion records are selected and matched with the target response spectrum of the Banda Aceh earthquake for site class D. The records include the Kobe, Loma Prieta, Tabas and Managua records. Figure 3 depicts the acceleration spectra of the four matched records together with the target spectral acceleration.
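For context, the sketch below shows how an elastic acceleration response spectrum like the one in figure 3 can be computed from a ground-motion record, using Newmark's average-acceleration method for a unit-mass SDOF oscillator; the synthetic record and the period grid are placeholders, since the actual records are not reproduced here:

```python
import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration spectrum of ground acceleration ag (m/s^2, step dt),
    via Newmark average acceleration (gamma = 1/2, beta = 1/4), unit mass."""
    beta, gamma = 0.25, 0.5
    Sa = np.empty(len(periods))
    for j, T in enumerate(periods):
        wn = 2.0 * np.pi / T
        k, c = wn ** 2, 2.0 * zeta * wn
        kh = k + gamma * c / (beta * dt) + 1.0 / (beta * dt ** 2)
        a1 = 1.0 / (beta * dt ** 2) + gamma * c / (beta * dt)
        a2 = 1.0 / (beta * dt) + (gamma / beta - 1.0) * c
        a3 = (0.5 / beta - 1.0) + dt * c * (gamma / (2.0 * beta) - 1.0)
        u = v = 0.0
        acc = -ag[0] - c * v - k * u          # initial relative acceleration
        umax = 0.0
        for p in -ag[1:]:                     # effective load per unit mass
            u_new = (p + a1 * u + a2 * v + a3 * acc) / kh
            v_new = (gamma / (beta * dt)) * (u_new - u) \
                    + (1.0 - gamma / beta) * v \
                    + dt * (1.0 - gamma / (2.0 * beta)) * acc
            acc = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) \
                  - (0.5 / beta - 1.0) * acc
            u, v = u_new, v_new
            umax = max(umax, abs(u))
        Sa[j] = wn ** 2 * umax                # pseudo-spectral acceleration
    return Sa

# Placeholder record: a decaying 2 Hz sinusoid standing in for a real accelerogram.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
ag = 0.3 * 9.81 * np.sin(2.0 * np.pi * 2.0 * t) * np.exp(-0.1 * t)
Sa = response_spectrum(ag, dt, periods=np.linspace(0.05, 4.0, 80))
```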
Results and discussion
The inter-story drifts of the building models with the different strengthening methods and soft-story values are compared. Inter-story drift represents structural damage as described in FEMA 356 [11]: 1.0% for Immediate Occupancy, 2.0% for Life Safety and 4.0% for Collapse Prevention. The maximum story drifts from the four sets of ground motions considered are compared. For buildings with soft-story parameters of 1.0 and 0.8, both FRP and steel jacketing provide better responses than the unstrengthened building for the Kobe and Loma Prieta records. It is noticed that, in this case, the strengthening method with FRP and the near-surface-mounted system is ineffective in increasing the structural performance, owing to buckling of the NSM bars. For the Kobe and Loma Prieta earthquakes, the results show that the steel jacketing technique is effective in reducing the seismic response for SS1 and SS2. However, the steel jacketing method is found to be inefficient when the soft-story parameter increases to SS3. Furthermore, to observe the effect of the strengthening methods on the lateral capacity of the structure, pushover analysis is performed with an inverted-triangle lateral force distribution. Figures 8 to 10 depict the capacity curves for each base-column model and soft-story parameter.
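A small helper illustrating the FEMA 356 drift limits quoted above; the threshold values come from the text, while the inclusive boundary handling is our simplification:

```python
def fema356_performance_level(max_drift_pct):
    """Classify a maximum inter-story drift (in %) against the FEMA 356
    limits quoted in the text: 1.0% IO, 2.0% LS, 4.0% CP."""
    if max_drift_pct <= 1.0:
        return "Immediate Occupancy"
    if max_drift_pct <= 2.0:
        return "Life Safety"
    if max_drift_pct <= 4.0:
        return "Collapse Prevention"
    return "Exceeds Collapse Prevention"

for drift in (0.8, 1.7, 3.2, 4.5):
    print(drift, fema356_performance_level(drift))
```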
Conclusion
A five-story reinforced concrete building with varying first-story column heights representing the soft story is evaluated under two jacketing retrofitting techniques: an FRP jacket with a near-surface-mounted system and a steel jacket with angles and battens. Pushover and nonlinear dynamic analyses are performed to assess the nonlinear responses of the structure. Based on the analysis results, the following observations are made:
1. The soft story affects the lateral capacity of the reinforced concrete building. Increasing the height of the first story decreases the peak base shear in the pushover curve for all building models.
2. Base columns strengthened with FRP and the near-surface-mounted system are found to be inefficient in strengthening the structure. More NSM bars and more layers of CFRP may be required for the strengthening method to be effective. Furthermore, providing the FRP jacket beyond the base columns may also be considered.
3. The steel jacketing technique shows efficiency in increasing the lateral stiffness of the structure and decreasing the nonlinear response under earthquake excitation.
4. The effectiveness of the strengthened columns varies under different ground motions.
Further studies should incorporate more of the parameters involved in comparing strengthening methods for soft-story buildings, including different locations of strengthened elements, the concrete jacketing technique, structural vulnerability and different numerical models for the strengthened elements.
"year": 2018,
"sha1": "af9d9784c8b10c5625b9613088000a076a81d77e",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/383/1/012030/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "9a39f110720609f4ddceb3159ec87764dccf2424",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Geology",
"Physics"
]
} |
265503746 | pes2o/s2orc | v3-fos-license | User-Centered Design of a Gamified Mental Health App for Adolescents in Sub-Saharan Africa: Multicycle Usability Testing Study
Background There is an urgent need for scalable psychological treatments to address adolescent depression in low-resource settings. Digital mental health interventions have many potential advantages, but few have been specifically designed for or rigorously evaluated with adolescents in sub-Saharan Africa. Objective This study had 2 main objectives. The first was to describe the user-centered development of a smartphone app that delivers behavioral activation (BA) to treat depression among adolescents in rural South Africa and Uganda. The second was to summarize the findings from multicycle usability testing. Methods An iterative user-centered agile design approach was used to co-design the app to ensure that it was engaging, culturally relevant, and usable for the target populations. An array of qualitative methods, including focus group discussions, in-depth individual interviews, participatory workshops, usability testing, and extensive expert consultation, was used to iteratively refine the app throughout each phase of development. Results A total of 160 adolescents from rural South Africa and Uganda were involved in the development process. The app was built to be consistent with the principles of BA and supported by brief weekly phone calls from peer mentors who would help users overcome barriers to engagement. Drawing on the findings of the formative work, we applied a narrative game format to develop the Kuamsha app. This approach taught the principles of BA using storytelling techniques and game design elements. The stories were developed collaboratively with adolescents from the study sites and included decision points that allowed users to shape the narrative, character personalization, in-app points, and notifications. Each story consists of 6 modules (“episodes”) played in sequential order, and each covers different BA skills. Between modules, users were encouraged to work on weekly activities and report on their progress and mood as they completed these activities. The results of the multicycle usability testing showed that the Kuamsha app was acceptable in terms of usability and engagement. Conclusions The Kuamsha app uniquely delivered BA for adolescent depression via an interactive narrative game format tailored to the South African and Ugandan contexts. Further studies are currently underway to examine the intervention’s feasibility, acceptability, and efficacy in reducing depressive symptoms.
Background
The incidence of depression peaks during adolescence (ages of 10-24 years), with the cumulative probability of depression rising from 5% in early adolescence to as high as 20% by the end of that period [1,2].Left untreated, depression interferes with schooling and affects young people's social relationships [2].Depression in adolescence has been associated with a significant reduction in future income, greater risk of substance use, and risky sexual behaviors and is a major risk factor for suicide [3][4][5].
Depression is underdiagnosed and undertreated worldwide but particularly in low-and middle-income countries (LMICs) [6], where low levels of average public expenditure on mental health (<US $2 per capita) result in a significant shortage of mental health professionals and large treatment gaps [7].For example, <5% of individuals needing treatment in LMICs receive minimally adequate treatment for depression [8].In addition, high levels of mental health stigma prevent adolescents from seeking care [9].These barriers, coupled with the devastating consequences of poverty, cause adolescents in LMICs to be disproportionately affected by the burden of depression.
With the increase in smartphone ownership and access to the internet, digital interventions such as smartphone apps are a promising strategy to reduce these large treatment gaps in LMICs [10,11].This is especially relevant for adolescents given that they make up the vast majority of smartphone users [12].Digital mental health interventions offer several advantages.They have the potential to be delivered at scale with low marginal costs for each additional user [13,14].They may help overcome stigma as individuals can access them discreetly using their devices [15], and they are flexible and convenient as users can choose when, how, and where to access them.In LMICs, where individuals might have to travel long distances to access health care, the portability of digital technologies can save traveling time and reduce expenses.They may also help reduce the pressure on already overstretched health care systems [16].Digital interventions open the possibility for more tailored psychological treatments without having to wait until the next appointment [17].It also allows for continuous monitoring and prediction of suicide risk in real time [18].Mental health apps may also empower individuals by making them feel more in control of their health and may help raise societal awareness of mental health issues [19,20].
Despite the many potential advantages of digital mental health interventions, evaluations of their effectiveness in adolescents have yielded mixed results [21,22].Many studies are characterized by low adherence and high attrition rates [23,24].For example, it is estimated that only 3.3% of users continue to engage with mental health apps after 30 days [24].Although some commercial smartphone apps attract more users, many have not been rigorously evaluated and show little fidelity to evidence-based treatments [25,26].Furthermore, most of the evidence has been gathered in high-income countries and, therefore, generalizability to an African context, where conditions and resources differ vastly, is questionable [10,21,27].
Given these limitations, this study used multiple user-centered and participatory action research methods to design and develop an app to address depression among adolescents in rural South Africa and Uganda.This paper documents the development process of the Kuamsha app (meaning activate in Swahili) and summarizes the results of multicycle usability testing.
Treatment Model and Human-Supported App
We developed the app to be consistent with the principles of behavioral activation (BA), a highly transferable and evidence-based treatment for adolescent depression, and to include human support (via phone calls), which has been associated with greater adherence and effectiveness of digital therapies [21,28]. We did not have any other preconceived ideas about the design or format of the app.

BA is an evidence-based psychological therapy derived from principles of response-contingent positive reinforcement, given the evidence on low levels of such reinforcement in relation to depression. BA focuses on two core principles: (1) increasing activities that are meaningful and positively reinforcing for the individual (activation) and (2) addressing processes that inhibit activation (eg, negatively reinforced avoidance behaviors) [29].

BA was chosen as it is relatively simple to deliver, easily understood by patients, and less costly than cognitive behavioral therapy (one of the most researched forms of psychotherapy) [30,31]. BA has also been found to have effect sizes similar to those of antidepressant medications [32]. Furthermore, it has been effectively adapted for use with adolescents, in low-income and diverse cultural settings, and in digital formats [27,33-36].

Although BA was originally conceptualized as a treatment for depression, the underlying principles apply to broader populations, as they include basic problem-solving skills for everyday challenges, awareness of how one's behavior influences mood, reduction of avoidance behaviors, and redirecting attention away from ruminative thinking [29]. Indeed, BA has recently been proposed as a transdiagnostic approach, and there is evidence demonstrating its effectiveness in improving anxiety symptoms, activation [37], and overall well-being among the general population [37-39].

Although some smartphone apps are already available that deliver BA therapy for depression (eg, Mobilyze!, MoodMission, Boost Me, Moodivate, and Behavioral Apptivation), none of these apps have been developed for adolescents in a sub-Saharan African context [40-44]. As such, these apps contain elements that are not feasible for use in low-resource settings: they are data intensive, are only available on Apple's iOS platform for use on iPhones (which are expensive), or are intended to be used in conjunction with a licensed clinician.

To improve adherence, the Kuamsha app was built to be supported by brief weekly phone calls from peer mentors. Although only a handful of studies have explored the involvement of lay workers in digital mental health interventions [35], task shifting using nonspecialist workers has been increasingly used in sub-Saharan Africa to deliver physical and mental health treatment services [45,46]. We designed the app to be supported by trained lay workers who would serve as peer mentors and whose main role was limited to helping users understand the app's content and overcome barriers to engagement. The peer mentor component was designed through an iterative process involving a multidisciplinary group of mental health professionals, public health specialists, and researchers from Uganda, South Africa, the United Kingdom, and the United States. We drew from established models of peer-to-peer coaching [34,47,48] and iteratively adapted the program to ensure that it was culturally relevant. The details of the training and supervision of these peer mentors will be described in a separate study.
Ethical Considerations
The study was reviewed and approved by institutional review boards in South Africa (the University of the Witwatersrand) and Uganda. Written informed consent was obtained from either the parent or legal guardian of adolescents aged 15 to 17 years or directly from adolescents aged ≥18 years. In addition, written assent was obtained from participants aged <18 years. To protect the participants' confidentiality, adolescents were identified only by a participant ID number. The interviews were audio recorded and transcribed verbatim, removing any potentially identifiable information. Electronic audio recordings were destroyed once transcribed. Identifying information (name, address, and contact information) was stored separately from all other data in a secure and locked space within the South Africa and Uganda offices. Access to this information was strictly limited to specific named project staff members.

Participants were not paid for taking part in the study, although they were provided with light refreshments (juices and snacks) following the interviews. Participants in the prepilot were provided with a smartphone, which they could keep at the end of the study. No other incentives or benefits were offered.

The app was developed as part of the "DoBAt" and "Ebikolwa n'empisa" research programs, which aim to develop scalable psychological treatments to address depression among adolescents in low-resource settings. The trial in South Africa was registered with the South African National Clinical Trials Register (DOH-27-112020-5741) and the Pan African Clinical Trials Registry (PACTR202206574814636).
Study Setting
The study was conducted in 2 different countries and settings, each representing a different cultural and social context (Figure 1). In South Africa, the study was embedded in the Medical Research Council and Wits Rural Public Health and Health Transitions Research Unit (Agincourt) health and sociodemographic surveillance system study area in the rural Bushbuckridge subdistrict of Mpumalanga Province, South Africa. The study area is located along the western border of Mozambique and comprises 31 adjacent villages with a population of 117,000 in 21,000 households [49]. As in other parts of the country, there is a high prevalence of depression among adolescents (18.2%) and a paucity of mental health and social care services, particularly psychological therapists [50-52]. In Uganda, the study was carried out in the Wakiso District, a periurban area located approximately 30 km from Kampala. The Wakiso District is home to nearly 2 million people. The site combines periurban communities close to the Entebbe-Kampala road with relatively isolated fishing villages extending to the shores of Lake Victoria. The study area is poor, and the most common occupation is subsistence farming. Among adolescents, 50% attend secondary school, and 15% of youth (aged 18-30 years) are neither working nor in school. The nearest specialist psychiatric resources are in Kampala, a 2-hour journey by public transportation [53].

In both settings, mobile phone ownership is on the rise. A 2019 census conducted at the study site in South Africa showed that >93% of households owned a cell phone and 82% had a smartphone. Data from the 2014 National Population and Housing Census in Uganda suggest that the proportion of adolescents who own mobile phones in the Wakiso District is >80% and increasing steadily [53].
Study Participants
The study's target population was adolescents aged between 15 and 19 years from South Africa and Uganda. We focused on midadolescence specifically, as it is a transitional period when individuals start to develop their self-identity and make vital decisions within the context of less parental control and heightened peer influence [1]. It is also a time when the prevalence of depression increases significantly, accounting for more disability-adjusted life years than any other mental health condition [54]. Furthermore, adolescents within this age range are more likely to be familiar with smartphones than their younger counterparts, enhancing the practical feasibility of our intervention.

To be eligible to participate, respondents had to be residents of the study sites; be fluent in Xitsonga (South Africa), Luganda (Uganda), or English (either country); be willing and able to provide informed consent or assent; and have a caregiver willing to provide consent (if aged <18 years). We aimed to achieve a balance of sex and age representation in our sample.

Other recruited stakeholders included caregivers of adolescents, schoolteachers, community stakeholders, and mental health care providers. The objective of these discussions was to elicit various views and perspectives from multiple stakeholders, triangulate these perspectives with the information gathered from adolescents, and ensure that the intervention was culturally acceptable within communities. Different recruitment strategies were followed across study settings. In South Africa, the local partner (the Medical Research Council and Wits-Agincourt Research Unit) consulted the Community Advisory Board and key educational stakeholders to obtain their advice and approval for the project. Adolescents in South Africa were recruited via their schools. In Uganda, the local partner (BRAC Uganda) first obtained verbal approval from the local council chairperson and the community elders before beginning recruitment. Adolescents in Uganda were recruited via the BRAC network of clubs for adolescents (Empowerment and Livelihood for Adolescents [ELA] clubs). Field supervisors visited the schools (South Africa) and ELA clubs (Uganda) and provided an overview of the study and written information sheets for participants and parents or guardians. Adolescents willing to participate in the study were asked to return their signed assent or consent forms to a field-worker, who collected them at school or the ELA clubs on another occasion. Owing to the challenges of the COVID-19 pandemic and limited resources, convenience sampling was deemed the most appropriate sampling method.
Overview
An iterative user-centered agile design approach was used to co-design the app with the study's target population [55-57]. An array of qualitative research methods was used, including focus group discussions, in-depth interviews, participatory workshops, usability testing sessions, questionnaires, and expert consultation. Discussions were framed using various elicitation techniques, including semistructured topic guides, role-playing exercises, and card-sorting games.

User-centered development was carried out in 4 phases: conceptualization, prototyping, product release, and evaluation. An iterative feedback loop approach was used in which each phase's findings were used as inputs for the following phases and to refine previous phases. Table 1 summarizes the development stages.

The objectives and types of elicitation techniques used in each phase are outlined in the following sections. Samples of interview guides and elicitation techniques are provided in Multimedia Appendix 1.
Conceptualization Phase
The objective of this phase was to understand adolescents' needs and context, assess the feasibility of delivering BA using a digital platform, and decide on the app's main features. This phase included 5 focus groups with adolescents, 4 in-depth interviews with adolescents who provided helpful feedback in the focus groups, and 2 focus groups with caregivers and schoolteachers. The purpose of these discussions was to (1) discuss adolescents' values, goals, struggles, social relationships, and the type of meaningful activities they engage in; (2) learn about access to and use of smartphones and desirable content and features in apps; (3) examine perceptions and local language terms for depression and understand the local support systems; and (4) understand caregivers' and teachers' views on the potential limitations of using digital tools in this population.
Prototyping Phase
The objective of this phase was to use the results from phase 1 to develop a simple, scaled-down version of the app and ask adolescents to provide feedback. This phase included 8 usability testing sessions, 3 focus groups, and 6 participatory workshops. In the usability testing sessions, adolescents were given smartphones and asked to interact with apps that targeted different app components (eg, mood monitoring). Facilitators observed their behavior and asked questions as they progressed. During the focus groups, paper-based wireframes and basic prototypes of the Kuamsha app were presented, and adolescents were asked to provide feedback on the app features, including story characters, artwork, background music, and game elements. The participatory workshops were all-day events in which adolescents and other stakeholders interacted with and provided additional feedback on the app's content.
Product Release Phase
On the basis of the outcomes of phases 1 and 2, the objective of this phase was to build a minimum viable product (MVP) and test it with the study's target population. This phase included 24 usability testing sessions in which adolescents interacted with the Kuamsha app and were asked to provide feedback on usability and design and recommendations for improvements. Some of these sessions were unguided, in which adolescents were asked to freely explore the app and verbalize their experiences and general impressions (think-aloud methodology) [58]. This method was helpful in investigating whether the user interface was easy to use without much explanation. Facilitators were trained to take notes on how the app was explored, including features that adolescents did not interact with. Other sessions were guided, in which the facilitator walked participants through each screen while probing for understanding. This phase also included a prepilot study with 17 adolescents with depression in South Africa. The entire 11-week intervention was tested, including the peer mentor component.
Evaluation Phase
This phase included 2 studies that explored the intervention's feasibility, acceptability, and efficacy in reducing depressive symptoms. One was a feasibility study in Uganda with 31 adolescents from the general population (the Ebikolwa n'empisa study). The other was a randomized controlled pilot trial in South Africa with 196 adolescents with depression (the DoBAt study) [59]. The results from both of these studies are expected in late 2023.
Sampling Procedures Across Development Phases
The sample sizes for the conceptualization, product release, and evaluation phases were predetermined and aligned with our a priori research design. In contrast, we maintained a more flexible approach when it came to the prototyping phase. This variability was driven by the dynamic and iterative nature of the app development process. Specifically, the sample size of the usability testing sessions was adjusted with each app refinement or whenever concerns about specific app features arose. We attempted to reach saturation throughout each design stage before advancing to the next development phase.
Analysis
Descriptive statistics summarized the participants' characteristics. Interviews and focus group discussions were audio recorded and transcribed verbatim, removing any potentially identifiable information to protect participants' privacy. Transcribed interviews were summarized and grouped into main themes using the framework method [60].

Data from the workshops and usability testing sessions were analyzed using an instant data analysis approach extensively used in the development of technologies [61]. Under this approach, the interviewers were trained to record all problems that arose during the sessions. Following these sessions, the interviewers and a group of study investigators discussed and ranked these problems. This technique reduces the time required for transcribing and analyzing interviews and allows for the fast identification of the most critical and severe usability problems.

The evidence from all development phases was triangulated to develop a game design document that outlined the platform's main features and "guiding principles." This document was used as a road map for the development of the app.
Results
In this section, we summarize the main findings of each developmental phase and describe how the user-centered design approach informed specific app components.
Adolescents who took part in the focus group discussions were aged 17.1 (SD 1.42) years on average, and the sample was evenly split between male and female participants. All adolescents were enrolled in school and had an average of 9.7 (SD 1.27) years of schooling. On average, about one-third (11/37, 30%) had part-time jobs, including selling products in the market, cutting firewood, herding cattle, gardening, and working as a hairdresser. Most adolescents (33/37, 89%) had access to a smartphone. Study participants were very supportive of the idea of having an app that would support their mental health:
I like the idea of getting advice through an app because I would get the help I need, unlike if I were to get advice from a person. [P1; female; focus group 4]
I like the idea because there are times when you don't feel like talking to anyone, or you don't have someone to talk to, so the app would be a good idea. [P5; male; usability testing session 4]

Adolescents' preferences regarding a mental health app revolved around 4 main themes: app features, cultural validity, confidentiality, and technological aspects. An excerpt of the results is presented in Table 2.
Caregivers and schoolteachers supported the idea of an app-based intervention to address depression among adolescents, which they acknowledged as an unaddressed problem. They also expressed their concerns about adolescents' tendency to overuse and misuse their phones (eg, accessing inappropriate content). These findings helped frame how the intervention was conducted but did not directly feed into the app's design. For example, the insights from caregivers and schoolteachers influenced our decision to perform in-school recruitment in South Africa rather than through household visits. Furthermore, they emphasized the importance of addressing adolescents' phone use habits, resulting in the development of user guides and measures to limit data use. The results of these focus groups will be published with the results of the main trial.
Theme 1: App Features
Participants mentioned that the app should include stories of individuals who had been through difficult situations and narrate how they improved their circumstances. In addition, they suggested that the app should include points and difficulty levels to keep them motivated and engaged. In general, mobile games seemed to be very popular and were already used by many as a strategy to avoid rumination and exercise attentional control.
Combining both findings, the Kuamsha app was developed using a narrative game format (app component 1). Under this approach, the app teaches the principles of BA through an engaging story. The stories were developed collaboratively with a storywriter (HC), using the findings from the conceptualization phase as the basis for the stories and drawing inspiration from other commercially available narrative smartphone apps (eg, "Episode").
We developed 2 stories with the adolescents: the Song Contest and the Football Match. Each story details a challenge: to win a schoolwide song contest or to play in a football match.
Participants become a part of the story as one of the characters (whom they are able to personalize). Each story consists of 6 modules ("episodes") played in sequential order, and each module is designed to illustrate different BA skills (eg, self-care, problem-solving, and activity scheduling). To improve learning, participants can interactively choose actions that their story characters can undertake. These actions lead to different outcomes so that participants can understand the consequences of their actions. Participants have the opportunity to "correct" outcomes by reconsidering their actions. To keep the complexity manageable, story branches quickly merge with the main storyline through a branch-and-bottleneck structure (Figure 2). This approach allows the player to construct a relatively distinctive and personalized story while ensuring consistency across users [62]. A summary of each module is shown in Multimedia Appendix 2. Learning is also facilitated by a story narrator that appears as a bird. As participants choose actions, the bird might "talk through" what the action would lead to or might sum up learning points as they occur in the story. To ensure that players reflect on the story and the concepts and choices made, each module ends with a summary of the lessons learned. This section includes a series of multiple-choice questions, which are saved as part of the game data for later analysis and potential review by peer mentors ahead of their weekly calls.
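To make the branch-and-bottleneck idea concrete, here is a schematic Python sketch of such a story graph (the app itself scripts its stories with the Ink plug-in described later; all node names and text below are invented placeholders):

```python
# Each node holds narrative text and the choices leading onward. Branches
# created by a choice quickly divert back into a shared "bottleneck" node,
# so every player converges on the same main storyline.
story = {
    "start":      {"text": "Rehearsal day. Your friend skips practice.",
                   "choices": {"Talk to them": "talk", "Ignore it": "ignore"}},
    "talk":       {"text": "They open up about feeling low.",
                   "choices": {"Continue": "bottleneck"}},
    "ignore":     {"text": "The tension grows; the narrator bird sums up.",
                   "choices": {"Reconsider and talk": "bottleneck"}},
    "bottleneck": {"text": "The band practises together for the contest.",
                   "choices": {}},   # merge point: branches rejoin here
}

def play(story, node="start", pick=lambda c: next(iter(c))):
    """Walk the graph; `pick` stands in for the player's tapped choice."""
    path = []
    while True:
        path.append(node)
        choices = story[node]["choices"]
        if not choices:
            return path
        node = choices[pick(choices)]

print(play(story))   # e.g. ['start', 'talk', 'bottleneck']
```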
Each module was designed to take 15 to 20 minutes to complete, and adolescents were expected to complete at least 6 episodes from 1 story within the 11-week intervention period. We allowed users to progress freely within the app, permitting them to complete both stories if they desired to do so during the intervention period.
A key challenge in digital interventions is ensuring that the skills learned within the controlled environment of the app are effectively transferred to real-life situations. To enable this, the Kuamsha app integrates real-life exercises (app component 2) to ensure that users put the BA skills learned into practice. As part of this component, participants are asked to think of a realistic and achievable goal to work on over the 11-week intervention period and to schedule a series of weekly homework activities that align with each module.
Participants are asked to report how often they completed these homework activities and to monitor their mood (app component 3) as they engage in these activities. Users receive feedback on how their mood has progressed over time to highlight the relationship between what they do and how they feel. They are also reminded to report their progress on their weekly activities via notifications (app component 4). Notifications are sent up to 4 times a week, but users are allowed to disable this function, as some adolescents suggested that notifications were not always desirable.
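As a rough illustration of the mood-feedback logic (app component 3), the snippet below summarises a week-by-week mood trend from logged slider values; the 0-to-1 slider scale and the feedback wording are assumptions, since the text omits these details:

```python
from statistics import mean

def mood_feedback(mood_log):
    """mood_log: list of (week, value) pairs, value in [0, 1] from the
    unhappy-to-happy emoji slider. Returns a short feedback message
    comparing the latest week with the weeks before it."""
    by_week = {}
    for week, value in mood_log:
        by_week.setdefault(week, []).append(value)
    weeks = sorted(by_week)
    if len(weeks) < 2:
        return "Keep logging your mood to see how it changes."
    latest = mean(by_week[weeks[-1]])
    earlier = mean(v for w in weeks[:-1] for v in by_week[w])
    if latest > earlier:
        return "Your mood has been improving as you complete your activities."
    if latest < earlier:
        return "Your mood has dipped; try scheduling an activity you enjoy."
    return "Your mood has been steady over the past weeks."

print(mood_feedback([(1, 0.4), (1, 0.5), (2, 0.6), (2, 0.7)]))
```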
Different game design elements (app component 5) were introduced, as suggested by the adolescents. First, the Kuamsha app includes character personalization. Second, it includes in-app points that users can gain at different stages of the app journey (eg, every time they complete a module, report on their homework activities, and monitor their mood). Third, the Kuamsha app includes 2 different absorbing activities or "mini-games" designed to create immersive experiences that captivate players' attention and foster a state of absorption [63].
One is a music absorbing activity, which consists of a rhythm game in which users tap the screen in time with the music. The other is a football absorbing activity, in which users practice taking shots on goal while avoiding distractors. Similar to other games that the adolescents mentioned playing in the qualitative work, we provided the option to adjust the difficulty level of these mini-games as they progressed through the story.
Theme 2: Cultural Validity
Questions regarding cultural appropriateness emerged several times, along with the importance of incorporating elements specific to the study population. Adolescents wanted the story characters to look similar to them and the app to target common problems in their community. For example, many adolescents mentioned risky behaviors, such as alcohol, smoking dagga (cannabis), and teenage pregnancy, as important deterrents to achieving their goals. Although the app was not intended to target risky behaviors specifically, the stories were designed to cover topics such as drinking alcohol and relationships to make them more relatable to the target population. Special care was also taken to ensure that the stories were inclusive with regard to gender, age, appearance, and socioeconomic status.
Theme 3: Confidentiality
Another key theme that emerged was confidentiality. Adolescents saw the app as an opportunity to discuss their problems discreetly without feeling judged.
As many household members often share a single smartphone, numerous participants across both settings mentioned using security patterns and log-in codes to prevent unauthorized access to their personal content. This theme highlighted the need to password protect (app component 6) the Kuamsha app to create a safe space.
Theme 4: Technological Aspects
Within the technological aspects, adolescents discussed the lack of data and limited storage space as the 2 main barriers when using their smartphones. This theme prioritized a low-storage app and the need to explore features that would allow the user to access the app without internet connectivity.
Drawing from these findings, the Kuamsha app was designed to be accessible in both web-based and offline modes (app component 7). In this way, the app only requires an internet connection when it is accessed for the first time, during which it automatically downloads the modules to the device's internal storage. After that, users can complete all modules at any other time regardless of the internet signal. Furthermore, the graphic elements were reduced to use little storage space and increase the app's stability and performance.
Prototyping Phase
Using the results from phase 1, a simple, scaled-down version of the app was developed (beta model). In total, 82 adolescents were involved in phase 2 and asked to provide feedback. A total of 3 qualitative research methods were used in this phase: usability testing sessions, focus group discussions, and participatory workshops.
During the usability testing sessions, adolescents were given smartphones and asked to interact with different apps (eg, Mood Tracker, Avatar Maker, and Magic Tiles). Each of these apps targeted a different app component (eg, mood monitoring, character personalization, and games that increased in difficulty).
In addition, we designed different paper-based wireframes of other components, such as activity scheduling and allowing players to choose how the story unfolds. These sessions had two objectives: (1) to obtain feedback on the different app components and (2) to examine adolescents' familiarity with smartphones. An excerpt of the results is presented in Textbox 1.
Textbox 1. Prototyping phase: excerpt of adolescents' feedback on the app's main components.
Personalization
• "I enjoyed personalizing my character.I decided to make my character look that way because some of the features look like mine.The character looks almost like me even though I have also added the beard to show that one day when I grow up I would like to have a beard."[Male; user testing session (UT) 4] • "I liked it for the fact that I got to personalise my own character and I have added all the features that are similar to mine.I wanted the character to look like me." [Female; UT 5]
Mood monitoring
• "I like the idea of rating myself and if I see that I need help, I can try new ways like going to the social workers."[Male; UT 1] • "I prefer to express my feelings through an emoji."[Male; UT 3] • "The emojis are very clear and they show clearly what they mean."[Female; UT 5] Choosing how the story unfolds • "It is a good game to play.It shows you that before taking a decision between going to the tavern or preparing for the exam, you need to first think of the benefits that you will get out of each decision."[P1; female; UT 7] • "It would be fun for me because I would be getting to decide how the story goes until the end."[P3; female; UT 8]
Absorbing games
• "I like this game a lot.It takes my concentration as by the time we met I was not happy but through it, now I'm starting to feel fine."[Male; UT 1] • "It is easy if you concentrate but hard to win.The more it becomes hard, the more you enjoy the game."[Female; UT 2] • "It was not difficult to play this game because you gave me instructions before I began playing, but I wouldn't have found it easy to play it on my own."[Male; UT 4] Overall, adolescents were familiar with smartphones and enjoyed most of the app components, particularly those involving gamification.The subtheme of cultural validity re-emerged when participants were asked to personalize their characters.Adolescents preferred games that were challenging but, at the same time, easy to understand from the beginning.This feedback led to the development of an onboarding process (app component 8), whereby the user would be introduced to the app and guided through its main components the first time they opened it.This onboarding process explains how to interact with the interface, choose their preferred language, and select one of the stories.
When asked to monitor their mood, adolescents preferred to use emojis to describe their feelings. An initial list of 15 emojis was created. Adolescents were shown these emojis without descriptions and asked to rate which emotions were represented, in order to select the emojis that best captured the emotions portrayed in the mood monitoring (ie, unhappy and happy). Numbers on the slider were intentionally omitted to keep it simple, as suggested by the adolescents. To ensure the integration of the mood monitoring component into the game, we implemented a system in which users receive in-app points every time they complete mood monitoring, along with brief feedback on how their mood is progressing.
Focus group discussions were used to obtain feedback on the story characters, artwork, and background music. Adolescents were asked to list popular names in the community, and some of these names were used for the characters in the stories. In addition, the discussions served as a platform to address concerns about the limited access to mental health treatment in both study settings, which was a recurring theme raised by the participants. In response to these insights, we identified the need to design and integrate an emergency button (app component 9) to provide users with a direct avenue to seek immediate assistance during critical moments.
The participatory workshops reviewed and adapted the storylines and homework activities to ensure that they were appealing to adolescents at both study sites. As part of these workshops, adolescents were asked to read through the scripts and prepare a play in which they would role-play the stories. Following these discussions, modifications were made (in terms of language and content) to make the stories more locally and culturally relevant.
Product Release Phase
The results from phases 1 and 2 provided input for developing an MVP of the Kuamsha app. The study team partnered with Sea Monster, one of Africa's leading gaming companies, to develop the app.
The MVP was developed for low-cost Android Go devices, with testing taking place on the Samsung Galaxy A2, the device used in the study. It was built on Unity (Unity Technologies), a cross-platform engine popular for mobile game development.
The app has 2 critical dependencies. One is Ink, a plug-in that allows players to interact with the stories via multiple-choice answers or text fields. The other is Firebase, which captures player data in a web-based database and tracks users' engagement with the app. The MVP was tested via usability testing sessions and a prepilot before being rigorously evaluated.
Usability testing sessions were conducted with 12 adolescents from Uganda and 12 from South Africa. Most adolescents (20/22, 91%) found the app a fun way to learn new skills and relax the mind. A field-worker observation in South Africa stated the following:

She looked very interested and engaged because she was smiling and giggling as she was reading the story.
Some differences were noted between female and male participants, with male participants preferring the "Football Match" story and female participants preferring the "Song Contest" story.
These sessions helped highlight some parts of the app that were not intuitive and needed refinement. For example, adolescents in Uganda struggled with literacy skills at a greater level than anticipated. According to census data from the Wakiso District, 85% of adolescents in the study site area are literate [53]. However, although many adolescents could decode the app content, some had difficulties with reading comprehension. The text's complexity, assessed using a web-based tool, corresponded to a primary 7 reading level. Owing to time and resource limitations, it was decided that only adolescents with an acceptable reading comprehension level would be enrolled in the feasibility study in Uganda; eligible participants were required to score ≥83% on a reading comprehension assessment drawn from the Young Lives study [64].
Similarly, when asked to choose a goal, most adolescents set goals related to their lives, but these were rarely specific, measurable, or time based (eg, "I want to make wise decisions" or "I want to work hard"). This highlighted the critical role of peer mentors in making sure that goals were meaningful and achievable within the 11-week intervention period. On the basis of these findings, we adjusted the app so that the goals and homework activities could be modified anytime the user wanted.
A prepilot study was conducted in the South African study site with 17 adolescents who screened positive for mild to moderately severe depression based on scoring between 5 and 19 on the 9-item Patient Health Questionnaire modified for adolescents (PHQ-A). The adolescents were asked to use the Kuamsha app for 11 weeks. They were allocated to a trained peer mentor from whom they received 6 weekly phone calls. App engagement metrics were collected throughout this time.
Table 3 shows the sociodemographic characteristics of study participants at baseline. The Simple Poverty Scorecard total score ranges from 0 (most likely below a poverty line) to 100 (least likely below a poverty line). The participants' average score of 43.5 corresponds to a poverty rate of 57.3%, computed using the South African upper national poverty line, which is set at 32.57 South African rands per person per day (£2 in 2017 prices [US $2.58]) [65].
The average PHQ-A score was 7.2 (range 5-15). In total, 12% (2/17) of the participants had moderate levels of depressive symptoms (score between 10 and 19), and the rest (15/17, 88%) were mildly depressed (score between 5 and 9). A total of 41% (7/17) of the participants endorsed the ninth item on suicidal ideation. These participants were referred to the trial psychologist to assess whether they needed any additional intervention. After the assessment, the trial psychologist verified that none of these adolescents had identified means or plans to attempt suicide, and as such, they were included in the prepilot.
Table 4 shows the app engagement metrics for 94% (16/17) of the participants (there was a problem linking the device with the game data for 1/17, 6% of the participants). On average, the Kuamsha app was launched 38.4 times per participant, and users spent approximately 4 hours on it over the intervention period. Most participants (13/16, 81%) opened both stories, and most (12/16, 75%) completed at least 6 modules (the recommended number of modules to be completed during the intervention).
The average number of modules completed was 8.8, and several adolescents (7/16, 44%) completed both stories (12 modules in total). Although, on average, this reflects strong engagement with the app, it is worth highlighting that 25% (4/16) of the participants did not complete the recommended modules. Among this group, 6% (1/16) of the participants completed only 2 modules and 19% (3/16) completed 4 modules. Importantly, there was no statistically significant difference in sociodemographic characteristics between participants with lower and higher engagement. We also found no differences in depressive symptoms during screening or at endline. To further explore the factors influencing varying levels of engagement, we plan to conduct in-depth interviews during the feasibility studies in Uganda and South Africa and stratify participants based on high versus low app engagement. These qualitative data will help guide improvements in future iterations of the app.
Adolescents' goals during the intervention period included improving communication with friends or family, making new friends, studying more often, and exercising more. On average, participants set up 8.3 weekly activities and recorded completing them 47.2 times. In total, 25% (4/16) of the participants used the emergency button. Of these 4 participants, 3 (75%) made a single call using the button, whereas 1 (25%) used it on 3 separate occasions. The peer mentors had access to these data ahead of every weekly call, and they were trained to follow up on such instances. Example screenshots of the app and its main components are shown in Multimedia Appendix 3.
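Engagement metrics of this kind can, in principle, be aggregated from per-event logs such as those Firebase exports. The sketch below is illustrative only: the log schema, event names, and values are assumptions, not the study's actual data pipeline.

```python
from collections import defaultdict

# Hypothetical event rows: (participant_id, event, value); schema assumed.
events = [
    ("p01", "app_launch", 1), ("p01", "session_minutes", 12),
    ("p01", "module_completed", 1), ("p01", "app_launch", 1),
    ("p02", "app_launch", 1), ("p02", "session_minutes", 30),
    ("p02", "module_completed", 1), ("p02", "module_completed", 1),
]

# Sum each event type per participant.
totals = defaultdict(lambda: defaultdict(int))
for pid, event, value in events:
    totals[pid][event] += value

for pid, t in sorted(totals.items()):
    print(pid, "launches:", t["app_launch"],
          "minutes:", t["session_minutes"],
          "modules:", t["module_completed"])

n = len(totals)
print("mean launches per participant:",
      sum(t["app_launch"] for t in totals.values()) / n)
```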
Principal Findings
This study described the user-centered development of a smartphone app to reduce depression among adolescents in a sub-Saharan African context and summarized the findings of multicycle usability testing.
The smartphone app, called the Kuamsha app, is a gamified app that uses storytelling techniques and game design elements to deliver BA therapy in an engaging way. Central to the study was an iterative co-design process with adolescents from rural South Africa and Uganda. Extensive formative research was conducted with 160 adolescents, who guided the development and provided feedback at each developmental phase.
The user-centered development was carried out in 4 phases, each combining different research methods and elicitation techniques to ensure that a depth and breadth of user perspectives was incorporated into the design. In addition, a broad array of local stakeholders across both study settings was consulted during the development process to ensure that the intervention was culturally acceptable within communities. This iterative methodology enhanced the app's acceptability, cultural relevance, usability, and validity with the targeted population.
Adolescents' preferences regarding a mental health app revolved around 4 main themes: app features, cultural validity, confidentiality, and technological aspects. These themes informed the specific components of the app, which included (1) a narrative game format that leverages the power of storytelling and immersive narratives to create a dynamic and engaging experience for users, (2) integration of real-life exercises and homework activities within the app to ensure that users practiced the BA skills learned in the app, (3) mood tracking to help adolescents recognize the link between mood and behavior, (4) push notifications to remind users to report their progress on their homework activities, (5) game design elements (character personalization, an in-app point system, and absorbing "mini-games"), (6) password-protected access for security, (7) an offline mode allowing users to play on the app without an internet signal, (8) an onboarding process to guide users through the app, and (9) an emergency button to refer adolescents at risk of suicide.
The results of multicycle usability testing sessions showed very high engagement metrics. Some of our engagement metrics were significantly higher than those of other apps reported in the literature [24,66,67]. These preliminary results suggest that the Kuamsha app is acceptable in terms of usability and engagement. In total, 2 studies are currently underway to rigorously test the intervention's feasibility, acceptability, and efficacy in reducing depressive symptoms.
Overall, this study contributes to the broader literature on digital mental health interventions, particularly regarding the importance of involving potential users and key stakeholders as active collaborators in intervention design [57,68]. It also highlights the potential of leveraging storytelling and games as an effective strategy to maintain user engagement and enhance learning [69]. Finally, and in line with previous findings, our results show the importance of developing an app that is relatable and caters to the unique needs, challenges, and preferences of the target user [68]. Given the limited number of apps specifically developed for (and with) adolescents in sub-Saharan Africa, this study fills a critical gap and serves as a pioneering effort to create a culturally relevant and acceptable intervention for addressing depression among this population.
Limitations
This study has several limitations. First, a convenience sampling strategy was used to recruit adolescents from both study sites, which may have resulted in some selection bias. However, the relatively large number of adolescents consulted may have helped minimize this. Second, most of the formative work was conducted with adolescents from the general population and not specifically with adolescents identified as having depression. The exception to this is the prepilot study, which included adolescents who screened positive for mild to moderately severe depression on the PHQ-A. Although feelings of hopelessness and self-harm did arise during the discussions (particularly during one-to-one sessions), more formative work with adolescents with depression might have helped ensure that the app was optimized to address their needs and concerns. That said, the app was thoughtfully designed with several features aimed at supporting user engagement: first, it was designed to be supported with weekly phone calls from peer mentors, serving to sustain user engagement and address issues related to low motivation; second, it includes reminders to help users stay on track and overcome the challenges related to memory and concentration frequently associated with depression; and third, recognizing the nonlinear nature of depression recovery, it allows users to navigate setbacks and relapses without feeling discouraged. The results of the DoBAt Study will provide more information about the feasibility and acceptability of the Kuamsha app among adolescents with depression [59]. Fourth, although this study was conducted in 2 diverse sub-Saharan African settings, contextual differences should be taken into account when considering the transferability of the study findings to other sub-Saharan African populations. Fifth, the formative work revealed literacy difficulties among adolescents in Uganda, and therefore a reading comprehension test was included as part of the screening criteria in the feasibility study in Uganda, which might have excluded the most vulnerable adolescents. Further app development work should explore the use of audio voice-overs or alternative features to increase the accessibility of the app to low-literacy populations.
Conclusions
Although there has been a significant increase in digital mental health interventions in recent years, many of these platforms suffer from low uptake, slow rollout, and unsustainable business models. Furthermore, few interventions have been rigorously designed and evaluated for adolescents in sub-Saharan Africa or are available in local languages. Given the limited access to effective interventions in LMICs, innovative, scalable treatment delivery options are urgently needed.
Benefiting from the rapidly growing penetration of mobile devices, the Kuamsha app could help bridge the large treatment gap in South Africa, Uganda, and potentially other similar settings. If the feasibility studies produce promising results, they will inform the development of a further, larger randomized controlled trial to support better management of depression among adolescents in low-resource settings while simultaneously strengthening primary and community-based health care systems.
Ethics approvals were obtained in South Africa (the [Wits] Human Research Ethics Committee [M181027], Ehlanzeni District, and Mpumalanga Provincial Departments of Health and Education [MP_201903_003]), Uganda (Makerere University School of Public Health [HDREC 750] and the Uganda National Council for Science and Technology [UNCST HS724ES]), and the United Kingdom (Oxford Tropical Research Ethics Committee, OxTREC 39-18).
Figure 1. Study locations in South Africa and Uganda.
Table 2. Conceptualization phase: excerpt of adolescents' preferences regarding app components.
• "(The app) should show the true image of us. But now as I look to this character, I see that we are not looking the same. They used the colour for white people." [Female; IDI b 4]
• "... like to get advice on things like sexual activities. The app would advise us on what will happen when we engage in sexual activities at an early age." [P4; female; FG a 5]
• "The app should also have pictures of the people smoking and what happens to them when they are smoking. It can also have pictures of the people who were addicted to alcohol and how they managed to stop drinking alcohol." [P3; male; FG 2]
Confidentiality (creating a safe space):
• "(The app) will not judge you. It will be you and the app. Also confidentiality will be there. It won't tell other people about your problems. That is nice idea." [P5; female; FG 1]
• "Getting advice through an app would be a good idea because most of the time when you tell a person or a friend your problems, they don't keep them to themselves; they tell people. If I share my problems on the app, I will get the advice that I need without worrying about what they will say about you to other people." [P3; female; FG 5]
• "There will be a problem if [other people] can get to access what I am doing on the app." [P4; male; FG 2]
Technological aspects (phone data):
• "I always run out of airtime to buy data and have to wait until month-end." [Male; IDI 1]
Technological aspects (storage):
• "I'm always scared that [the phone] will run out of space." [Female; IDI 3]
a FG: focus group. b IDI: in-depth interview. | 2023-12-01T06:17:54.516Z | 2023-11-30T00:00:00.000 | {
"year": 2023,
"sha1": "7f388fce5cd191a74863dc9386acfb21ce55bd88",
"oa_license": "CCBY",
"oa_url": "https://formative.jmir.org/2023/1/e51423/PDF",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "013df707df359d5ccfa076cd5feb23ee23431a91",
"s2fieldsofstudy": [
"Psychology",
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54640032 | pes2o/s2orc | v3-fos-license | Experimental research on the formation of mineral-rich areas in residue gold placer deposits
Underestimation of the reserves of the raw material base of gold mining (namely, residue dump complexes) leads to negative social and economic consequences in regions engaged in placer gold mining. Nonetheless, their efficient development is possible only through new rational technologies for the preparation and deep processing of rock mass for the subsequent extraction of valuable minerals. To create such a technology, large-scale research was conducted on the nature and degree of influence that the basic factors of the technology have on the migration and concentration of gold particles in a residue rock mass.
Introduction
Exploitation of placer deposits of non-ferrous, noble, and rare metals is of significant importance to the Russian Far East, and, despite discouraging forecasts regarding the exhaustibility of natural resources, their development will continue for many years. Placer deposits exhibit all the basic characteristics of a geologic system, reflecting the unity and interconnection of components such as mineral resources, natural processes, techniques and technologies, social and economic structures, system organization, and production activity [1]. According to specialists' estimates [2-6], residue placers possess great potential. Some gold-mining areas have accumulated significant amounts of auriferous dump complexes: billions of cubic meters of gravel dumps, peats, etc.
The need for a more careful and constructive attitude toward the problem of increasing gold extraction from placer deposits is also connected with the ecological state of placer gold-mining areas. The waste products from the primary geogenic deposits have a negative influence on the natural environment, considering that their rock mass amounts to over 2 billion cubic meters in the Russian Far East alone.
Residue placer deposits are significant reserves of the mineral resource base of noble metals, but their development is complicated because the valuable components are scattered chaotically throughout the massif, and their extraction requires full processing of all residue formations, which often proves unprofitable due to the significant material costs. Therefore, the problem of finding an efficient method of developing placer deposits is acute and of great financial and social importance.
Research on the residue formations of placer deposits has led to the concept of in-dump enrichment of a placer deposit, in which the particles of noble metals migrate into and concentrate in the lower layers of the mountain massif, driven chiefly by natural factors. These factors include hydrodynamic processes (including filtration and suffusion), cryogenic processes, and microseismic processes. Significant similarity in genetic and technologic nature was found with the analogous natural processes of placer deposit formation, particularly the geological differentiation of mineral matter. The term "differentiation" is used to characterize many geological processes in which primary mineral resources form byproducts that are genetically similar to them but have a different structure or traits.
Unlike the processes that form mineral-rich areas of residue placer deposits, the geological differentiation of mineral matter is far more time-consuming, although its characteristics must be known to estimate all the factors that aid the migration of valuable components of alluvial rock mass into the bedrock-adjacent area of dump complexes. The differentiation of the mineral rock mass of residue dump complexes of placer deposits reflects the selective formation of a productive layer, i.e., the natural division and displacement, within the rock mass, of separate monofractions of the valuable component and the enclosing rocks relative to each other, depending on their morphological, granulometric, hydraulic, and physical characteristics, driven by natural and anthropogenic processes [8,9].
One characteristic of residue placer deposits is the stochastic distribution of gold and other valuable components in the massif, which, during exploration, necessitates the development and overall enrichment of the entire rock mass. When the amount of valuable components is insufficient, attempts to process residue placer deposits therefore prove futile. For this reason, the formation of residue placer deposits of a new kind, with the productive layer of sands selectively positioned in the lower strata, is a major scientific and practical problem. As a rule, a bedrock-adjacent mineral-rich layer is characterized by a homogeneous structure and a relatively small amount of productive rock mass, which allows for more economic and technologic development of residue placer deposits.
The technology itself is as follows [10,11]: a drainage ditch and an accumulating ditch are created in the upper and lower parts of a block of residue sands, which then produces a filtration flow in the block; this flow allows gold particles to move vertically. If the flow is continuous, a mineral-rich layer of sands forms in the bedrock-adjacent part of the block. This allows sands with lower gold content to be separated and reduces the volume of sands subject to washing.
To study the processes underlying the technology, laboratory research was conducted in 2008-2009, focused on the influence of cryogenic processes on the efficiency of gold particle migration [10]. The results were as follows: the more freeze-thaw cycles occur, the deeper gold is distributed through the depth of the rock mass; the 0.25 mm gold fraction migrated across the whole depth of the rock mass, with 7.1% of this fraction reaching the bottom; and cryogenic processes as an independent factor have little influence on the migration of gold particles.
Further studies focused on the combined influence of these processes on gold migration. In the first experiment, gold migration was driven by a free flow of water; in the second, by a combination of processes: free water flow and freeze-thaw cycles. The experiments were held simultaneously on a specially designed unit consisting of 6 tanks of 25×12×10 cm, each connected to a pipeline with regulation of the water flow rate, which amounted to 10 l/h. In each of the first five tanks, gold of a single size fraction was placed in the upper layer; in the sixth tank, designed for the freeze-thaw experiment, the upper layer received gold of all size fractions. In each tank, migration was driven by the filtration flow for 25 days. One tank was frozen twice at -18 °C and, after thawing at room temperature, the water flow was resumed. The results of these experiments are shown in Fig. 1.
Under the complex influence of external factors on the rock mass, gold distribution through the depth proceeded more effectively: gold migration into the lower layers increased by 20-64.3%. The 0.1 mm fraction reached the bottom of the vessel, and its amount there was 19% greater under the complex influence. Content analysis along the length of the rock mass shows that the movement of gold particles in the horizontal plane is caused only by the energy of the water flow.
In 2010, research on gold migration took place under natural conditions. Holes were filled with gold fractions from the placer deposit of the Bolotistyi stream, which served as a permanent source of water flow. In winter the rock mass froze, and after it thawed it was excavated layer by layer. In total, the experiment lasted 300 days.
Fig. 2 illustrates the change in gold content with depth relative to the original: in the upper layer it decreased by 62%, while in the lower layer it increased by 68.9%. The increase in content was caused mainly by migration of the 0.25 mm fraction. Horizontal movement of the particles was insignificant, except at the water supply location.
In 2012, industrial tests began on a test section allotted by Ros-DV, LLC (Khabarovsk Krai). In the middle part of the Bolotistyi stream deposit, developed in 2006-2007, in close proximity to the silt-detention basin that served as the source of process water, there was a block of residue sands, 815 m in size and 1 m in volume. The material used was tailings from processing that contained small and thin gold. The next stage implied freezing the block; its thawing then generated a filtration flow during the washing process.
Before the next freezing, samples were taken; their results are shown in Fig. 3a. The average gold content in the four upper samples decreased by 21.6% compared to that in hole 1 and by 22.9% in hole 2; in hole 3 the migration is barely visible because the area was unprocessed.
Assuming a cut-off gold content of 125 mg/t, a section can be drawn in which area 1 is the part of the block where the gold content decreased (overburden rocks) and area 2 is a productive layer whose volume is equal to half that of the sand block. The formation of area 3, a layer of sands with high gold content (over 125 mg/t), is explained by the position of this area above the depression curve, which prevented it from being subject to the influence of the water flow; migration there was caused by filtration of rainwater from the surface and by the "freeze-thaw" process in the rock mass.
In October 2014 (after the experiment had lasted 2 years), several samples were taken from the test section, and the results of their beneficiation are illustrated in Fig. 3b. The average gold content in the five upper samples decreased by 21.2% compared to that in hole 1, by 8.7% in hole 2, and by 24.4% in hole 3. Thus, over the 2 washing seasons, it was possible to form a mineral-rich layer (over 125 mg/t) whose volume is 44% of the overall volume of sands in the experimental block.
The obtained data may be used to expand theoretical knowledge of the process of gold particle migration and to develop methods for calculating the basic parameters of a technology capable of controlling the factors that influence gold migration. The main source of increased effectiveness of the developed technology is a cycle of forming a free flow of process water, which allows a significant decrease in its flow rate.
In 2015, a laboratory study was conducted to examine the influence of a cyclic flood-and-drain process on the migration of gold particles in the rock mass of a residue placer deposit.
2.5-liter tanks were filled with rock mass, with water supplied to create a filtration flow. The rock mass received equal amounts of gold fractions: -1+0.5 mm (150 mg), -0.5+0.2 mm (50 mg), and -0.2 mm (28.8 mg).
Water was supplied to tank 1 constantly and to tank 2 cyclically; the flood-and-drain process consisted of 5 days of constant water supply followed by 2 days of draining. The gold was placed evenly throughout the whole volume of the rock mass, and the process water flow rate in each tank was 12.5 l/h.
After 25 weeks (25 flood-and-drain cycles), the water supply was stopped, and samples were taken layer by layer at 2 cm intervals. The analysis of the laboratory research demonstrates the following: with constant water supply, the gold content in the 2 upper layers decreased by more than 50% compared to the original; the increase in gold content in the lower layers was 24.5%, 38.0%, and 30.4%, respectively; and the nature of the change in gold content was the same across fractions: a small decrease in the upper layer, a significant decrease in the second, and an increase in the layers below.
With cyclic water supply, the maximal change in gold content was 4.6% compared to the original, and migration was observed only for particles of the -0.5+0.2 mm fraction.
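The layer-by-layer comparisons above reduce to relative changes of gold content against the original, evenly distributed value. Below is a minimal Python sketch of that arithmetic, using made-up layer contents, since the paper reports only the derived percentages.

```python
# Made-up layer contents, mg/t; only the percentage arithmetic is the point.
original = 100.0
measured = [45.0, 48.0, 124.5, 138.0, 130.4]  # after the experiment, top to bottom

for depth, content in enumerate(measured, start=1):
    change = (content - original) / original * 100.0
    print(f"layer {depth}: {content:6.1f} mg/t ({change:+.1f}% vs original)")
```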
In 2016, the experiment was continued in laboratory conditions. Four 20 cm-high tanks were filled with rock mass containing an evenly spread gold fraction of -1.0+0.1 mm. Every tank was fed with water, and the flow was stopped at certain intervals. The first tank was fed continuously; the second was fed for 2 days with 1-day breaks; the third was fed on alternate days. In the fourth tank, the filtration flow ran for about 2.5 minutes until the tank was completely full, was then stopped, and was resumed before the rock mass had fully drained (after approximately 1.5-2.0 min), i.e., incomplete draining of the rock mass took place.
After 2 months, the gold was extracted from samples taken layer by layer; its content is shown in Fig. 4.
Gold migration proved most effective with the incomplete-draining cycle. At tank depths of 8-20 cm, the gold content increased by 23.2% compared to the original, and the process water flow rate was reduced by 40 and 50%, respectively.
In 2017, the laboratory research on the formation of productive areas of residue placer deposits by cyclic influence of the filtration flow was continued. Table 1 shows the parameters of the flood-and-drain process in three blocks of residue rock mass; the experiment lasted 125 days.
The results of the beneficiation of the samples are shown in the diagram in Fig. 5.
Analysis of the obtained data shows that the weakest gold migration took place in block 1; the change in content was caused mainly by the -1.0+0.5 mm fraction, while migration of gold particles finer than 0.5 mm was insignificant.
The change in gold content in tank 3 was also caused chiefly by the -1.0+0.5 mm fraction, although movement of gold particles of the -0.5+0.2 mm fraction from the second and third layers (14 cm deep) was observed, along with a 73% increase in the gold content of the -0.2+0.1 mm fraction at the bottom of the tank. The effect of gold particle migration is most noticeable in tank 2: a decrease in the gold content of the -0.5+0.2 and -0.2+0.1 mm fractions is observed, and in the 7 upper layers, the
Conclusions
It was established that the migration of gold particles in residue sands becomes more intensive with more cycles of "freeze-thaw" actions. Cryogenic processes have a noticeable influence on the migration of small gold (-0.25 mm). It was experimentally proven that the main factor influencing particle movement is the filtration flow. Under the complex influence on the rock mass, the area of gold concentration is displaced downwards, increasing by 20-64% compared to the influence of the water flow alone, and it also moves along the length of the dump, with the prevalent influence coming from the volume of the free water flow.
On the basis of the natural experimental research, it was established that after the rock mass was exposed to water flow and winter freezing, the gold content in the upper layer decreased by 62% compared to the original, while in the lower layer it increased by 68.9%. As a result of industrial testing on the test section over two washing seasons, it was possible to form a mineral-rich layer amounting to 44% of the overall volume of sands in the block.
Gold migration under a cyclic flood-and-drain process of the rock mass is more effective: in the laboratory research, the gold content in the bottom layer increased by 43%, and the water flow rate was reduced by 50%.
Fig. 1. Dependence of gold content in the areas of concentration on the depth of the rock mass: 1 - under the influence of free water flow; 2 - under 2 additional freeze-thaw actions.
Fig. 2. Change in gold content based on the depth: 1 - the primary gold content in a rock mass; 2 - gold content after the experiment.
Table 1. Flood-and-drain cycle parameters | 2018-12-12T23:04:39.719Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "f509e108540f19ebe4693544aa9ecc94623d447f",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2018/31/e3sconf_pcdg2018_03023.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f509e108540f19ebe4693544aa9ecc94623d447f",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
199662011 | pes2o/s2orc | v3-fos-license | Macronutrient Analysis of Target-Pooled Donor Breast Milk and Corresponding Growth in Very Low Birth Weight Infants
The macronutrient composition of target-pooled donor breast milk (DBM) (milk combined strategically to provide 20 kcal/oz) and growth patterns of preterm infants receiving it have not been characterized. Caloric target-pooled DBM samples were analyzed by near-infrared spectroscopy. Weekly growth velocities and anthropometric z-scores were calculated for the first 30 days and at 36 weeks corrected gestational age (CGA) for 69 very low birthweight (VLBW) infants receiving a minimum of one week of DBM. Samples contained mean 18.70 kcal/oz, 0.91 g/dL protein, 3.11 g/dL fat, 7.71 g/dL carbohydrate (n = 96), less than labeled values by 2.43 kcal/oz and 0.11 g/dL protein (p < 0.001). By week 3, growth reached 16.58 g/kg/day, 0.95 cm/week (length), and 1.01 cm/week (head circumference). Infants receiving <50% vs. >50% DBM had similar growth, but infants receiving >50% DBM were more likely to receive fortification >24 kcal/oz (83% vs. 51.9% in the <50% DBM group; p = 0.005). From birth to 36 weeks CGA (n = 60), there was a negative z-score change across all parameters, with the greatest in length (−1.01). Thus, target-pooling does not meet recommended protein intake for VLBW infants. Infants fed target-pooled DBM still demonstrate a disproportionate negative change in length z-score over time.
Introduction
Neonatal practitioners commonly assume that human breast milk contains 20 kcal/oz, but the macronutrient content of human milk depends on many factors including gestational age [1]. Compared to preterm milk, term milk has less energy and protein, and in both populations, energy and protein content decreases over time as lactation progresses [2]. Because donor breast milk (DBM) commonly comes from mothers of term babies later in lactation, one major concern regarding the use of DBM in preterm infants is its nutritional adequacy, particularly its protein concentration, since higher protein intake and increased linear growth are associated with improved neurodevelopmental outcomes [3][4][5]. Pooled DBM has been shown to contain as low as 14.6 kcal/oz [6], and protein content in maternal preterm milk ranges from 1.2 to 1.7 g/dL in the first four weeks of lactation whereas the content in DBM is generally accepted as 0.9 g/dL [7]. For infants weighing less than 1 kg, the European Society for Pediatric Gastroenterology Hepatology and Nutrition (ESPGHAN) recommends an enteral intake of 4.0-4.5 g/kg/day of protein [8], which correlates to 2.7-3.0 g/dL assuming enteral fluid intake of 150 ml/kg/day. However, protein fortification for human milk is difficult, and standard multicomponent human milk fortifiers may still be insufficient as manufacturers of commercially available bovine-derived human milk fortifiers assume a baseline protein concentration of 1.4-1.6 g/dL in human milk [9,10].
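The 2.7-3.0 g/dL figure is straightforward unit arithmetic; a minimal Python sketch, assuming the stated 150 mL/kg/day enteral volume:

```python
def required_concentration(protein_g_per_kg_day, intake_ml_per_kg_day=150.0):
    """Milk protein concentration (g/dL) needed to deliver a given protein
    intake (g/kg/day) at a given feeding volume (mL/kg/day)."""
    return protein_g_per_kg_day / (intake_ml_per_kg_day / 100.0)

for target in (4.0, 4.5):  # ESPGHAN range for infants weighing < 1 kg
    print(target, "g/kg/day ->", round(required_concentration(target), 2), "g/dL")
# 4.0 g/kg/day -> 2.67 g/dL; 4.5 g/kg/day -> 3.0 g/dL
```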
Earlier studies comparing fortified DBM versus premature infant formula reported notably decreased growth velocities in the DBM group, particularly in weight and length [11,12]. However, more recent studies have shown similar growth can be achieved with adequate monitoring and fortification [13][14][15].
Target-pooling is a method that some milk banks employ to increase nutritional content by combining milk of multiple donors strategically, rather than randomly. One specific technique is to add skimmed fat from lower-calorie breast milk to higher-calorie breast milk to mimic hind milk and achieve a minimum of 20 kcal/oz. However, it is unclear how protein concentrations of DBM and subsequent infant growth are affected by caloric targeting. The objective of this study was to characterize the macronutrient composition and variability of caloric target-pooled DBM and the corresponding growth velocities of very low birth weight (VLBW) infants who receive it.
Patient Sample
This prospective observational study was performed at the neonatal intensive care unit (NICU) at TriHealth Good Samaritan Hospital in Cincinnati, Ohio, with milk analysis conducted at Cincinnati Children's Hospital Medical Center (CCHMC). The study was approved by the Institutional Review Board at both institutions with waiver of informed consent (CCHMC 2015-5191, TriHealth 15-085).
VLBW infants admitted to the NICU from December 2015 to April 2017 who received more than 1 week of DBM during the first 30 days of life as supplementation to maternal milk were eligible. Infants who transferred to another hospital or passed away in the first 30 days of life or did not follow the standardized feeding protocol (see below, Section 2.3) were excluded.
Milk Collection and Analysis
During the time period in which eligible infants were admitted, target-pooled DBM purchased from Mothers' Milk Bank of Ohio (MMBO, Columbus, Ohio) were screened for unique pools, and a representative bottle for each unique pool was marked. Caloric and protein content, as measured by MMBO and labeled on each bottle, was recorded. Per unit protocol, NICU milk technicians prepared feedings from refrigerator-thawed bottles by hand homogenizing and either pipetting or pouring into measured containers. From each marked bottle, a minimum of 1 ml of the remaining milk, more if allowed, was saved, and kept frozen for sample collection. For analysis, samples were heated to 37 °C, gently homogenized by hand, then homogenized for 30 seconds using a sonicator. Using a near-infrared (NIR) human milk analyzer (SpectraStar 2400, Unity Scientific, Brookfield, Connecticut), which was calibrated using a bias set of human breast milk obtained from MMBO, samples were then analyzed in 1-1.5 mL aliquots, triplicate if volume allowed.
Standardized VLBW Feeding Protocol
DBM is utilized for infants with birth weight ≤1500 g when maternal milk is not available for the first 30 days of life. Enteral feedings are initiated within 48 h of birth at 15 mL/kg/day for 3 days and subsequently advanced by 10 mL/kg/day every 12 h to a goal of 160 mL/kg/day. Fortification to 24 kcal/oz occurs at 75 mL/kg/day, usually day of life 7, using Similac (Abbott Nutrition, Columbus, Ohio) human milk fortifier hydrolyzed protein concentrated liquid (HMF-HPCL). Additional fortification occurs as clinically indicated for poor growth using additional HMF-HPCL, Similac Special Care 30, Similac NeoSure, and/or Similac Liquid Protein Fortifier, per dietitian's discretion. In addition to enteral intake, parenteral nutrition provides 2.5 g/kg/day of protein starting on day of life 1, then 3.5 g/kg/day onward until intake is limited by fluid volume.
Enteral Intake Data
Enteral intake data were obtained from charted enteral feeding volumes and fortification status of donor and maternal milk for the first 30 days of life or until DBM was transitioned to formula. Percentage of DBM intake was calculated by dividing the volume of DBM by the total volume of human milk that the infant received during the studied time period. The last day on which DBM was given, whether the infant was still receiving DBM on day 30, and the highest caloric density of fortification of DBM were recorded. Utilizing the NICU's established milk tracking system (Women and Infants, Timeless Medical Systems, Charlottetown, Prince Edward Island, Canada), the source pool from each bottle of DBM that the infant received was identified.
Anthropometric Data
Weekly weight, length, and head circumference (HC), as recorded by clinical care, were collected until 4 weeks of age and also at 36 weeks corrected gestational age (CGA). Growth velocities, Olsen body mass index (BMI) [16], and Fenton z-scores [17] were calculated for each time point. Weight velocity was calculated using the two-point model. Per unit practice, length boards were used as needed to verify measurements that appeared abnormal. Outliers in length and HC (a gain of greater than 3 cm or a loss of greater than 2 cm) were excluded. For patients discharged prior to 36 weeks CGA, measurements at 35 weeks CGA were recorded if available. Small for gestational age (SGA) was defined as a birth weight below the 10th percentile, and appropriate for gestational age (AGA) was defined as birth weight between the 10th and 90th percentile.
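The paper does not spell out the two-point formula it used. One common formulation is the average-weight method sketched below in Python (an exponential variant also exists), so treat this as a plausible reading rather than the study's exact calculation:

```python
def weight_velocity_g_per_kg_day(w1_g, wn_g, days):
    """Two-point average method: grams gained per kg of mean body weight per day."""
    mean_weight_kg = (w1_g + wn_g) / 2.0 / 1000.0
    return (wn_g - w1_g) / (days * mean_weight_kg)

# Example: 1100 g -> 1230 g over 7 days
print(round(weight_velocity_g_per_kg_day(1100, 1230, 7), 2))  # ~15.94 g/kg/day
```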
Statistical Analysis
Analyses were performed using SAS Studio version 3.71. NIR results were compared to labeled values using paired t-test analysis. Subgroup comparisons of SGA vs. AGA status and DBM intake percentage (<50% vs. >50%) were analyzed using 2-tailed 2-sample t-tests and the χ² test. P < 0.05 was considered statistically significant. A sample size of 45 patients was estimated to detect a 2.18 g/kg/day difference in weight (assuming a full enteral feeding volume of 160 ml/kg/day with DBM fortified to 24 kcal/oz and a baseline protein concentration of 0.9 g/dL, yielding a projected protein intake of 3.87 g/kg/day, 0.63 g/kg/day less than ESPGHAN recommendations) with 80% power and alpha 0.05, based on the largest randomized trial known at the time of study design describing growth in infants fed DBM [11].
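The projected intake and deficit quoted in the power calculation follow from the stated assumptions (160 ml/kg/day of milk; standard fortification of 0.9 g/dL DBM, which the Discussion notes yields about 2.42 g/dL). A minimal sketch of that arithmetic:

```python
intake_dl_per_kg_day = 160 / 100    # 160 mL/kg/day expressed in dL/kg/day
fortified_protein_g_per_dl = 2.42   # 0.9 g/dL DBM plus standard fortifier

projected = intake_dl_per_kg_day * fortified_protein_g_per_dl
print(round(projected, 2), "g/kg/day projected")                              # 3.87
print(round(4.5 - projected, 2), "g/kg/day below the upper ESPGHAN target")   # 0.63
```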
Study Infants
Of 235 infants screened, 85 met inclusion criteria (Figure 1). An additional 16 were excluded due to early discharge or modified feeding plans after completion of the standard feeding protocol. Thus, 69 had growth data available at 30 days, and 60 had measurements available at 36 weeks CGA. A summary of their characteristics can be found in Table 1. Of the 69 patients who had growth data, 65.9% were still receiving DBM as part or all of their feedings at 30 days old, and 5 additional patients were transitioned early from DBM to formula due to poor growth at 27-28 days. Further, 15.9% were SGA, and 71.0% received increased fortification, which occurred on average at day 18.5.
Donor Milk Analysis
Samples from 96 unique pools were obtained. Review of enteral intake charting and milk tracking showed 146 unique pools of DBM were actually utilized during the study period. NIR analysis found mean contents of 18.70 ± 1.75 kcal/oz, 0.91 ± 0.19 g/dL protein, 3.11 ± 0.57 g/dL fat, and 7.71 ± 0.38 g/dL carbohydrate (Table 2). Mean coefficients of variation of triplicate or duplicate analysis were 1.61% for calories, 6.81% for protein, 2.68% for fat, and 1.81% for carbohydrate. Labeled nutritional information demonstrated a mean calorie content of 21.13 ± 1.01 kcal/oz and a mean protein content of 1.02 ± 0.18 g/dL. On average, compared to labeled values, the samples had 2.43 kcal/oz less (p < 0.001) and 0.11 g/dL less protein (p < 0.001) (Table 3, Figure 2).
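For reference, the coefficient of variation reported for the replicate NIR readings is simply the standard deviation expressed as a percentage of the mean; a minimal sketch with hypothetical triplicate values:

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation of replicate measurements, in percent."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100.0

protein_triplicate = [0.89, 0.93, 0.91]  # hypothetical g/dL readings of one pool
print(round(cv_percent(protein_triplicate), 2), "%")  # ~2.2 %
```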
Growth Analysis
Mean weight velocity reached 16.58 g/kg/day by week 3, mean length velocity ranged from 0.95 to 1.03 cm/week during weeks 2-4, and mean HC velocity reached 1.01 cm/week by week 3 (Table 4). When comparing the subgroups of SGA and AGA infants, the mean velocities were statistically different for weight velocity in weeks 1 and 2 (p = 0.001, p = 0.009) (Table 4). There were no large-for-gestational-age infants. Infants whose enteral intake comprised less than 50% DBM had similar growth velocities compared to those whose enteral intake was greater than 50% DBM, with the exception of weight velocity at week 2 (p = 0.024) (Table 4). Further, 51.9% in the <50% DBM group and 83.3% in the >50% DBM group received fortification beyond 24 kcal/oz (p = 0.005). For the 60 infants who had growth measurements available at 36 weeks CGA, the Fenton z-score decreased for HC during weeks 1-2 and for both weight and length during all four weeks. From birth to 36 weeks CGA, there was a negative z-score change across all three parameters, with the greatest change seen in length (−1.01) (Table 5, Figure 3). HC z-score improved to within 0.23 of birth, and a small increase (0.1) was noted in weight z-score between week 4 and 36 weeks CGA. Olsen BMI was the only measure to have a net increase in z-score over time. Further, 11/60 infants were SGA at birth; an additional 10 AGA infants became <10% for weight by 36 weeks CGA. There appeared to be a difference between the two DBM subgroups in both weight and length, though it was not statistically significant for any measurement (Figure 3), and with the exception of the change in BMI from week 4 to 36 weeks CGA, the weekly change in z-score and also the net change from birth to 36 weeks CGA were not statistically different. After excluding the 11 SGA patients, again there was no statistically significant difference between the two DBM groups except between week 4 and 36 weeks CGA, where the length z-score continued to decrease by −0.14 in the >50% DBM group but increased by 0.12 in the <50% DBM group (p = 0.023) (Figure 4).
Discussion
NIR analysis revealed that the target-pooled DBM samples contained similar calories (18.7 vs. 18.0-18.7 kcal/oz) and protein concentrations (0.91 vs. 0.88-1.0 g/dL) compared with other recent analyses of multi-donor random-pooled DBM [18,19]. However, in those studies, samples were measured pre-pasteurization. MMBO's labeled pre-pasteurization measurements showed the calorically targeted pools contained a mean of 21.13 kcal/oz and 1.02 g/dL, reflective of their particular technique designed to mimic hind milk. As there are no dedicated regulations currently in place regarding pooling, the techniques utilized by other banks, which may include protein targeting, could result in different macronutrient ratios.
Furthermore, the measured concentrations for calories were, across the board, less than indicated on the corresponding labels (Figure 2), with one sample as low as 12.43 kcal/oz. Given that our NIR analyzer was calibrated utilizing milk and measurements provided by MMBO and that samples were collected after feeding preparations were completed for each shift, this suggests that nutrient loss likely occurred during preparation and handling. Handling from freezing and thawing of human milk has been shown to decrease caloric delivery [20], likely secondary to increased contact with plastic surfaces to which fat adheres, and the steps of feeding preparation, such as hand-homogenization and pouring versus using a transfer pipette, may also yield uneven distribution of macronutrients due to technician variation. The NIR-measured protein content was inconsistently matched with its label counterpart (Figure 2), potentially due to poor homogenization and compartmentalization. This carries implications for unequal delivery of nutrients between patients and also between feedings to individual patients, leading to unintended under- or over-nutrition. Developing consistent feeding preparation techniques to improve homogenization, minimize fat loss, and optimize nutrient delivery is an important focus for further research and quality improvement.
Growth parameters reached or approached goal velocities (15 g/kg/day for weight, 1 cm/week for length and HC) by weeks 3-4, but 71.0% of patients received additional fortification to maintain adequate growth (Tables 1 and 4). The clinical significance of the early weight velocity difference in SGA patients is unclear given the small subgroup size; it could reflect a response to metabolic programming or a larger proportion of SGA infants in the <50% DBM group, although the latter was not statistically significant. The difference in weight velocity between the two DBM intake groups at week 2 and the narrowed gap at week 3 correlate, respectively, with infants approaching full enteral volumes with little or no parenteral nutrition and with the point at which increased fortification occurred, supporting previous findings that acceptable growth velocities can be achieved on a DBM diet with appropriate fortification [14].
In addition, weekly Fenton z-scores suffered, and patients did not return to birth z-scores by 36 weeks CGA (Table 5, Figure 3). The greatest z-score change was seen in length, and the Olsen BMI z-score increased correspondingly. This suggests that monitoring z-scores in addition to growth velocities is necessary to determine whether weekly growth is adequate. Furthermore, despite the controlled caloric intake provided by target-pooled DBM, standard fortification to 24 kcal/oz alone does not provide adequate nutrition. Standard fortification of DBM increases the concentration from 0.9 g/dL to 2.42 g/dL, still below recommendations. Moreover, the switch to preterm formula at 30 days could not overcome the early growth faltering on DBM, highlighted by the 10/49 (20%) of AGA infants who developed postnatal growth failure (weight <10% at 36 weeks CGA). With studies associating poor linear growth and protein intake with worse neurodevelopmental outcomes [3,5], the persistent decreasing length z-score over time is particularly concerning. Thus, this population may benefit from earlier aggressive fortification of DBM with focused targeting of protein intake before growth faltering is demonstrated.
Though there appears to be a difference in weight and length at birth between the DBM subgroups, both groups actually had similar z-score trajectories over time. This is likely due to the increased percentage of infants who received additional fortification in the group that received >50% DBM. Colaizy et al. previously noted a net change in weight z-score from birth to discharge of −0.84 in infants who received >75% DBM [21]. In our >50% DBM subgroup, 32/35 patients received >75% DBM, and the net change in weight z-score was −0.49, an improvement possibly attributable to the target-pooling. Our net z-score changes for weight and length were also similar to findings of the DoMINO trial, the largest randomized controlled trial to date comparing DBM versus preterm formula as primary diet [13]. However, despite the improved growth potential that target-pooling may offer, the negative trends remain worrisome. Interestingly, over 95% of the study milk from the DoMINO trial was also purchased from MMBO. Providers may wish to inquire what pooling technique is utilized by the milk bank that provides their unit's donor milk, which may be different in content than the donor milk utilized in our study and the DoMINO study, thus limiting the generalizability of these findings.
Additionally, there was a gain in length z-score between week 4 and 36 weeks CGA for those who received <50% DBM and a decrease for those who received >50% DBM, though it was only statistically significant once the SGA infants were removed (Figure 4). Both of these groups transitioned to preterm formula as backup at 30 days, though many infants in the former likely continued to receive a larger percentage of maternal milk. Further investigation into the later feeding characteristics of these two cohorts and also comparison to infants who received almost exclusive maternal milk may provide additional insight.
One limitation of this study is the irregular sampling bias of DBM from leftover milk after feeding preparation, which may have affected our macronutrient analysis, but this poses new questions regarding human milk handling methods. Furthermore, while NIR human milk analyzers have been validated for precision in measuring protein and fat content, they are less accurate than mid-infrared analyzers [22,23]. A separate collaboration determined that the NIR analyzer used in this study may overestimate protein [24], suggesting that the protein content might be even lower than measured. Another limitation is the imprecision of length and head circumference measurements, and length boards had not been implemented as standard of care yet at the beginning of this study. We also sought to compare each infant's daily protein and caloric intake with weekly growth velocities. However, despite a protocol designed to identify all unique pools purchased by the NICU as shipments arrived, some shipments were missed, preventing us from capturing 50/146 (34%) of the unique pools that were utilized in these infants. Because bottles of DBM from the same pool may be dispersed among multiple patients, this unfortunately precluded us from calculating the enteral nutrient intake for the majority of the patients.
Conclusions
Target-pooling DBM to meet a caloric minimum alone does not meet recommended protein intake for VLBW infants. Infants fed calorically target-pooled DBM still demonstrate a disproportionate negative change in length z-score over time and would likely benefit from more aggressive and earlier fortification strategies that target protein as well. Whether target-pooled DBM offers improved growth compared to random-pooled DBM remains unknown. | 2019-08-16T13:04:03.692Z | 2019-08-01T00:00:00.000 | {
"year": 2019,
"sha1": "4e2b0049c45694250b06b97b542271a0e09ff730",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/11/8/1884/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13c8c6bc151efeba32087f068dd539bd43eadf16",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44155033 | pes2o/s2orc | v3-fos-license | The Canonical E2Fs Are Required for Germline Development in Arabidopsis
A number of cell fate determinations, including cell division, cell differentiation, and programmed cell death, occur intensively during plant germline development. How these cell fate determinations are regulated remains largely unclear. The transcription factor E2F is a core cell cycle regulator. Here we show that the Arabidopsis canonical E2Fs, including E2Fa, E2Fb, and E2Fc, play a redundant role in plant germline development. The e2fa e2fb e2fc (e2fabc) triple mutant is sterile, although its vegetative development appears normal. On the one hand, the e2fabc microspores undergo cell death during pollen mitosis. Microspores start to die at the bicellular stage, and by the tricellular stage the majority of the e2fabc microspores are degenerated. On the other hand, a wild type ovule often has one megaspore mother cell (MMC), whereas the majority of e2fabc ovules have two to three MMCs. The subsequent female gametogenesis of the e2fabc mutant is aborted and the vacuole is severely impaired in the embryo sac. Analysis of transmission efficiency showed that the canonical E2Fs from both the male and female gametophytes are essential for plant gametogenesis. Our study reveals that the canonical E2Fs are required for plant germline development, especially pollen mitosis and the archesporial cell (AC)-MMC transition.
INTRODUCTION
Plant germline development, which includes sporogenesis and gametogenesis, begins with the differentiation of a spore mother cell, which produces haploid gametes through meiosis and mitosis (Schmidt et al., 2015). After meiosis, plant haploid spores undergo two (for sperm) or three (for egg) rounds of mitosis to form a multicellular gametophyte. These processes involve a number of cell fate determinations, including cell division, cell differentiation and programmed cell death (PCD) (Drews and Yadegari, 2002; Berger and Twell, 2011; Daneva et al., 2016). Male gametogenesis takes place in the anther, while female gametogenesis occurs in the ovule. An anther often produces numerous microspore mother cells. A diploid microspore mother cell is divided through meiosis into a tetrad of four haploid microspores (Preuss et al., 1994; Rhee and Somerville, 1998). Subsequently, the microspore undergoes two rounds of mitosis: pollen mitosis I (PMI) and pollen mitosis II (PMII). PMI is an asymmetric mitosis producing a large vegetative cell and a smaller generative cell (Eady et al., 1995). In Arabidopsis, the generative cell undergoes a second mitosis (PMII) to give rise to two sperms, resulting in a three-celled male gametophyte (McCormick, 1993; Twell, 2011; Gomez et al., 2015). The tapetum is degenerated at the later stage of pollen development (Gomez et al., 2015). In contrast to an anther, an ovule often selects a single archesporial cell (AC) to develop into a diploid megaspore mother cell (MMC), which is divided into four haploid megaspores through meiosis (Drews and Koltunow, 2011). Three of the megaspores are degenerated. The chalazal-most megaspore undergoes mitoses and cellularization to form a seven-celled female gametophyte composed of four types of cells: egg, synergid, central, and antipodal (Schneitz, 1999; Skinner et al., 2004; Yang et al., 2010).
In mammals, the E2F signaling pathway plays a key role in cell fate determination (Polager and Ginsberg, 2008). Plants have orthologs of all the core regulators in the E2F signaling pathway, including cyclins, cyclin-dependent kinases (CDKs), CDK inhibitors (CKIs), retinoblastoma (RB), and E2Fs. Cyclins and CKIs are positive and negative regulators of CDK, respectively. RB binds E2F to inhibit its activity. CDK phosphorylates RB to release E2F. The transcription factor E2F activates genes involved in the G1-S phase transition (Polager and Ginsberg, 2009). These core regulators have been implicated in plant gametogenesis. Mutation of an A-type cyclin, CYCA1;2, leads to delayed and asynchronous cell divisions during male meiosis. Arabidopsis has only one A-type CDK, referred to as CDKA1, which is a homolog of yeast CDC2. In the cdka1 mutant, female gametogenesis is not affected, whereas male gametogenesis is significantly disrupted. As a result of the failure of PMII, a cdka1 mature pollen produces only a single sperm cell (Nowack et al., 2006). Arabidopsis also has a single copy of the RB gene, referred to as RB-RELATED 1 (RBR1) (Ebel et al., 2004). RBR1 is involved in both male and female gametogenesis. In the rbr1/RBR1 heterozygous anther, more than 40% of pollen contain two vegetative nuclei as a result of supernumerary mitosis. The rbr1 microspores undergo cell death after the unicellular stage (Johnston et al., 2008). Meanwhile, the rbr1 megaspores have more than three nuclear mitotic divisions, resulting in supernumerary nuclei (up to 15) (Ebel et al., 2004; Zhao et al., 2017). In terms of the mitotic division, the phenotype of the cdka1 mutant is opposite to that of the rbr1 mutant, as the cdka1 mutant undergoes hypoproliferation, whereas the rbr1 mutant undergoes hyperproliferation. Consistently, the defects of the rbr1 mutant are suppressed by the cdka1 mutant and vice versa (Chen et al., 2009; Nowack et al., 2012). The Arabidopsis E2F family consists of eight genes. In cell proliferation, it has been found that the canonical Arabidopsis E2Fs play an antagonistic role, as E2Fa and E2Fb are positive regulators, whereas E2Fc is a negative regulator (del Pozo et al., 2002, 2006; Vandepoele et al., 2002; Magyar et al., 2005). Recently, we discovered that these E2Fs play a redundant role in plant fertility, as the e2fa e2fb e2fc triple mutant (referred to as e2fabc) was sterile while their single and double mutants were fertile (Wang et al., 2014; Gu et al., 2016; Wang, 2017). These data suggest that the CDK-RB-E2F core cell cycle signaling pathway plays an important role in cell fate determination during plant germline development. However, the underlying mechanism of the regulation remains unclear. In this study, we further characterized the role of the E2Fs in plant germline development to better understand the regulation of these processes.
Plant Material and Growth Conditions
Arabidopsis plants used in this study are in the Columbia (Col-0) background. Mutants of e2fa (GK-348E09), e2fb (SALK_103138), and e2fc (GK-718E12) are as described (Wang et al., 2014). Plants were grown in a growth chamber at 22 °C under a 16-h light/8-h dark photoperiod.
Complementation of e2fabc Mutant
The E2F genes, including E2Fa (AT2G36010), E2Fb (AT5G22220) and E2Fc (AT1G47870), for complementation of the e2fabc mutant were amplified by PCR and integrated into the SalI site of the binary vector pCAMBIA1300 using the pEASY Uni-Seamless Cloning and Assembly Kit (TransGen Biotech, Beijing, China) to generate pCAMBIA1300-E2Fs. The primers used for construction of pCAMBIA1300-E2Fs are listed in Supplementary Table S1.
Analysis of E2F Expression
Reporter constructs were used for analyzing the expression of the E2F genes. The construct of pE2F:E2F-VENUS was a translational fusion of E2F to VENUS, driven by its native promoter (∼2 kb). The NOS terminator and VENUS were amplified by PCR and consecutively inserted into the PstI-HindIII site and the SalI-PstI site of pCAMBIA1300 to generate pCAMBIA1300-VENUS. Subsequently, the genomic DNA sequence of an E2F gene was amplified by PCR and integrated into the SalI site of pCAMBIA1300-VENUS using the pEASY Uni-Seamless Cloning and Assembly Kit (TransGen Biotech, Beijing, China) to generate pE2F:E2F-VENUS. The primers used for construction of pE2F:E2F-VENUS are listed in Supplementary Table S1. To visualize the expression pattern of a reporter, the fluorescence was excited at 488 nm and collected with a 515∼530 nm bandpass filter using a Zeiss LSM 5 Pascal Confocal Laser Scanning Microscope (Germany).
Alexander Staining
The Alexander staining was performed as described (Alexander, 1969). Briefly, anthers were stained with the Alexander solution for 30 min and images were taken using an Olympus BX51 digital microscope (Japan).
Semi-Thin Section
Floral buds were fixed and embedded in Spurr's epoxy resin as described (Zhang et al., 2007). The embedded materials were sectioned to 1-µm thickness using an RMC Powertome XL Ultramicrotome (Tucson, AZ, United States). Semi-thin sections of anthers were stained with toluidine blue and photographed using an Olympus BX51 digital microscope (Japan).
Analysis of Female Gametophyte Development
The procedure used to analyze female gametophyte development was carried out as described (Christensen et al., 1997). Briefly, pistils were fixed in the fixative solution (4% glutaraldehyde and 12.5 mM cacodylic acid, pH 6.9) for 4 h. The tissues were dehydrated in a series of increasing concentrations of ethanol (10, 20, 40, 60, 80, and 95%, each for 10 min) and kept in 95% ethanol overnight. The tissues were then washed with 100% ethanol twice, each for 10 min. After dehydration, the tissues were cleared in benzyl benzoate/benzyl alcohol (2:1) solution for 20 min. Ovules were dissected, mounted in immersion oil, and observed at an excitation wavelength of 488 nm and an emission wavelength of 515∼530 nm using a Carl Zeiss LSM 5 Pascal Confocal Laser Scanning Microscope (Germany).
Quantitative PCR (qPCR)
The qPCR was performed as described (Ma and Wang, 2016). Briefly, Arabidopsis RNA was extracted using TRIzol Reagent (Invitrogen) and measured with a NanoDrop 2000 Spectrophotometer (Thermo Fisher). Five µg of RNA was treated with DNase (Ambion TURBO DNA-free Kit, Thermo Fisher). Two µg of DNase-treated RNA was used to synthesize cDNA using the TransScript Fly First-strand cDNA Synthesis SuperMix (TransGen Biotech, Beijing, China). The synthesized cDNA, diluted 5 times, was used as template. qPCR was performed using the SYBR Green Realtime PCR Master Mix (Toyobo, Japan) in a Mastercycler ep realplex (Eppendorf). All genes were normalized to TUBULIN BETA CHAIN 2 (TUB2). The primers used for qPCR are listed in Supplementary Table S1.
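The relative-quantification arithmetic behind normalizing qPCR targets to a reference gene such as TUB2 is compact enough to spell out. Below is a minimal sketch of the standard 2^(−ΔΔCt) method; the Ct values and function name are illustrative assumptions, not data or code from this study.

```python
# Minimal sketch of the 2^(-ddCt) relative-quantification method commonly used
# to normalize qPCR data to a reference gene (here, conceptually, TUB2).
# All Ct values below are invented for illustration.

def delta_delta_ct(ct_target_sample, ct_ref_sample,
                   ct_target_control, ct_ref_control):
    """Fold change of the target gene in 'sample' relative to 'control'."""
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                # compare to control condition
    return 2 ** (-dd_ct)                              # assumes ~100% primer efficiency

# A target whose Ct drops by one cycle relative to the reference is ~2-fold up.
fold = delta_delta_ct(ct_target_sample=24.0, ct_ref_sample=18.0,
                      ct_target_control=25.0, ct_ref_control=18.0)
print(f"fold change vs control: {fold:.2f}")  # -> 2.00
```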
RESULTS
The Arabidopsis E2Fa, E2Fb, and E2Fc Are Canonical E2F Proteins

There are three categories of E2Fs in both humans and Arabidopsis based on their protein domain structures (Figure 1A) (Vandepoele et al., 2002; Attwooll et al., 2004). The most conserved domain in E2F proteins is the DNA-binding domain (DBD), especially the core DNA-binding motif "RRxYD", which binds to the palindromic CGCGCG sequence (Figure 1B) (Zheng et al., 1999). DBDs are classified into DBD1 and DBD2. We found that two amino acids, glutamate at position 261 and asparagine at position 272 in the Arabidopsis E2Fa protein, were highly conserved in DBD1 (Figure 1B and Supplementary Figure S1). The first category, referred to as E2F, comprises the canonical E2Fs possessing a DBD1 and a dimerization domain (DD). In addition, these E2Fs have an RB-binding domain or a polycomb-group (PcG)-binding domain. The second category, referred to as DP-E2F-like 1 (DEL1), contains two DBDs: DBD1 and DBD2. The third category, referred to as dimerization partner (DP), has a DBD2 and a DD. Structural analysis demonstrated that, among E2F proteins, there was a preference for heterodimers over homodimers in DNA binding (Zheng et al., 1999). Therefore, E2F and DP form a heterodimer through the DD to bind DNA, whereas DELs, possessing both DBD1 and DBD2, bind DNA by themselves (Figure 1) (Logan et al., 2004; Lammens et al., 2009). There are eleven members of the E2F family in humans (Figure 1). Based on their transcriptional properties, human E2F proteins are classified into activators, E2F1 through E2F3, and repressors, E2F4 through E2F8 (DeGregori and Johnson, 2006). Meanwhile, there are eight members of the E2F family in Arabidopsis (Vandepoele et al., 2002). The first category includes E2Fa, E2Fb, and E2Fc, which are the canonical E2Fs. The second category consists of DEL1, DEL2, and DEL3. The third category includes DPa and DPb (Figure 1). Both DBDs and the three categories of E2Fs are highly conserved in plants across eudicots, monocots, lycopodiophyta, bryophyta and algae (Figure
The Canonical E2Fs Play an Essential Role in Plant Fertility
Phylogenetic analysis showed that the sequences of the three canonical Arabidopsis E2F proteins are similar to each other (Figure 1C). Consistently, our genetic analysis revealed that these canonical E2Fs function redundantly to activate plant effector-triggered cell death and immunity (Wang et al., 2014). In addition, they play a redundant role in plant fertility, as the e2fabc triple mutant was sterile, whereas single and double e2f mutants were all fertile (Figure 2 and Supplementary Figure S2) (Wang et al., 2014). The e2f mutant lines used for generating the e2fabc triple mutant are likely knockout lines, as the insertion sites disrupt their DDs (Supplementary Figure S3). Although the reproductive development was severely compromised, the vegetative development of the e2fabc mutant appeared normal, although it germinated later than wild type plants (Supplementary Figure S4). We introduced the E2Fa gene, as well as E2Fb and E2Fc, into the e2fabc triple mutant and found that this fully restored the fertility of the mutant, confirming that mutations of the E2F genes are responsible for the sterility of the e2fabc mutant (Figure 2, Supplementary Figure S5, and Supplementary Table S4).
The Canonical E2Fs Are Essential for Pollen Mitosis During Male Gametogenesis
To explore the role of the canonical E2Fs in gametophytic control, we first examined male gametogenesis, which occurs in the anther. The reporters of pE2F:E2F-VENUS showed that the E2Fs were expressed in microspores, with a peak at the bicellular stage (Supplementary Figure S6). This expression pattern suggests that E2Fs play a role in male gametogenesis. The Alexander staining showed that in an e2fabc anther some pollen were viable, whereas the majority (81%, n = 600) of pollen were aborted (Figure 3A). The Arabidopsis anther development is divided into 14 stages (Sanders et al., 1999). The e2fabc mutant anther was not distinguishable from the wild type anther until stage 10, when the degeneration of the tapetum initiated. The majority of the e2fabc microspores underwent cell death at stage 11, when pollen mitosis I (PMI) initiated (Figure 3B). Consistently, unicellular microspores were uniformly formed in both wild type and e2fabc anthers. The degeneration of e2fabc microspores started from the bicellular stage. At the tricellular stage, 75.6% of microspores were degenerated, whereas 13.4% of microspores developed into tricellular microspores in the e2fabc mutant (Figures 3C,D). These data suggest that the canonical E2Fs are required for microspore development during PMI progression.

FIGURE 1 | Arabidopsis E2Fa, E2Fb, and E2Fc are the canonical E2F proteins. (A) Schematic representation of domains of Arabidopsis and human E2F family proteins, which are classified into three categories: E2F, DEL, and DP. DBD1 and DBD2, DNA-binding domain 1 and 2; DD, dimerization domain; RB, RB-binding domain; PcG, polycomb group protein-binding domain. At, Arabidopsis thaliana; Hs, Homo sapiens; aa, amino acid residues. (B) Alignment of the DBD of the E2F proteins using ClustalX2 (http://www.clustal.org/). Red dots indicate the core DNA recognition motif RRxYD. Arrows indicate the two amino acids, glutamate (E) and asparagine (N), in DBD1 that are shared between E2F and DEL proteins. Arabidopsis DEL1, DEL2 and DEL3 and human E2F7 and E2F8 possess two DBDs: DEL1/2/3-1 and E2F7/8-1, DBD1; DEL1/2/3-2 and E2F7/8-2, DBD2. (C) Phylogenetic tree of plant and human E2F family proteins constructed with Phylogeny.fr (http://www.phylogeny.fr/). The sequences of the E2F proteins in FASTA format were pasted and the software was run in "One Click" mode to generate the phylogenetic tree. The number at each branch point represents the bootstrap value.
The Canonical E2Fs Are Essential for the Transition From Archesporial Cell to Megaspore Mother Cell During Female Sporogenesis
Female sporogenesis takes place in the ovule. The AC is derived from a sub-epidermal somatic cell at the distal end of the ovule primordium. Usually, a single AC is selected to develop into a large MMC at the female gametophyte 0 (FG0) stage. Compared to the surrounding sporophytic cells, the MMC has a denser cytoplasm and a larger nucleus (Drews and Koltunow, 2011). Intriguingly, multiple MMCs (up to 5) were formed in e2fabc mutant ovules, with the majority of e2fabc ovules producing 2-3 MMCs (Figure 4A). The subsequent female gametogenesis of the e2fabc mutant was aborted (93.8%, n = 97) and the vacuole was severely impaired in the embryo sac (Figure 4B and Supplementary Figure S7). The cell identities were confirmed by the MMC marker pKNU:KNU-VENUS (Figure 4C) (Sun et al., 2014). The reporters of pE2F:E2F-VENUS showed that the canonical E2Fs were expressed throughout the ovule at the early development stages (before the FG4 stage) (Supplementary Figure S8). Although about 5% of wild type ovules have two MMCs, it has never been observed that two female gametophytes are formed in one ovule in Arabidopsis, suggesting that the survival of one functional MMC per ovule is a strict rule required for the subsequent female gametophyte development (Drews and Koltunow, 2011). Occasionally, we observed a normal seven-celled female gametophyte formed in the e2fabc mutants (6.2% were normal at the FG7 stage, n = 97) (Supplementary Figure S7), suggesting that the development of the e2fabc ovule is delayed. This result is consistent with the observation that the e2fabc mutant, especially during the late flowering stage, can set some seeds (about 20 seeds per plant, n = 50) (Supplementary Figure S9). These data suggest that the canonical E2Fs are required for the AC-MMC transition during female sporogenesis.
The Canonical E2Fs Are Required for Gametophytic Control of Plant Gametogenesis and Suppression of Cell Cycle-Related Gene Expression
The development of the plant gametophyte is controlled by both gametophytic and sporophytic genes. To investigate the role of the canonical E2Fs in plant gametophyte development, we analyzed the genetic transmission via the gametophyte through reciprocal crosses between wild type and e2fa+/− e2fb−/− e2fc−/− plants. The expected transmission efficiency for normal gametes is 100%. As shown in Table 1, the transmission efficiency of the e2fa e2fb e2fc triple mutant allele was 20.5% and 23.6% via the male and female gametophytes, respectively, both of which were dramatically reduced compared to the expected value, suggesting that the canonical E2Fs play a critical role in both male and female gametogenesis. Consistently, we observed that the majority of e2fa+/− e2fb−/− e2fc−/− plants produced abnormal pollen and ovules (Supplementary Table S5). The deficiency of gametophyte development varied dramatically, as the ratio of abnormal ovules ranged from 0 to 90%. The MMC marker pKNU:KNU-VENUS showed that 20∼75% of ovules contained multiple MMCs in e2fa+/− e2fb−/− e2fc−/− plants. We further checked the e2fb e2fc double mutants and found that their gametophyte development was deficient to some extent, consistent with more than 50% of female gametophytes being defective in an e2fa+/− e2fb−/− e2fc−/− plant (Supplementary Table S6). Similarly, genetic analysis of tetraploid plants (rbr1/rbr1/rbr1/RBR1, triplex for rbr1) found that regulation of the sporophytic development by RBR1 depended on the copy number of RBR1 (Johnston et al., 2010).

[TABLE 1 | Test of transmission efficiency through reciprocal crosses between wild type and e2fa+/− e2fb−/− e2fc−/− plants. Only one data row survives extraction (female-side cross: 29 e2fa+/− e2fb+/− e2fc+/− progenies vs. 123 e2fa+/+ e2fb+/− e2fc+/− progenies, 152 in total, TE = 23.6%, p < 0.01). Footnotes: (a) ♀×♂, female × male; (b) WT, wild type plant; (c) TE, transmission efficiency; TE = number of e2fa+/− e2fb+/− e2fc+/− progenies / number of e2fa+/+ e2fb+/− e2fc+/− progenies × 100%; the expected TE for normal gametes is 100%; (d) the p-value is calculated by the χ² test based on the expected TE of a 1:1 segregation ratio, χ² = (observed value − expected value)²/expected value.]
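The transmission-efficiency and χ² arithmetic defined in the Table 1 footnotes can be reproduced directly. The sketch below uses the surviving data row (29 triple-heterozygous vs. 123 wild-type-allele progeny from the female-side cross) and, as an assumption consistent with a standard goodness-of-fit test, sums the footnote's χ² term over both progeny classes under the stated 1:1 expectation.

```python
# Transmission efficiency (TE) and chi-square test as defined in the Table 1
# footnotes: TE = triple-het progeny / wild-type-allele progeny * 100, tested
# against an expected 1:1 segregation. Counts are from the surviving table row.

def transmission_efficiency(n_mutant_allele, n_wildtype_allele):
    return 100.0 * n_mutant_allele / n_wildtype_allele

def chi_square_1to1(n_mutant_allele, n_wildtype_allele):
    expected = (n_mutant_allele + n_wildtype_allele) / 2.0
    return sum((obs - expected) ** 2 / expected
               for obs in (n_mutant_allele, n_wildtype_allele))

te = transmission_efficiency(29, 123)   # -> 23.6%, matching the table
chi2 = chi_square_1to1(29, 123)         # 1 degree of freedom; p < 0.01
print(f"TE = {te:.1f}%, chi-square = {chi2:.1f}")
```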
These data indicate that gametogenesis is controlled by the canonical E2Fs from both the male and female gametophytes in a dosage-dependent manner. E2Fs function as transcription factors to activate the expression of cell cycle regulators. To understand how E2Fs regulate plant germline development, we examined the expression of cell cycle-related genes. Genome-wide transcriptional profiling analysis revealed that several cell cycle regulators, including RBR1, ORIGIN OF REPLICATION COMPLEX 1B (ORC1B), MINICHROMOSOME MAINTENANCE 8 (MCM8), CYCLIN-DEPENDENT KINASE B1;1 (CDKB1;1) and CELL DIVISION CONTROL 6 (CDC6), were upregulated by co-overexpression of E2Fa with DPa or down-regulated by a dominant-negative truncated DP gene (Ramirez-Parra et al., 2003; Vandepoele et al., 2005; Naouar et al., 2009). As shown in Supplementary Table S7, qPCR analysis was carried out to determine the influence of the e2fabc mutations on the expression of these genes in anthers (stage 10 to stage 12) and ovules (FG0 to FG7). TUBULIN BETA CHAIN 2 (TUB2) was used as an internal control. In addition, we first examined two other internal controls, TUBULIN BETA 8 (TUB8) and UBIQUITIN-CONJUGATING ENZYME 21 (UBC21), and validated that the expression of these internal controls was not altered by mutations of the E2F genes in anthers and ovules. Consistent with the phenotype, the expression of the male gametophyte-specific MICROSPORE-SPECIFIC PROMOTER 2 (MSP2) and the female gametophyte-specific DOWNREGULATED IN DIF1 33 (DD33) was significantly downregulated in anthers and ovules, respectively, as both genes were reduced about 10-fold in the e2fabc mutant (Honys et al., 2006; Steffen et al., 2007). Surprisingly, our results showed that the expression of all five E2F-target genes was upregulated in both anthers and ovules of the e2fabc mutant, suggesting that the canonical Arabidopsis E2Fs play a negative role in the transcription of these genes.
DISCUSSION
In mammals, the G1-S phase transition is controlled by the CDK-RB-E2F core cell cycle signaling pathway (Polager and Ginsberg, 2009). In the Arabidopsis genome, there are at least 50 cyclins, 12 CDKs, 18 CKIs and 8 E2Fs (Vandepoele et al., 2002; Wang G. et al., 2004). In contrast, the Arabidopsis genome bears only a single copy each of the CDKA1 and RB genes. All types of these cell cycle regulators have been implicated in plant gametogenesis (Twell, 2011; Zhao et al., 2012). During cell cycle progression, CDK and E2F function as positive regulators, whereas RB acts as a negative regulator. The phenotype of the e2fabc mutant appears earlier than that of the cdka1 and rbr1 mutants (Figures 3, 4) (Ebel et al., 2004; Nowack et al., 2006). The redundancy among multiple genes (cyclins, CKIs, CDKs and E2Fs) and the lethality of the single-copy genes (RB and CDKA1) have long hampered genetic study of their functions. Fortunately, the e2fabc mutant is severely but not completely sterile, especially during the late flowering stage. The residual fertility and the normal vegetative development of the e2fabc triple mutant provide us with an opportunity for in-depth study of the CDK-RB-E2F signaling pathway in plant germline development. Plant E2Fs control the cell cycle as their mammalian counterparts do (Vandepoele et al., 2005; Sozzani et al., 2006; Cheng et al., 2013; Liu et al., 2016). The canonical E2Fs had been found to play distinct roles during cell cycle progression based on ectopic expression studies. Co-overexpression of E2Fa with DPa leads to activation of both mitosis and endoreduplication, whereas E2Fb and E2Fc act antagonistically, as both co-overexpression of E2Fb with DPa and down-regulation of E2Fc by RNA interference induced mitosis but reduced endoreduplication (Magyar et al., 2005; del Pozo et al., 2006). In addition, E2Fb is antagonistic to E2Fc in the transcription of the DEL1 gene (Berckmans et al., 2011). Our data demonstrated that the three Arabidopsis canonical E2Fs play a redundant role in plant fertility (Figures 1-3), in addition to plant effector-triggered cell death and immunity (Wang et al., 2014).
E2F acts as a transcription factor that executes the CDK-RB-E2F signaling pathway's effect on the expression of target genes. Our data showed that the canonical E2Fs are required for plant germline development, especially pollen mitosis and the AC-MMC transition (Figures 3, 4). The underlying mechanism of how PMI is regulated remains unknown. It seems that cytokinesis plays a crucial role in the asymmetric cell division of PMI, as most of the identified mutants with PMI defects are related to microtubules (Twell et al., 2002; Pastuglia et al., 2006). Plant RB, E2F and MYB3R are proposed to be members of a DREAM complex, which plays a critical role in maintaining cell quiescence (Magyar et al., 2016). Arabidopsis MYB3Rs were found to regulate cytokinesis through activation of KNOLLE transcription (Haga et al., 2007), suggesting that E2F could be involved in cytokinesis through its partner MYB3R. In the meantime, the underlying mechanism of how the MMC develops also remains largely unknown. MAC1 (MULTIPLE ARCHESPORIAL CELLS 1) encodes a leucine-rich repeat containing receptor-like kinase (LRR-RLK). MSP1 (MULTIPLE SPOROCYTE 1) encodes a putative ligand of MAC1. MAC1 and MSP1 control the transition from somatic to germline fate, as mutation of maize MAC1 and rice MSP1 resulted in multiple ACs (Sheridan et al., 1996; Nonomura et al., 2003). Previously, a combined analysis of laser-assisted microdissection and microarray revealed that MNEME (MEM) was preferentially expressed in the MMC. MEM encodes an ATP-dependent RNA helicase. Mutation of MEM leads to multiple MMCs. Like the MMCs of the e2fabc mutant, those of the mem mutant are also aborted after the FG0 stage. The male gametophyte development of the mem mutant is not as dramatically affected as that of the e2fabc mutant. Intriguingly, the microarray data of microdissected cells show that all of the canonical E2Fs are preferentially expressed in MMCs as compared to ovules (Supplementary Figure S10) (Schmidt et al., 2011), which is in good agreement with our data that the canonical E2Fs play a crucial role in MMC initiation. The stem cell regulator WUSCHEL (WUS) plays a role in the AC-MMC transition. RBR1, the repressor of E2Fs, was found to control the AC-MMC transition through repression of WUS expression (Lieber et al., 2011; Zhao et al., 2017). It has also been shown that MMC initiation is controlled by a clade of Argonaute (AGO) genes including AGO4, AGO6, AGO8 and AGO9 in Arabidopsis. Mutations of these genes give rise to multiple MMCs per ovule. Among the multiple MMCs, only one MMC is functional and further develops into a gametophyte (Olmedo-Monfil et al., 2010; Hernandez-Lagana et al., 2016). In contrast to the e2fabc mutants, these ago mutants are fertile, suggesting that the canonical E2Fs are required not only for the AC-MMC transition at the FG0 stage but also for gametophyte development after the FG0 stage.
AGO protein is an RNA slicer that functions in epigenetic regulation through the RNA-directed DNA methylation (RdDM) signaling pathway. It interacts with transcripts produced by Polymerase V (Pol V) to recruit de novo DNA methyltransferases such as DOMAINS REARRANGED METHYLTRANSFERASE 2 (DRM2), histone methyltransferases and chromatin remodelers to silence a gene (Law and Jacobsen, 2010). It has been observed that the AC-MMC transition in Arabidopsis is accompanied by large-scale chromatin reprogramming (She et al., 2013), suggesting that control of the AC-MMC transition by AGOs may be attributed to epigenetic regulation. E2F forms a complex with RB, which represses E2F activity through either physical interaction (masking the activation domain) or epigenetic regulation. RB recruits chromatin-remodeling factors and chromatin modifiers such as the SWITCH/SUCROSE NON-FERMENTABLE (SWI-SNF) complex, HISTONE DEACETYLASES (HDACs), and SET-DOMAIN-CONTAINING HISTONE METHYLTRANSFERASES (HMTases) to epigenetically repress the E2F-target genes (Robertson et al., 2000; Zhang et al., 2000). Our data showed that the repression of E2F-target genes, which may be controlled by the RB-E2F complex, was released in the e2fabc mutant (Supplementary Table S7). This is consistent with the fact that mutations of both the E2Fs and their repressor RBR1 in Arabidopsis lead to multiple MMCs (Figure 4; Zhao et al., 2017). These lines of evidence strongly support that epigenetic regulation plays a critical role in the AC-MMC transition. The variation of female gametophyte deficiency in the progeny of an e2fa+/− e2fb−/− e2fc−/− plant may result from a combined effect of quantitative (E2F copy number) and epigenetic factors. | 2018-05-15T13:09:39.008Z | 2018-05-15T00:00:00.000 | {
"year": 2018,
"sha1": "a20aa55fe0690da8dbe1c0959947a51c875e4a6c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2018.00638/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a20aa55fe0690da8dbe1c0959947a51c875e4a6c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
251918282 | pes2o/s2orc | v3-fos-license | Perioperative management of the thyrotoxic patients: A systematic review
Background Thyrotoxicosis is a clinical syndrome produced by a multitude of disorders. It is a serious medical condition that, if left untreated, can lead to a fatal illness. This review of recent evidence gives additional input for the perioperative management of thyrotoxic patients. Methods The literature was searched with Boolean operators in the form of thyrotoxicosis AND anesthesia, antithyroid medications AND perioperative optimization AND beta blockers OR calcium channel blockers in electronic database sources such as the Cochrane Library, PubMed, and Google Scholar. This review was carried out in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 statement. Conclusions and recommendations: Before surgery and anesthesia, manifestations of thyrotoxicosis, including palpitation and irritability, should be ruled out.
Background
Thyrotoxicosis is frequently confused with hyperthyroidism; however, the two are not synonymous. Hyperthyroidism is defined as increased synthesis or secretion of thyroid hormone by the thyroid gland, whereas thyrotoxicosis is a group of signs and symptoms induced by inappropriate activity of thyroid hormone in the tissues [1].
Thyrotoxicosis is a clinical syndrome produced by a multitude of disorders that cause an excess of thyroid hormone in the tissues or in the thyroid gland [2]. It is a serious clinical problem that can be caused by a variety of conditions, including toxic multinodular goiter, toxic adenoma, gestational trophoblastic disease, drug-induced hyperthyroidism, and Graves' disease; genetic, environmental, and endogenous factors may all play a role in the development of thyrotoxicosis [3].
Hyperthyroidism is a prevalent clinical condition that raises the risk of complications and necessitates surgery, with a cumulative incidence of 0.2-1.3% in iodine-rich areas; the prevalence may rise in iodine-poor areas. In addition, a nationwide survey in the United States found a total frequency of 1.3%, while a European study found an average prevalence of 0.75% [4]. In Africa, the prevalence of endemic goiter ranges from 1% to 90%, while the incidence of hyperthyroidism ranges from 13% to 43.7%; in Ethiopia, the prevalence of endemic goiter was 39.9%, with thyrotoxicosis being the most prevalent form at 43.7% [5]. A study conducted at the University of Gondar Comprehensive Specialized Hospital found that 14.6% of screened patients developed hyperthyroidism [6].
If left untreated, thyrotoxicosis can produce a variety of symptoms such as tachycardia, tremor, palmar sweating, eye problems, irritability, altered behavior, heat intolerance, fatigability, palpitation, increased appetite, and weight loss [7]. It also increases the complication rate, including atrial fibrillation, ventricular dysfunction, and heart failure, and hastens patient morbidity and mortality [8]. In uncontrolled patients, thyrotoxicosis may also progress to a severe form called thyroid storm, resulting in inpatient mortality, increased hospital stay, need for ventilation, and cardiovascular as well as central nervous system complications [9]. Patients with thyrotoxicosis may require surgery and anesthesia, and they face additional risks and complications in the perioperative period as a result of the increased thyroid hormone, which affects every body system; this indicates that patients with suspected thyrotoxicosis require adequate perioperative preparation to mitigate adverse effects and improve patient outcomes [10].
Rationale
Thyrotoxicosis is a common finding during preoperative assessment. Preoperative optimization of thyrotoxic patients is crucial for overcoming the difficulties faced during the perioperative period; it also prevents unnecessary postponement of surgery, which directly adds to the economic, social, and psychological burden on patients and their families. In addition, inadequate perioperative optimization of thyrotoxicosis results in unwanted complications, longer hospitalization, and a negative impact on patients, families, and healthcare delivery. There is still debate about the perioperative optimization of thyrotoxic patients undergoing surgery and anesthesia, so this review of recent evidence gives additional input for the perioperative management of thyrotoxic patients, with the aim of achieving a uniform level of care for uncontrolled thyrotoxicosis.
Methods
The literature was searched with Boolean operators in the form of thyrotoxicosis AND anesthesia, antithyroid medications AND perioperative optimization AND beta blockers OR calcium channel blockers in electronic database sources such as the Cochrane Library, PubMed, and Google Scholar. The literature was extracted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) format (Fig. 1). Literature on thyrotoxic patients published in English within the last ten years was included in this document. Duplicates were deleted from all materials, which were entered into EndNote X8 software; finally, 35 articles were incorporated into the final output. Following a thorough literature review, the recommendations below were derived from the degree and quality of evidence using good clinical practice (GCP), World Health Organization (WHO), 2011 levels: 1a, meta-analysis, evidence-based guideline, and systematic review of randomized controlled trials (RCTs); 1b, randomized controlled trial (RCT); 2a, systematic review of cohort and case-control studies; and 3a, case reports and case series (Table 1). The evidence-based summary was created by examining the risks and benefits, cost, and available resources for thyrotoxicosis management and optimization. This review was carried out in accordance with the PRISMA 2020 statement [11]. This review was also registered in the Research Registry with the identifying number reviewregistry1408 (https://www.researchregistry.com/browse-the-registry#registryofsystematicreviewsmeta-analyses/). We evaluated the compliance of our systematic review using the AMSTAR 2 criteria, and it fell into the moderate-quality category [12].
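For reproducibility, the Boolean strings above can be recorded verbatim. The sketch below simply stores and prints them as plain text, as they might be pasted into a database search box; the exact field tags or syntax each database expects are not specified in the review, so the plain-text form is an assumption.

```python
# Plain-text record of the Boolean search strings described in Methods, as they
# might be entered into PubMed, the Cochrane Library, or Google Scholar.
# Database-specific field tags and syntax are not given in the review.

SEARCH_QUERIES = [
    "thyrotoxicosis AND anesthesia",
    "antithyroid medications AND perioperative optimization "
    "AND (beta blockers OR calcium channel blockers)",
]

for query in SEARCH_QUERIES:
    print(query)
```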
Areas of controversies
There are numerous debates over how to manage and optimize thyrotoxic patients. Thyrotoxicosis is caused by a combination of factors, and the optimal duration of treatment is debated: an RCT found that continuing antithyroid drugs (ATD) for 60-120 months is more effective than 12-18 months [12]. Another systematic review and meta-analysis of the effects of long-term ATD treatment (more than 24 months) found that using ATD for such a long time is effective and safe [13]. According to a nationwide survey conducted in Italy in 2014 on the length of ATD treatment, the majority of patients in clinical practice take the medicine for 12-24 months to achieve euthyroid status prior to surgery [13].
According to a 2018 European Thyroid Association guideline, ATD is a therapeutic option for adult patients; if relapse occurs even after ATD treatment is completed, revising the treatment regimen or changing the type of drug is a good option, and if the above treatment modalities do not work, definitive therapy is the mainstay treatment of choice [14].
The length of ATD treatment is determined by the patient's condition and the normalization of thyroid function tests. According to a recommendation published by the American Thyroid Association in 2016, the drug should be continued for 12-18 months; if the patient is pregnant, the drug should be stopped and replaced with propylthiouracil (PTU), and if the patient has a recurrence, ATD must be continued for another 18 months [15]. A randomized controlled trial on amiodarone-induced thyrotoxicosis found that amiodarone should be stopped to render the patient euthyroid, owing to the increased severity and the longer time it takes to become euthyroid when the drug is continued [16].
The preferred treatment option for amiodarone-induced thyrotoxicosis is glucocorticoid, according to a European Thyroid Association recommendation published in 2018. The decision to discontinue amiodarone is based on a risk-benefit analysis and the severity of cardiac dysfunction in the patient [17].
Discussion
A study on preoperative evaluation in patients with subclinical hyperthyroidism found that, when the patient has signs and symptoms of thyrotoxicosis, a full preoperative evaluation of each organ system is required, including laboratory and imaging tests such as thyroid function tests, complete blood counts, and ECG (2a) [15].
Frans Brandt and colleagues conducted a review and meta-analysis of case-control and cohort studies in Denmark on the association between overt hyperthyroidism and mortality, which found that thyrotoxicosis has devastating complications, increasing the patient's mortality rate by 20% compared to euthyroid patients (1a) [16].
Yang LB and colleagues conducted a meta-analysis of cohort studies to determine whether subclinical hyperthyroidism increases the risk of cardiovascular complications, mortality, and morbidity, and found that patients with subclinical hyperthyroidism have a 19% risk of cardiovascular disease, a 52% risk of cardiovascular mortality, and a 25% risk of cardiovascular morbidity (1a) [18].
A systematic review and meta-analysis on the length of ATD treatment found that extended administration of ATD for more than 24 months is associated with lower rates of relapse and complication compared with shorter durations of ATD use [19] (1a).
A retrospective cohort study of thyrotoxic patients at the University of Gondar Comprehensive and Specialized Hospital found that thyroid function tests (TFT) do not fully normalize in patients receiving ATD for a short period, but normalize as the duration of ATD use increases; accordingly, patients who took ATD for more than one year had better TFT normalization than those who took it for less than one year [20] (2a). Jackie Gilbert conducted a review on the optimization of high-risk patients, including those of increasing age, male sex, and underlying cardiovascular illness, which found that atrial fibrillation is a significant risk factor for increased mortality; in order to avoid a crisis, medications such as propranolol and atenolol are used, with calcium channel blockers such as diltiazem and verapamil as alternatives if beta blockers are not tolerated (2a) [21].
A review by Carina P. Himes on the optimization of thyrotoxic patients recommended that, if the surgery is elective, the patient should be optimized until euthyroid, but if it is an emergency, intravenous beta blockers, corticosteroids, oral ATD, and antihypertensive medication should be on hand (2a) [22].
A review by Maguy Chiha and colleagues in the United States found that a mortality of around 10% was documented in thyroid storm, which is strongly linked to uncontrolled hyperthyroidism requiring surgery; to reduce mortality and morbidity, early diagnosis and early treatment before surgery are necessary (2a) [23].
A study done in Denmark comparing treated and untreated hyperthyroid patients states that well-controlled hyperthyroidism is associated with a significant reduction in mortality compared with untreated disease (2a) [24].
Rodolfo J Galindo and colleagues found that patients who develop thyroid storm have a 12-fold increase in mortality compared to thyrotoxicosis without thyroid storm, showing that patients with thyrotoxicosis need to be treated to avoid life-threatening complications (2a) [25].
Claire L Wood et al. conducted a study comparing thionamide dose titration with a block-and-replace regimen in patients identified and treated for thyrotoxicosis and found no difference in biochemical stability between block-and-replace and dose titration (1c) [26].
Eskes SA et al. conducted a multicenter RCT on the continuation of amiodarone in amiodarone-induced thyrotoxicosis, in which the treatment modalities were divided into three groups: prednisone, sodium perchlorate, and perchlorate. The results suggest that euthyroidism can be achieved even if amiodarone is continued in patients with amiodarone-induced thyrotoxicosis, and that prednisone, a regularly used medicine, is preferred over the other two medications (1c) [27].
A study on the continuation of amiodarone in type 2 amiodarone-induced thyrotoxicosis patients treated with prednisone for cardiovascular abnormalities found recurrence and increased severity of thyrotoxicosis due to amiodarone continuation; it concluded that, in patients treated with prednisone for amiodarone-induced thyrotoxicosis, continuing amiodarone delays the normalization period (1c) [16].
A randomized controlled trial conducted by Tetsuya Tagami et al. on the effect of beta blockers and antithyroid drugs in new-onset thyrotoxicosis due to Graves' disease, which randomized 28 patients, found that adding beta blockers to ATD has no effect on thyroid function reduction but does stabilize sympathetic hyperactivity (1c) [28].
Another study on the use of beta blockers to manage thyrotoxic patients undergoing thyroid and non-thyroid surgery found a significant effect in reducing sympathetic activity in the cardiac system, particularly in the preoperative period when ATD is not being used (2a) [29].
A review on emergency thyroid storm management concluded that a combination of treatment approaches is required to reduce patient morbidity and mortality; supportive care, antithyroid medication, adrenergic blockers, corticosteroids, paracetamol, and treatment of the underlying cause are the mainstays of thyroid crisis management (2a) [30].
Marcia Rashelle Palace's study on the perioperative optimization of thyrotoxic patients recommended that patients scheduled for surgery be rendered euthyroid preoperatively to reduce thyrotoxic crises, including thyroid storm and cardiac problems. Thyrotoxic patients were given ATD, such as PTU 100-150 mg every 6-8 h, propranolol 10-40 mg, and calcium channel blockers as an alternative. Both ATD and beta blockers were continued postoperatively, but ATD was discontinued following thyroidectomy. In addition, Lugol's solution or potassium iodide, a beta blocker, and a glucocorticoid are the drugs of choice in emergency surgery to make the patient hemodynamically stable (2a) [31].
A study on thyroid storm management recommends giving ATD such as PTU with a 600 mg loading dose and 200-300 mg maintenance every 6 h, propranolol 40-80 mg every 4 h, and hydrocortisone 100 mg every 8 h, as well as hemodynamic support [32].
Another study states that thyrotoxicosis causes catastrophic cardiac complications that are difficult to manage, such as atrial fibrillation and heart failure, so preoperative optimization of thyrotoxic patients is required to reduce this problem (3a) [33].
A study on the anesthetic management of thyrotoxic patients presenting for surgery incorporated several considerations, including reduction of the stress response, drugs that reduce sympathetic hyperactivity, deep anesthesia, standard monitoring (with invasive monitoring if available), and reserving an intensive care bed (3a) [34].
Substantial cardiac consequences, including shortness of breath, abrupt heart failure, dilated cardiomyopathy, atrial fibrillation, and cardiomegaly, have been reported with uncontrolled toxic goiter in patients without a previous history of cardiac problems (3a) [35].
A review on the optimization of thyrotoxic patients undergoing surgery states that, by the day before surgery, the patient must be euthyroid in order to lower the risk of thyrotoxicosis complications, using antithyroid medicine, beta blockers, radioiodine, and potassium iodide (2a) [36].
A case report and literature review on the anesthetic implications of severe hyperthyroidism secondary to molar pregnancy found that preoperative optimization of uncontrolled thyrotoxicosis secondary to molar pregnancy is necessary, and that regional anesthesia is the most important anesthetic technique used to overcome this complication (3a) [37]. The results of the reviewed literature are summarized below (Table 2).
Conclusions and recommendations
Thyrotoxicosis is a serious medical condition that, if left untreated, can lead to a fatal illness. Before surgery and anesthesia, manifestations of thyrotoxicosis, including palpitation, irritability, lack of concentration, heat intolerance, weight loss, and increased appetite, and signs such as tachycardia, atrial fibrillation, palmar sweating, ataxia, tremor, pulmonary embolism, stroke, and hypertension, must all be ruled out. In addition to the history and physical examination, patient-centered laboratory and imaging workup, such as a whole blood count, thyroid function tests (TSH, T3 and T4), ECG and x-ray, and organ function tests, is required, depending on the patient's condition (age, comorbidity). Surgery and anesthesia should be postponed in elective thyrotoxic patients, and the patient should be optimized with PTU 100-200 mg for 12-18 months, propranolol 10-40 mg until sympathetic activity stabilizes, carbimazole 10-40 mg, and potassium iodide for 10 days. Thyroid function tests must be repeated every 4-6 weeks, followed by three-month and six-month intervals; if toxic manifestations are reduced, ATD must be titrated to reduce drug side effects. In addition, patient counseling, psychological reassurance, and follow-up are required to achieve good thyrotoxicosis control. If the situation is an emergency, adequate preparation for a thyrotoxic crisis is required: premedicate with propranolol 0.1-0.15 mg/kg IV, PTU 200-400 mg orally, and diazepam 5-10 mg, and prepare hydralazine, lidocaine, and a corticosteroid such as hydrocortisone 100 mg or dexamethasone 2 mg every 4-6 h for 72 h.
If the type of surgery allows, regional anesthesia and peripheral nerve block are preferable to general anesthesia. If not, proceed with general anesthesia at a deep and smooth level using fentanyl 1-2 μg/kg, thiopental 3-5 mg/kg or propofol 2-3 mg/kg, lidocaine 1.5 mg/kg, halothane, and muscle relaxation with suxamethonium and vecuronium. Avoid, or use with caution, drugs that have a sympathomimetic effect. If a thyrotoxic crisis such as thyroid storm occurs, manage it accordingly: first determine whether thyrotoxicosis is present using the Akamizu diagnostic criteria for thyroid storm, then follow the ABCDE approach, call for help, give paracetamol 325-650 mg every 6 h, hydrocortisone 2-4 mg/kg, propranolol 0.15 mg/kg or 40-80 mg, and PTU 400-600 mg; cooling and hydration are required, and if convulsions occur, a good approach is to use diazepam 10 mg, with ventilation, oxygenation, and mechanical support in the intensive care unit. If a cardiac arrhythmia occurs, follow the cardiac life support guidelines and the antiarrhythmic management protocol. Patients with thyrotoxicosis symptoms and elevated thyroid function tests should be optimized for 12-18 months, or until the patient is in a euthyroid state. Consider the patient's overall condition, as well as a risk-benefit analysis, when performing perioperative optimization (Fig. 2).
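Several of the recommendations above are weight-based; the arithmetic is trivial but easy to illustrate. The sketch below multiplies the mg/kg ranges quoted in the text by a body weight. It is an illustration of the calculation only, with a hypothetical function and an arbitrary example weight, and is not clinical guidance.

```python
# Illustrative weight-based dose arithmetic for the premedication ranges quoted
# above (propranolol 0.1-0.15 mg/kg IV; hydrocortisone 2-4 mg/kg).
# For illustration of the arithmetic only; not clinical guidance.

DOSE_RANGES_MG_PER_KG = {
    "propranolol_iv": (0.10, 0.15),
    "hydrocortisone_iv": (2.0, 4.0),
}

def dose_range_mg(drug: str, weight_kg: float) -> tuple[float, float]:
    low, high = DOSE_RANGES_MG_PER_KG[drug]
    return low * weight_kg, high * weight_kg

for drug in DOSE_RANGES_MG_PER_KG:
    lo, hi = dose_range_mg(drug, weight_kg=70.0)
    print(f"{drug}: {lo:.1f}-{hi:.1f} mg for a 70 kg patient")
```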
Ethical approval
Not required.
Author contribution
This work was carried out in collaboration among all authors. Misganew Terefe and Debas Yaregal Melesse contributed to the conception of the review, interpreted the literature based on the level of evidence, and revised the manuscript. Yosef Belay Bizuneh and Yonas Addisu Nigatu participated in reviewing and preparing the manuscript. Both authors participated in the preparation and critical review of the manuscript. | 2022-08-30T15:03:34.052Z | 2022-08-28T00:00:00.000 | {
"year": 2022,
"sha1": "af14cc9e5c76cfba73e41344926e4353f742f88d",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.amsu.2022.104487",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f5a26a0b80869f9bf9d947e77cf475373d12418f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255446399 | pes2o/s2orc | v3-fos-license | Extracellular Microvesicles (MV’s) Isolated from 5-Azacytidine-and-Resveratrol-Treated Cells Improve Viability and Ameliorate Endoplasmic Reticulum Stress in Metabolic Syndrome Derived Mesenchymal Stem Cells
Extracellular vesicles (EVs), spherical membrane fragments including exosomes, are released from several cell types, including mesenchymal stromal cells (MSCs), constitutively or under stimulation. As MV cargo includes DNA, RNA, miRNA, lipids and proteins, MVs have gained special attention in the field of regenerative medicine. Depending on the type of transferred molecules, MVs may exert a wide range of biological effects in recipient cells, including pro-inflammatory and anti-apoptotic actions. In the present paper, we isolated MVs from adipose-derived mesenchymal stem cells (ASC) which underwent stimulation with 5-azacytidine and resveratrol (AZA/RES) in order to improve their therapeutic potential. Then, the isolated MVs were applied to ASC with impaired cytophysiological properties, isolated from animals diagnosed with equine metabolic syndrome. Using RT-PCR, immunofluorescence, ELISA, confocal microscopy and western blot, we evaluated the effects of MVs on recipient cells. We found that MVs derived from AZA/RES-treated ASC ameliorate apoptosis, senescence and endoplasmic reticulum (ER) stress in the deteriorated cells, restoring their proper functions. This work indicates that cells treated with AZA/RES can rejuvenate recipient cells through their paracrine action. However, further research needs to be performed in order to fully understand the molecular mechanisms of action of these bioactive factors. Graphical Abstract: Graphical abstract of the presented study.
Introduction
Regenerative medicine therapies based on the application of stem cells hold great promise not only for the treatment of musculoskeletal system disorders but also in the course of endocrine diseases [1,2]. A sedentary lifestyle, obesity, insulin resistance and insulin dysregulation are common features associated with the development of type II diabetes (T2D) or syndrome X (MetS). Recently, due to its increased prevalence, MetS has become a subject of extensive research in human as well as veterinary endocrinology. Equine metabolic syndrome (EMS) is characterized by insulin resistance, past or chronic laminitis, and adiposity in specific locations such as around the eyes or at the base of the tail [3]. Nowadays, EMS is a frequently diagnosed disease affecting horse populations worldwide and, if not treated properly, may lead to the development of laminitis, a life-threatening disease [3]. Interestingly, laminitis can be partially compared to the cardiovascular complications occurring in humans during metabolic disorders. Furthermore, more and more research has proven that the horse model has been applied to the study of certain diseases in humans and thus possesses great potential for translational medicine [4][5][6]. Therefore, an equine model of metabolic syndrome is proposed for translational research in humans regarding metabolic disorders and their consequences.
Recently, more and more attention has been paid toward the application of stem cells for the treatment of endocrine disorders including T2D or EMS. Due to their unique properties, abundance and ease of isolation, mesenchymal progenitor cells (MSC) from bone marrow (BMSC) and adipose tissue (ASC) are under intensive investigation in multiple clinical trials all over the world [7][8][9][10][11]. MSCs are characterized by multilineage differentiation potential and anti-inflammatory as well as immunomodulatory properties, which are responsible for the therapeutic potential of MSC in the course of different disorders including T2D and EMS [1,[12][13][14][15][16]. As recently shown, the plausible mechanism of MSC action can be at least partially explained by their paracrine activity. MSCs secrete extracellular microvesicles (MVs), spherical membrane fragments including exosomes, which carry different types of biological cargo, including proteins, peptides, mRNA, lipids and miRNA [17]. For that reason, MVs, similarly to the cells of origin, are characterized by great therapeutic potential and to date have been successfully applied in the treatment of multiple disorders including liver, kidney, lung and myocardial injuries as well as EMS [18][19][20]. Numerous studies have shown that MVs are rich in growth factors which induce and mediate the regeneration process, e.g. vascular endothelial growth factor (VEGF), insulin-like growth factor 1 (IGF-1), basic fibroblast growth factor (bFGF), interleukin 6 (IL-6), chemokine (C-C motif) ligand 2 (CCL-2) and hepatocyte growth factor (HGF) [17,[21][22][23]. It was shown that MVs are able to modulate the immune response, diminish inflammation and modulate the regenerative properties of recipient cells [17,23]. However, the pro-regenerative properties of MVs strongly depend on the physiological condition of the MSCs from which they originate. In our previous research, we demonstrated that EMS-derived ASC suffer from reduced proliferative activity, enhanced apoptosis and abundant accumulation of oxidative stress factors, which leads to their senescence [24,25]. Moreover, it was shown that insulin resistance impairs the multilineage differentiation potential of EMS-derived ASCs due to deterioration of mitochondrial biogenesis and dynamics [26]. As a result of insulin resistance, impairment of autophagy and mitophagy occurs, leading to deterioration of the pro-regenerative potential of ASCs. In consequence, the application of autologous ASC during EMS is limited and may not exert the expected therapeutic outcome. However, our previous studies have shown that a combination of 5-azacytidine (AZA) and resveratrol (RES) is able to reverse the aged phenotype of these cells. It was shown that AZA/RES increases proliferative potential, reduces apoptosis and improves the multilineage differentiation potential of ASC derived from EMS individuals [27][28][29]. Rejuvenated ASCs more abundantly produce MVs rich in immunomodulatory factors, which serve as an anti-oxidative defense against free radicals produced under EMS conditions and modulate the activity of immune cells [29]. Therefore, in the present study we decided to further investigate the biological activity of MVs isolated from AZA/RES-treated cells. We investigated whether, similarly to AZA/RES, MVs originating from rejuvenated cells can modulate apoptosis, oxidative stress and mitophagy in recipient progenitor cells isolated from EMS horses.
As it was demonstrated that the combination of AZA/RES abolishes the negative consequences of free radical accumulation and rejuvenates impaired cells, we hypothesized that MVs originating from these cells are also able to improve the cytophysiological properties of recipient cells.
Evaluation of Cellular Viability
The scheme of MVs isolation is shown in Fig. 1a. In order to select the most beneficial concentration, the Alamar blue assay was performed (Fig. 1b). Cells were cultured for 24 h with five different concentrations of MVs and their viability was established after 24, 48, 72 and 96 h of culture. An MVs concentration of 25 μg/ml was shown to significantly enhance cellular viability; for that reason, it was selected and applied in further experiments. The obtained results indicated decreased proliferation in ASC EMS, which was enhanced after treatment of the cells with MVs AZA/RES (Fig. 1c). Treatment of cells with MVs AZA/RES reduced the amount of NO (Fig. 1d) and ROS (Fig. 1e) while increasing the activity of SOD (Fig. 1f).
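Dose selection of this kind typically rests on normalizing the Alamar blue readout to untreated controls. The sketch below shows that normalization with invented fluorescence readings; the paper does not report its raw values or exact formula, so both are assumptions.

```python
# Generic Alamar blue normalization: express each treated well's fluorescence
# as a percentage of the untreated control after subtracting a cell-free blank.
# All readings (RFU) below are invented for illustration.

def relative_viability_pct(sample_rfu, control_rfu, blank_rfu):
    """Viability of a treated well as % of untreated control (blank-corrected)."""
    return 100.0 * (sample_rfu - blank_rfu) / (control_rfu - blank_rfu)

readings = {5: 1180.0, 10: 1240.0, 25: 1420.0, 50: 1300.0, 100: 1150.0}
blank_rfu, control_rfu = 200.0, 1200.0

# Pick the MVs concentration (ug/ml) whose signal most exceeds the control.
best = max(readings, key=lambda c: relative_viability_pct(readings[c],
                                                          control_rfu, blank_rfu))
print(best, relative_viability_pct(readings[best], control_rfu, blank_rfu))
# -> 25 122.0, i.e., the 25 ug/ml condition stands out, as selected in the study
```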
Assessment of Apoptosis
Live cells, dead cells and those accumulating β-galactosidase were visualized using specific stainings (Fig. 2a). Apoptosis was also investigated using TUNEL staining, which indicated an increased number of dead cells in ASC EMS; this phenomenon was, however, reversed after treatment of the cells with MVs AZA/RES (Fig. 2a). Propidium iodide staining (Fig. 2b) and β-galactosidase staining (Fig. 2c) were further quantified and presented as the percentage of positive cells. The obtained results showed that treatment of cells with MVs AZA/RES reduced the number of dead and senescent cells. Expression of p53 was significantly increased in ASC EMS, as was the expression of caspase transcripts (Fig. 2f) and BCL2 associated X protein (BAX, Fig. 2g). Expression of BCL2 apoptosis regulator (BCL-2) was decreased in ASC EMS compared to the control group; however, MV treatment significantly enhanced its expression (Fig. 2h).
Evaluation of ER Stress
Protein disulphide-isomerase A3 (PDIA3) was visualized in cells using immunofluorescence (Fig. 3a). Its level was increased in ASC EMS and diminished in cells after MV treatment. The mitochondrial network was visualized with MitoRed staining (Fig. 3).
Autophagy and Inflammation
ASC EMS were characterised by increased expression of beclin (Fig. 4a) and lysosome-associated membrane protein 2 (LAMP-2, Fig. 4b); however, application of MVs did not influence their expression. A significant increase in phosphoinositide 3-kinase (PI3K, Fig. 4c) expression was observed in ASC EMS, and application of MVs decreased its mRNA level. ELISA results revealed increased levels of extracellular tumor necrosis factor α (TNFα, Fig. 4d) in ASC EMS; however, MV application reduced these levels. ELISA for interleukin 10 (IL-10, Fig. 4e) revealed that MV treatment resulted in its increased secretion by cells. Western blot for interleukin 6 revealed its increased levels in ASC EMS (Fig. 4f).
Discussion
Due to the increased prevalence of metabolic disorders in humans and domestic animals, the development of novel and effective therapies has become a major goal in the field of regenerative medicine. Among others, adult stem cell therapies hold great promise for the treatment of insulin resistance and obesity-related inflammation. However, multiple studies have indicated that donor age and health status are directly correlated with the so-called "pro-regenerative" potential of adult mesenchymal stem cells [24,31]. Our findings and those of others have indicated that T2D, MetS as well as EMS negatively affect ASC multipotency, expansion properties and immunomodulatory effects, which undermines their clinical utility [25,[32][33][34]. Moreover, in our previous research we demonstrated that ASCs isolated from EMS horses are characterized by increased apoptosis and senescence together with mitochondrial deterioration induced by enhanced systemic inflammation and excessive oxidative stress [25,27]. Furthermore, the paracrine activity of ASCs isolated from EMS horses is seriously deteriorated, which disturbs intercellular communication and therefore limits the "stemness" status of the cells [25,27,29]. So far, several strategies have been proposed to reverse the unfavourable phenotype of MSCs affected by disease, including their preincubation with chosen agents before clinical application. For that purpose, growth factors, vitamins, amino acids and peptides have been proposed [35,36]. Recently, we showed that the combination of AZA/RES reversed the aged phenotype of ASCs isolated from EMS horses (ASCs/EMS): treatment resulted in increased proliferative potential, reduced oxidative stress and improved immunomodulatory properties [27,29]. In the present study, we showed that ASCs/EMS preincubated with AZA/RES produce MVs (MVs AZA/RES) with unique biological features. We found that MVs AZA/RES positively affect cell viability and improve the proliferative activity of ASCs/EMS. Furthermore, TUNEL staining showed a reduced number of dead cells in the ASCs/EMS population. We observed significant upregulation of the anti-apoptotic transcript BCL-2 and, at the same time, reduction of the pro-apoptotic gene BAX at the mRNA level. This supports the hypothesis that MVs AZA/RES not only inhibit apoptosis but also improve the viability of the physiologically impaired progenitor cells of EMS horses. A similar biological phenomenon was shown in a model of renal ischemia/reperfusion injury, where MVs inhibited apoptosis and stimulated cellular proliferation [37]. On the other hand, Herrera et al. [38] demonstrated that in human and rat hepatocytes, MVs enhanced proliferation and decreased apoptosis through mRNA shuttled into recipient cells. Moreover, we showed that the accumulation of senescence-associated β-galactosidase is reduced in cells treated with MVs AZA/RES. This stands in good agreement with Bruno et al. [39], who demonstrated that MSC-derived MVs increase the proliferation rate of tubular epithelial cells after in vitro injury. Moreover, we demonstrated reduced expression of CASP3 and CASP9, which are crucial mediators of programmed cell death, especially in cells affected by hyperinsulinemia. It was demonstrated by Radziszewska et al. [40] that CASP3 knock-out mice were protected from streptozotocin-induced diabetes, together with inhibition of β-cell proliferation through inhibition of p27 transcripts.
A common feature associated with obesity, hyperinsulinemia and insulin resistance in EMS horses is the accumulation of oxidative stress factors, which leads to excessive, systemic inflammation. In our previous research, we showed that adipose tissue, as well as the ASC residing within it, suffers from enhanced oxidative stress and inflammation, which impairs their biological functions [25,41]. In this study, we showed that MVs AZA/RES reduce the secretion of pro-inflammatory TNFα while increasing that of anti-inflammatory IL-10, as established by ELISA. It has been shown that both local and systemic administration of MSC-derived MVs efficiently suppress the detrimental immune response in inflamed tissues and promote the survival and regeneration of injured parenchymal cells. Additionally, by transferring mRNA and miRNA to target cells, extracellular microvesicles promote cell survival and regenerative properties by reducing necrosis associated with oxidative stress [43]. As recently shown by our group, ASC/EMS displayed a decreased proliferation rate, increased apoptosis and senescence together with mitochondrial deterioration [27]. The impairment of mitochondrial biogenesis and dynamics is directly associated with the deterioration of cellular functions, including multipotency and immune modulation [28,29]. Numerous studies, including ours, have shown that autophagy as well as mitophagy serve as protective mechanisms, allowing deteriorated ASC/EMS to survive and maintain basic cellular functions under stress conditions [27,44,45]. Here, for the first time, we have shown that MVs AZA/RES reduce the expression of transcripts involved in autophagy, including LAMP-2 and Beclin-1. This might be explained by their anti-oxidative activity, since we observed an elevated amount of SOD together with reduced ROS and NO, the master inducers of oxidative stress. A similar effect was noted by Harrell et al. [43], who showed that MVs transferred to target cells (injured hepatocytes, neurons and lung cells) activate autophagy, inhibit apoptosis, necrosis and oxidative stress, and therefore promote cellular survival and regeneration. Furthermore, it has been shown that excessive oxidative stress combined with lipo- and glucotoxicity significantly contributes to the development of ER stress [31,46]. In the present study, we showed that MVs AZA/RES reduce ER stress, as we observed decreased expression of the following transcripts: ATF-6, IRE-1, PERK, EIF2 and CHOP. A similar effect was found by Liao et al. [47], who showed that MVs could attenuate ER stress-induced apoptosis by activating AKT and ERK signaling. More recently, Chen et al. [48] showed that MSC-derived MVs protect beta cells against hypoxia-induced apoptosis via miR-21 by alleviating ER stress and inhibiting p38 MAPK phosphorylation.
Recent findings in stem cell research have shed promising light on the application of extracellular vesicles in the treatment of different disorders. In the present study, we showed that ASCs/EMS preincubated with AZA/RES secrete MVs that are able to reduce oxidative stress, inflammation and ER stress, and thus protect recipient cells against apoptosis and senescence. However, further research is strongly required to understand the mechanisms involved in the regenerative processes and the therapeutic potential of MVs in the course of different disorders, including EMS, T2D and MetS.
Materials and Methods
All reagents and chemicals used in this research were purchased from Sigma-Aldrich (Poznań, Poland), unless indicated otherwise.
Tissue Harvesting
Adipose tissue samples were harvested from a group of healthy horses (n = 15) and from horses diagnosed with EMS. Animals were of mixed sex and age-matched (8-12 years). Qualification of animals to the respective groups was performed on the basis of the following parameters: body weight, body condition score, cresty neck score, existing laminitis, resting insulin levels, blood glucose levels and an oral sugar test. A detailed characterisation of the horses can be found in our previous paper [27].
Cell Isolation and Characterisation
Samples of subcutaneous adipose tissue were harvested from the animals' tail base. ASCs were isolated as described previously [27] using an enzymatic method (collagenase type I at a concentration of 1 mg/mL for 40 min at 37°C) from healthy (ASC HE) and EMS horses (ASC EMS). Cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM) Low Glucose supplemented with 10% foetal bovine serum (FBS) and 1% penicillin/streptomycin (PS) solution. Media were changed every 2-3 days. Cells were passaged after reaching 80% confluence using trypsin solution (TrypLE; Life Technologies, Carlsbad, CA, USA). Cells were characterised by analysis of CD44, CD45 and CD90 surface antigens using a Becton Dickinson FACS Calibur flow cytometer, as shown previously [27]. Additionally, osteogenic, chondrogenic and adipogenic differentiation of the isolated cells was confirmed [27].
Isolation of MVs
MVs were isolated by ultracentrifugation as described elsewhere [30]. The procedure scheme is shown in Fig. 1. In order to isolate MVs from ASC EMS, cells were cultured with 0.5 μM AZA and 0.05 μM RES for 24 h as described previously [27]. Next, the medium was replaced with serum-free culture medium supplemented with 1% PS for an additional 24 h. After that, the medium was collected and subjected to centrifugation at 300×g for 10 min. The supernatant was collected and centrifuged at 2000×g for 10 min. The supernatant was then collected again and centrifuged in an ultracentrifuge at 20,000×g for 30 min. The amount of MVs in the obtained pellet was quantified with a BCA Protein Assay.
Experimental Setting
In order to perform the experiments, cells were seeded in 24-well plates at a density of 3 × 10^4 per well. After 24 h, the medium in the experimental group was replaced with culture medium supplemented with MVs derived from AZA/RES-treated ASC EMS at a concentration of 25 μg/ml. After 24 h of incubation, cells were collected and subjected to further analysis.
Proliferation Rate
In order to select the most potent concentration of MVs, cells were cultured with five different MV concentrations for 24 h. Cell viability was evaluated using the 10% resazurin-based dye TOX-8 in accordance with the manufacturer's protocol after 24, 48, 72 and 96 h of propagation. To perform the assay, cells were incubated with the dye in a CO2 incubator at 37°C for 2 h, and the absorbance of the supernatants was measured (Epoch, BioTek) at 600 nm, with 690 nm as the reference wavelength. Proliferative potential was established by analysis of BrdU incorporation using a BrdU Cell Proliferation ELISA Kit (Abcam) in accordance with the manufacturer's instructions. Briefly, cells were incubated first with anti-BrdU antibody and then with horseradish peroxidase (HRP)-conjugated goat anti-mouse antibody. The colorimetric reaction was induced by conversion of the chromogenic substrate tetramethylbenzidine (TMB), and absorbance was measured using a spectrophotometer (Epoch, BioTek) at 450 nm, with 550 nm as the reference wavelength.
Evaluation of Apoptosis and Senescence
Live and dead cells in the cultures were visualized using a Cellstain Double Staining Kit in accordance with the manufacturer's protocol. Viable cells were stained with Calcein-AM (green fluorescence), whereas the nuclei of dead cells were stained with propidium iodide (orange fluorescence). Cells were then observed using fluorescence microscopy (Zeiss, Axio Observer A.1), and the percentage of dead cells was calculated.
For identification of senescence-associated β-galactosidase (β-gal), cells were stained using a Senescence Cells Histochemical Staining Kit (Sigma Aldrich) in accordance with the manufacturer's protocol. Cells were then observed under an inverted microscope (Zeiss, Axio Observer A.1), and the percentage of β-gal-positive (stained blue) cells relative to β-gal-negative cells was calculated.
Evaluation of Oxidative Stress Factors
Nitric oxide (NO) concentration was assessed using a commercially available Griess reagent kit (Life Technologies). Superoxide dismutase (SOD) activity was measured using a SOD Assay Kit (Sigma Aldrich). Reactive oxygen species (ROS) were estimated by incubating cells with an H2DCF-DA (Life Technologies) solution. All procedures were performed according to the manufacturers' protocols.
Evaluation of TNF-α and IL-10
The amounts of extracellular TNF-α and IL-10 in culture media were investigated with ELISA assays: a horse tumour necrosis factor (TNF superfamily, member 2) ELISA Kit (MyBioSource, San Diego, CA, USA) and a horse Interleukin-10 ELISA Kit (MyBioSource, San Diego, CA, USA). All procedures were performed in accordance with the manufacturers' protocols.
Visualization of Mitochondrial Net and ER
The mitochondrial network inside the cells was visualized using MitoRed staining. Briefly, dye solution (1:1000) was added to the culture media and cells were incubated for 30 min in a CO2 incubator. Specimens were then fixed with 4% PFA and nuclei were counterstained with DAPI.
Gene Expression
Total RNA was isolated from cells using TriReagent in accordance with the manufacturer's protocol. RNA concentration and quality were evaluated using a nanospectrophotometer (Epoch, BioTek). Total RNA was reverse-transcribed into cDNA using the First Strand cDNA Synthesis Kit (Thermo Fisher Scientific, USA). Gene expression was evaluated using the SensiFAST SYBR & Fluorescein Kit (Bioline, UK). A T100 Thermal Cycler (Bio-Rad, USA) was used to carry out all amplifications and detections. The 2^-ΔΔCT algorithm was used to calculate transcript levels relative to the expression of the reference gene GAPDH. Primers are listed in the supplementary file.
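The relative quantification described above follows the standard 2^-ΔΔCT scheme. Below is a minimal sketch of that calculation in Python; the Ct values and variable names are illustrative placeholders, not data from this study.

```python
# A minimal sketch of the 2^-ΔΔCT calculation, assuming mean Ct values for the
# target gene and the reference gene (GAPDH) in treated and control samples.
def relative_expression(ct_target_treated, ct_gapdh_treated,
                        ct_target_control, ct_gapdh_control):
    dct_treated = ct_target_treated - ct_gapdh_treated   # ΔCt, treated sample
    dct_control = ct_target_control - ct_gapdh_control   # ΔCt, control sample
    ddct = dct_treated - dct_control                     # ΔΔCt
    return 2 ** (-ddct)                                  # fold change vs. control

# Example: target amplifies 2 cycles later relative to GAPDH in treated cells
print(relative_expression(26.0, 18.0, 24.0, 18.0))  # 0.25, i.e. 4-fold down
```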
Western Blotting
Cells were rinsed in ice-cold PBS and extracts were prepared in RIPA buffer supplemented with protease inhibitor (1:1000). Lysates were then subjected to SDS-PAGE and transferred to a polyvinylidene difluoride (PVDF) membrane (BioRad). The following antibodies, diluted in 5% non-fat milk in Tris/NaCl/Tween buffer (TBST), were used for immunoblotting: β-Actin (1:1000, Sigma Aldrich) and IL-6 (1:250, Abcam). Membranes were then incubated with anti-rabbit and anti-mouse horseradish peroxidase-conjugated secondary antibodies. Reactions were developed using Western HRP Substrate (Millipore Corporation). Chemiluminescent signals were detected using a ChemiDoc MP Imaging System (Bio-Rad, USA) and quantified with Image Lab Software (Bio-Rad, USA).
Statistical Analysis
All experiments were performed in at least three replicates. Differences between experimental groups were estimated using one-way ANOVA with Tukey's test. Statistical analysis was conducted with GraphPad Prism Software (La Jolla, CA, USA). Differences were considered statistically significant at *p < 0.05, **p < 0.01, and ***p < 0.001.
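For readers reproducing this analysis outside GraphPad Prism, the sketch below shows an equivalent one-way ANOVA followed by Tukey's test in Python; the group values are made-up placeholders, not measurements from this study.

```python
# A minimal sketch of one-way ANOVA with Tukey's post hoc test using SciPy
# and statsmodels. Group values are hypothetical placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ctrl = np.array([1.00, 0.95, 1.05])     # e.g., ASC from healthy horses
ems = np.array([0.55, 0.60, 0.50])      # ASC EMS
ems_mv = np.array([0.85, 0.80, 0.90])   # ASC EMS treated with MVs AZA/RES

f_stat, p_value = stats.f_oneway(ctrl, ems, ems_mv)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's test on all pairwise group comparisons
values = np.concatenate([ctrl, ems, ems_mv])
groups = ["CTRL"] * 3 + ["EMS"] * 3 + ["EMS+MV"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```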
Acknowledgements This project was financed within the framework of grants entitled "Modulation mitochondrial metabolism and dynamics and targeting DNA methylation of adipose derived mesenchymal stromal stem cell (ASC) using resveratrol and 5-azacytydin as a therapeutic strategy in the course of Equine metabolic syndrome (EMS)." (grant no. 2016/21/B/NZ7/01111) and "Inhibition of tyrosine phosphatase as a strategy to enhance insulin sensitivity through activation of chaperone mediated autophagy and amelioration of inflammation and cellular stress in the liver of equine metabolic syndrome (EMS) horses." (grant no. 2018/29/B/NZ7/02662), both financed by the National Science Centre in Poland.
Authors' Contributions K.M. and K.K.G. designed the research. K.K., M.M., P.S. and C.W. conducted the research. P.S., C.W. and M.M. analyzed data. K.M., C.W. and K.K. wrote the paper and prepared the figures. K.M. contributed reagents/materials/analysis tools. All authors read and approved the final manuscript.
Conflict of Interest The authors declare that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2023-01-06T15:33:25.137Z | 2020-09-03T00:00:00.000 | {
"year": 2020,
"sha1": "15a2e409d5f96d20008f3e01052c24a6a7642080",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12015-020-10035-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "15a2e409d5f96d20008f3e01052c24a6a7642080",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
256014569 | pes2o/s2orc | v3-fos-license | Tissue-specific expression of histone H3 variants diversified after species separation
The selective incorporation of appropriate histone variants into chromatin is critical for the regulation of genome function. Although many histone variants have been identified, a complete list has not been compiled. We screened the mouse, rat and human genomes by in silico hybridization using canonical histone sequences. In the mouse genome, we identified 14 uncharacterized H3 genes, among which 13 are similar to H3.3 and do not have human or rat counterparts, and one is similar to the human testis-specific H3 variant, H3T/H3.4, and has a rat paralog. Although some of these genes were previously annotated as pseudogenes, their tissue-specific expression was confirmed by sequencing the 3′-UTR regions of the transcripts. Certain new variants were also detected at the protein level by mass spectrometry. When expressed as GFP-tagged versions in mouse C2C12 cells, some variants were stably incorporated into chromatin, and the genome-wide distributions of most variants were similar to that of H3.3. Moreover, forced expression of H3 variants in chromatin resulted in altered gene expression patterns after cell differentiation. We comprehensively identified and characterized novel mouse H3 variant genes that encode amino acid sequences highly conserved relative to known histone H3. We speculate that the diversity of H3 variants acquired after species separation played a role in regulating tissue-specific gene expression in individual species. Their biological relevance and evolutionary aspects, involving pseudogene diversification, will be addressed by further functional analysis.
Background
Genomic DNA in eukaryotes is stored in nuclei as a highly packed structure called chromatin. The basic unit of chromatin is the nucleosome, in which DNA is wrapped around combinations of the core histone proteins H2A, H2B, H3 and H4. Each histone protein has several variants based on amino acid substitutions. In mice, the canonical histones H3.1 and H3.2 are encoded by multiple genes gathered in three histone clusters on chromosomes 3, 11 and 13 [1]. In addition, some open reading frames (ORFs) similar to histone H3 have been annotated as pseudogenes because expression from these genes has not been demonstrated. H3.1-coding genes produce RNA with a stem-loop structure at the 3′-end instead of a poly-A tail and contain no introns. In contrast, H3.3 is encoded by two genes, H3f3a on chromosome 1 and H3f3b on chromosome 11. These genes have a poly-A tail (signal), are located away from the histone clusters and are expressed throughout the cell cycle in a replication-independent manner [2]. Specific histone variant incorporation into chromatin has been shown to play important roles in gene regulation during development and differentiation [3,4].
The differential functions of individual histone variants are manifested in various processes, such as nucleosome stability, protein binding, and chromatin modification. For example, H3.3 is generally distributed on transcriptionally active genes and harbors modifications associated with activation, such as H3K4me3, while H3.1 and H3.2 are distributed throughout the rest of the genome and are associated with inactivation-related modifications [5][6][7][8]. H3.3 is also known to be incorporated into promoter regions prior to transcriptional activation during cell differentiation by histone chaperone complexes, including HIRA and Chd1 [4,9,10]. Interestingly, H3.3 is also involved in genome silencing, being incorporated into pericentromeric heterochromatin and telomeric regions in association with DAXX/ATRX [9,11,12]. In these cases, the selective incorporation of histone variants could be a molecular platform for downstream modifications and chromatin remodeling to acquire differentiation potential.
The discovery of new histone variants has been ongoing in numerous species [13,14], and many histone-like sequences have been annotated as pseudogenes in genome databases [15]. Here, we report the identification and characterization of previously unknown histone genes in the mouse genome. By cross-hybridization analysis in silico (in silico hybridization), we have identified 14 uncharacterized histone H3 genes and 1 uncharacterized histone H2A gene that potentially encode histone proteins with core domains. Most of the new variants were not conserved in human, suggesting that these minor variants diverged after species separation. The expression of some of the new variants at the mRNA level was confirmed by 3′-seq analysis. When expressed as GFP-tagged forms in mouse C2C12 skeletal myoblasts, some variants were incorporated into chromatin and others were not. Whole-transcriptome analysis revealed that the forced expression of any variant did not affect global transcription in undifferentiated myoblasts, but did upon myoblast differentiation. These diverse histone variants might play a role in regulating tissue-specific gene expression.
Fourteen novel H3 genes identified by in silico hybridization
To identify all genes encoding histone variants, we searched the mouse genome database for histone genes by in silico cross-hybridization screening (Fig. 1a). From the H3.2 amino acid sequence (CAA56577.1), eight-amino-acid sequence blocks (129 in total) were generated by shifting the sequence one amino acid at a time (Fig. 1a). Each peptide sequence was reverse-translated based on mammalian codon usage, and 4,162,752 DNA sequences of 24 nucleotides (nt) were determined. Each DNA sequence was mapped onto the mouse genome (mm9) using Bowtie (with option: -a), and a total of 168,299 sequences (4.04%) were successfully mapped, including multi-hit sequences. The mapped reads were considered to represent concatenated sequences encoding histone proteins when two or more different sequences were mapped within 90 nt of each other. The connected sequences were filtered by eliminating those encoding peptides of fewer than 10 amino acids. This resulted in 87 genomic sequences that potentially represent histone H3-coding genes. These sequences contained the known H3.1, H3.2 and H3.3 coding genes, 17 computationally predicted genes and 26 unannotated histone H3 pseudogenes and disrupted open reading frames (ORFs) (Additional file 1: Table S1). All previously identified histone H3 genes, regardless of the presence or absence of introns, were included. Because the core domain is essential for forming the nucleosome, we excluded ORFs with out-of-frame core domains. This analysis resulted in the identification of 14 genes that potentially encode histone H3-like proteins (Fig. 1b; see Additional file 2: Figure S1 for DNA sequences). To assess whether these histone H3 sequences are conserved between mouse and human, we repeated the screen on the human genome (hg19). We extracted 24 ORFs that potentially encode a total of 11 histone H3 or H3-like proteins, but all 24 ORFs were previously known or predicted genes [16] and none were identical to the new mouse genes (Additional file 3: Table S2 for human genome screening results). We also performed the screen on the rat genome (rn5), which is taxonomically close to that of mouse, and extracted ORFs that potentially encode histone H3 or H3-like proteins (Additional file 4: Table S3). Only Hist3h3, which is deposited as a provisional gene in the rat genome, was extracted as the homolog of mouse H3:00036 (prediction ID); other rat homologs of mouse H3 variants were not identified. Because no identical homologs were found at the amino acid level among species, we further performed phylogenetic analysis to evaluate the conservation of histone H3 variant amino acid sequences, including the newly identified mouse H3 sequences. The phylogenetic tree showed that the 14 novel mouse histone H3 variants are categorized into two well-delineated clades (the H3.1/H3.2 clade and the H3.3 clade) (Additional file 2: Figure S2A). Thirteen novel H3 variants were categorized into the H3.3 clade (Additional file 2: Figure S2B) and only H3:00036 was placed in the H3.1/H3.2 clade (Additional file 2: Figure S2C). The 13 H3.3-related variants were not identical to any known histone genes in other species, including the newly determined sequences in human and rat. Phylogenetic analysis also indicated that there is no obvious human ortholog of any of the mouse variants (Additional file 2: Figure S2B).
These results suggested that the novel histone genes might be mouse specific; therefore, we named them H3mm (H3 Mus musculus) with a numbered suffix (i.e., H3mm6-H3mm18) to avoid future confusion, as the phylogeny-based nomenclature system [17] cannot be applied. Mouse H3:00036 and rat Hist3h3, however, might be counterparts of human H3T/H3.4 (see Additional file 2: Figure S3A for the sequence alignment of H3.1, H3T and H3t). We named this gene H3t, as both the genome structure and the characteristics of the encoded protein were similar to human H3T (as indicated below), in addition to the relevance in phylogeny [17]. While H3mm6-H3mm18 were similar to H3.3 and their genes were scattered outside the histone gene clusters, H3t was similar to H3.1 and its gene was located in histone cluster 2. H3t also had a stem-loop sequence in its 3′-UTR (see summary in Table 1). The putative histone H3 proteins were categorized into four groups based on the edit distance (Levenshtein distance), a similarity measure defined for two sequences as the minimum number of operations (deletion, insertion or substitution) required to transform one into the other. All proteins showed high homology with H3.3 (percent identity ranged from 76 to 98.5%; see also Additional file 5: Table S4 for amino acid and DNA sequence similarities), but could be separated into four groups. One group, the H3.1/2 group, included H3.1, H3.2 and H3t and contained the SAVM (87-90) motif in the histone core domain. The other three groups (H3.3, distant A, and distant B) all had the AAIG (87-90) motif, which is recognized by ASF1/HIRA or Daxx [18][19][20], except for H3mm10 and H3mm17, which instead have AVIG and SAIG sequences, respectively. Compared with H3.3, the number of differing amino acids was less than five in the H3.3 group, but was 11-23 in the distant A group and 37-51 in the distant B group (Fig. 1c; see also Additional file 5: Table S4).
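The Levenshtein distance used for this grouping can be computed with standard dynamic programming. The sketch below is our own minimal illustration in Python, not the analysis code used in the study; the example peptides are placeholders.

```python
# Edit (Levenshtein) distance: the minimum number of deletions, insertions
# or substitutions needed to transform one sequence into the other.
def edit_distance(a: str, b: str) -> int:
    # prev[j] holds the distance between the current prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]

# Example: one substitution between two short peptide fragments
print(edit_distance("ARTKQTARKS", "ARSKQTARKS"))  # 1
```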
Lysine residues in the histone H3 N-terminal tail region are known to be post-translationally modified as part of the mechanism of chromatin regulation [21].
The important lysines that are subjected to acetylation and methylation, including K4, K9, K27 and K36, were conserved among all histone H3 genes. Similarly, phosphorylatable serine residues (S10 and S28) were conserved among all genes, except H3mm9, where amino acid 28 was arginine. Variations in the N-terminal tail region were found around these critical lysine and serine residues and may regulate the modification levels in specific variants. On the other hand, variation in the core domain may alter the nucleosome structure and/or stability.
The edit distance at the nucleotide level revealed extremely high homology between H3f3a and some of the newly identified genes (73.7-99.3%; Additional file 5: Table S4). For example, the edit distance was 3 nt for H3mm7, 4 nt for H3mm11 and H3mm13, and 5 nt for H3mm15. Such high similarity may have hindered previous attempts to identify these genes. The edit distance from H3f3b demonstrated that H3mm8 is more similar to H3f3b (68 nt) than to H3f3a (141 nt). These similarities indicate that the novel H3 genes are potentially derived from either H3f3a or H3f3b. The genomic structure of the novel genes predicted a polyadenylation signal and no stem-loop structure, except for H3t.
We also applied the same strategy to the other core histones, H2A, H2B and H4. H2A genes were screened with H2A type1B protein (NP_835489.1). The screening revealed one uncharacterized gene that encodes a protein similar to H2A.J (Additional file 2: Figure S3B; Table S5). H2B genes were screened with H2B type1P (NP_835509.2) and H4 genes with H4 (NP_78583.1), but neither screen resulted in the identification of a previously unknown ORF. These results suggest that histone H3 genes are more diverse than the other histone genes in the mouse genome.
The expression of novel H3 genes in mouse tissues
To evaluate the expression level of each H3 gene, we first analyzed standard mRNA-Seq data obtained from public data sets, including those from ENCODE, and from local data sets. However, those data sets were not adequate for quantification because the coding regions of the H3 variants were very similar and the number of uniquely mapped reads was very low (Additional file 2: Figures S4, S5). Such high similarity also prevented us from performing reliable RT-PCR. ChIP-seq data for active histone marks and RNA polymerase II could have been useful to evaluate the transcription level of each variant; however, this was also difficult because the promoter regions were very similar among the different variants, and the depths of uniquely mapped sequences were not sufficient for quantification (Additional file 2: Figure S5A-P). Consequently, we performed 3′-seq to identify their 3′-UTRs [22] (Fig. 2a), because the 3′-UTRs showed relatively greater differences in nucleotide sequence compared with the coding sequences (Additional file 2: Figure S6). 3′-seq expression profiles in mouse tissues (testis, liver, skeletal muscle and brain) showed that H3mm7, H3mm8, H3mm13 and H3mm15 were expressed in all four tissues, whereas the expression of H3mm6, H3mm11, H3mm12, H3mm14 and H3mm18 was biased toward specific tissues (Fig. 2b; see also Additional file 2: Table S6). The expression level of H3t was low, but was specifically detected in the testis. The expression levels of H3mm7, H3mm8, H3mm13 and H3mm15 were in fact similar to or higher than (1.2- to 16-fold) those of H3f3a and H3f3b (Additional file 2: Figure S7). We next used liquid chromatography tandem mass spectrometry (LC-MS/MS) to investigate the expression of the new H3 variants at the protein level. Histones were acid-extracted from adult mouse tissues and separated by SDS-PAGE, before in-gel digestion and LC-MS/MS analysis (Additional file 2: Figure S8A). Seventy-two peptides were identified as derived from histone H3. Although most were shared among multiple variants, 14 peptides were specific to one of the novel H3 variants (H3mm6, H3mm7, H3mm9, H3mm13, H3mm17 or H3t). We then quantified the variant-specific peptides corresponding to H3t and H3mm7, which were detected with high confidence (false discovery rate <0.05), as the area of the precursor ion chromatogram normalized to the area of a common histone peptide. These data suggested that the novel variants were significantly expressed at both the mRNA and protein levels.
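The label-free normalization step described above reduces to a simple ratio of chromatogram areas. A minimal sketch follows; the area values and function name are hypothetical, not measurements from this study.

```python
# Variant-specific peptide abundance relative to a peptide shared by all H3
# variants in the same LC-MS/MS run (precursor ion chromatogram areas).
def normalized_abundance(variant_peak_area: float, common_peak_area: float) -> float:
    """Return the variant-specific area divided by the common-peptide area."""
    return variant_peak_area / common_peak_area

# Example with placeholder chromatogram areas
print(normalized_abundance(variant_peak_area=2.4e7, common_peak_area=8.0e8))
```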
Some novel H3 variants are stably incorporated in chromatin
To elucidate the functions of the novel histone H3 proteins described above, we expressed all of the H3 variants shown in Fig. 1b in mouse C2C12 myoblast cells, which endogenously express H3mm7, H3mm8, H3mm13 and H3mm15 (Fig. 2a, b; Additional file 2: Table S6). We established stable cell lines in which the expression of each N-terminally GFP-tagged histone H3 protein can be induced by doxycycline (Dox).
To examine whether the H3 variants are incorporated into chromatin in a replication-dependent or -independent manner, we performed cell fusion assays [23]. C2C12 cells expressing GFP-H3 were fused with HeLa cells expressing mCherry-PCNA. One hour later, cells were fixed, and the distribution of GFP-H3 was analyzed by confocal microscopy. If the GFP-H3 that entered recipient HeLa nuclei in heterokaryons was incorporated into replicated chromatin, its distribution should be associated with mCherry-PCNA-positive replication foci [24]. In contrast, if chromatin incorporation of GFP-H3 was replication-independent, the distribution should not be associated with replication foci. As shown in Additional file 2: Figure S11, GFP-H3.1 and GFP-H3t were concentrated in mCherry-PCNA foci in recipient nuclei, whereas GFP-H3.3 and GFP-H3mm7 were not. These results are consistent with the similarity of H3t and H3mm7 to H3.1 and H3.3, respectively, indicating that H3t is incorporated into chromatin in a replication-dependent manner, like H3.1, and that H3mm7 incorporation is replication-independent, like H3.3. Interestingly, these results were consistent with the edit distance analysis. H3 variants classified into the H3.1/2 group (i.e., H3.1, H3.2, and H3t) were incorporated into chromatin, including heterochromatin. H3 variants classified into the H3.3 group were incorporated into euchromatin, except H3mm15, which is mostly mobile. In contrast to these variants, which are close to the major H3 variants (H3.1, H3.2 and H3.3), those classified into the distant A and B groups (Fig. 1c) were not stably incorporated into chromatin.
During the above microscopy analyses, we noticed that fluorescence intensities varied among the different variants: those incorporated into chromatin generally gave bright fluorescence signals. Immunoblotting with anti-GFP confirmed this observation (Additional file 2: Figure S12A). H3 variants that were incorporated into chromatin were readily detected, but nucleosome-free variants were not, even though their transcript levels were similar or higher when evaluated by RT-qPCR using a primer set that amplified the shared GFP region (Additional file 2: Figure S12B). These data suggested that nucleosome-free histones undergo rapid turnover [25][26][27][28]. This notion was confirmed by proteasome inhibitor treatment. When cells were incubated with the proteasome inhibitor MG132 for 6 h, the levels of nucleosome-free H3 variants, such as GFP-H3mm14 and GFP-H3mm18, were massively increased (Additional file 2: Figure S13). Compared with GFP-H3.3, the levels of GFP-H3.1 and GFP-H3t were also increased. This could be because these GFP-H3 proteins, unlike endogenous H3.1, are expressed in a cell cycle-independent manner; GFP-H3.1 and GFP-H3t expressed in non-S-phase cells perhaps undergo degradation, as their chromatin incorporation is limited without DNA replication.
The novel H3.3-type histones were preferentially enriched at active genes
To gain insight into the function of the novel H3 variants, we analyzed the genome-wide distribution of the histone variants that are incorporated into chromatin (H3t, H3mm7, H3mm11, H3mm12, H3mm13 and H3mm16) by ChIP-Seq using a GFP antibody, and then compared these distributions with those of H3.1, H3.2 and H3.3. The genome-wide distributions of the GFP-tagged histone H3 were evaluated by calculating the proportion of peaks detected in each category (promoter, gene body and inter-gene in Fig. 4a) using MACS software [29] with a relaxed threshold and the broad-calling option, as previously utilized by Hussein et al. [30] to call dispersed histone modification peaks. In the mouse genome, the effective mappable genome size of mm9 is 1,865,500,000 bp as defined in MACS, with promoter regions (within 2 kb of a transcription start site; TSS) occupying 2.52% and gene body regions occupying 51.60% (962,779,619 bp; 23,460 genes defined in refFlat) (Fig. 4a, top lane). Peak call data obtained from ChIP-Seq revealed that H3t was distributed uniformly, much like H3.1 and H3.2, because the proportions of peaks in the promoter regions (2.52-3.80%) and gene bodies (45.99-50.67%) of these three variants were similar to the proportions in randomly chosen genomic regions (Fig. 4a, top lane). The uniform distributions of H3.1, H3.2 and H3t probably reflect replication-coupled chromatin assembly. In contrast, H3mm7, H3mm11, H3mm12, H3mm13 and H3mm16 were specifically localized in gene loci, with peaks in promoter regions (5.94-9.64%) and gene bodies (57.13-61.35%). This property is similar to that of H3.3, which is specifically localized in gene loci [9,31].
To evaluate the local distribution of the novel H3 variants in gene loci, aggregation plots were created (Fig. 4b).
Compared to the input control data, none of H3.1, H3t or H3.2 accumulated around TSSs. H3mm7, H3mm11, H3mm12, H3mm13 and H3mm16 were enriched near TSSs, similar to H3.3. These results suggest that these H3.3-type variants may have a role similar to that of H3.3 in selective gene expression. To investigate this possibility, we assessed the localization bias of the ChIP-Seq signal within ±5000 bp of all gene TSSs in the growth state by hierarchical clustering (Fig. 4c). None of H3mm7, H3mm11, H3mm13 or H3mm16 showed any remarkable exclusivity in signal localization compared with H3.3. H3mm12 was less concentrated in gene loci, but still had an incorporation pattern similar to that of H3.3.
Overexpression of novel histone H3 variants modulates gene expression patterns during differentiation
To evaluate the function of the chromatin-incorporated H3 variants with respect to gene expression during differentiation, we performed mRNA-Seq analysis before and after the differentiation of C2C12 cells stably expressing the variants. In the growth state, gene expression profiles were similar, with correlation coefficients of 0.88-0.99 (Additional file 2: Table S7). In contrast, when cells were differentiated, the profiles were more diverse, depending on the variant (correlation coefficients of 0.79-0.98; Additional file 2: Table S7), indicating that specific overexpressed variants have the potential to alter gene regulation during differentiation, as do H3.1 and H3.3 [10].
We next classified the gene expression profiles of the different cell lines under growth and differentiation conditions by principal component (PC) analysis (Fig. 5a). A positive PC1 score indicates the expression of muscle differentiation-related genes, whereas a negative score indicates the expression of cell growth-related genes, based on gene set enrichment analysis (GSEA) of the top 100 high-scoring genes using the REACTOME database [32,33]. The PC1 scores of all cells, including wild type, increased upon differentiation. Positive PC2 scores indicate higher expression of ER-stress-related genes, while negative PC2 scores indicate higher expression of extracellular matrix-related genes, according to GSEA. Group D1/2 (differentiated cells expressing H3.1 and H3.2; negative PC2 scores) included wild type and cells expressing H3t, H3mm12, H3mm13 and H3mm16, while group D3 (differentiated cells expressing H3.3; positive PC2 scores) included those expressing H3mm7 and H3mm11 (Fig. 5a). These data suggest that overexpression of any particular H3 variant has little effect on gene expression in the undifferentiated state, but that upon differentiation, overexpression of some variants alters gene expression patterns. Because ER-stress-related genes are thought to have a special role in the efficient formation of myofibers during skeletal muscle differentiation [34], the D3 group might represent histone variants involved in the maturation of skeletal muscle differentiation.
To evaluate the expression levels of genes that contributed highly to PC scores (top four genes), we performed RT-PCR amplicon sequencing with three biological replicates for each gene (Fig. 5b). In all cells after differentiation, PC1-positive Tnnc2, a skeletal muscle differentiation-related gene, was upregulated, whereas PC1-negative Pttg1, a cell growth-related gene, was downregulated (Fig. 5b). Other PC1-contributing genes behaved similarly, consistent with cell cycle arrest upon differentiation [35]. We confirmed the statistical significance by a two-sided Student's t test between average expression levels of growth and differentiation ( Fig. 5b; p value <0.001 for all PC1 genes). PC2-positive Avil was upregulated in group D3. PC2-negative Mgp, however, did not show differential expression between the D3 and D1/2 groups, except for down-regulation of H3mm7 (p value <0.001; D1/2 vs. H3mm7-D), which may reflect milder negative PC2 scores ( Fig. 5b; two-sided Student's t test between the D1/2 and D3 groups). The results from PC analysis indicate that the expressions of H3.3, H3mm7 and H3mm11 during cell differentiation lead to changes in gene expression patterns that enhance differentiation.
The H3 variants in the D3 group could stimulate the expression of PC2-positive genes by being specifically incorporated into these genes. To test this possibility, we evaluated the level of incorporation of each variant around the TSSs (TSS ± 2 kb) of the top 40 PC2+ contributing genes (Additional file 2: Figure S14A). PC2+ genes were largely divided into two clusters, with H3.3-type histone incorporated or non-incorporated patterns, and the former did not show substantial differences between variants in the D1/2 and D3 groups. Nearly identical distributions of the different variants on a specific PC2+ gene locus (Cdsn) were observed before and after differentiation (Additional file 2: Figure S14B). Thus, the altered expression levels upon differentiation are likely to depend not on differential incorporation but on the small amino acid differences among the incorporated variants.
The characterization of the novel mouse H3 variants is summarized in Table 2.
Discussion
We have identified 14 novel mouse histone H3 variants by in silico hybridization. Most of the H3 genes we identified correspond to genes computationally predicted using NCBI's GNOMON pipeline [36] (GNOMON IDs have a "GM" prefix followed by a number, e.g. GM12260, which is equivalent to our H3t), and some of them are deposited as pseudogenes (H3t, H3mm7). This software uses a strategy similar to the one we employed; it splits known cDNA or peptide sequences into short fragments and scans the genome. The detection of H3f3a and H3f3b in the mouse genome confirmed that the software was applicable to genes with exon-intron structure. Our approach is more straightforward and comprehensive, and can be applied to any protein.

[Legend to Fig. 5: a Blue points indicate the growth state and red points the differentiated state; the distance between points reflects dissimilarity in gene expression patterns. Higher PC1 scores (PC1+) indicate higher expression of muscle differentiation-related genes and lower expression of cell growth-related genes; higher PC2 scores (PC2+) indicate higher expression of ER-stress-related genes and lower expression of extracellular matrix (ECM)-related genes. Clusters of cells with similar gene expression patterns, G (green), D1/2 (blue) and D3 (purple), are highlighted. b Expression levels of representative genes chosen from the top four contributors for each PC direction (PC1± and PC2±), confirmed by RT-PCR amplicon-Seq and shown as boxplots calculated from three replicates; two-sided Student's t tests were performed on group-average expression levels between "WT" vs. "Differentiated" for PC1± genes and between the "D1/2" vs. "D3" groups for PC2± genes.]
We also comprehensively identified histone H3 genes in both the human and rat genomes. Phylogenetic analysis revealed that most histone H3 variants, except H3t, were not conserved even between mouse and rat. Histone genes have been suggested to have various pseudogenes [15]. In our screen, many typical pseudogenes, which obviously lack an intact open reading frame owing to frameshifts or stop codon insertions, were identified. Although some of the H3 variants reported here have been annotated as "pseudogenes", at least H3mm7 and H3t should not be categorized as pseudogenes because their protein products were confirmed by LC-MS/MS. Recent studies have shown that some "pseudogenes" are constitutively or conditionally expressed at the RNA level and might also be translated [37,38]. It is therefore questionable whether these are non-functional pseudogenes, newly evolved genes or DNA elements with specific functions. Although it has been difficult to determine the function of such potential pseudogenes, for example by making knockout mice or knockout cell lines, the recently developed CRISPR/Cas9 genome-editing technology will allow us to address this question, in addition to determining the function of each variant per se. The set of new histone H3 variants will be a good target for understanding the mechanisms of molecular evolution because a variety of (pseudo)gene types are present, including those with protein expression, those with RNA expression, those without expression, and those with ORF truncation. A deeper analysis of 3′-seq may also reveal the expression states of "pseudogenes" other than histones.
To characterize the properties of the individual variants, we established C2C12 cells expressing all 14 variants. FRAP analysis revealed that six variants were assembled into chromatin. One variant showed high similarity to H3.1 at both the DNA and amino acid sequence levels (H3t). The H3t gene is located in a histone cluster and has a stem-loop sequence in the 3′-UTR, similar to H3.1. In addition, GFP-H3t is distributed throughout the genome, again much like GFP-H3.1. Cell fusion analysis confirmed that H3t is incorporated in a replication-dependent manner, as is H3.1 [15]. The other chromatin-incorporated variants are similar to H3.3 in terms of gene structure (i.e., no stem-loop), amino acid sequence, and distribution. However, they can be separated into two distinct groups based on their effect on gene expression in differentiated cells. H3mm7 and H3mm11 alter gene expression patterns and increase the levels of ER-stress-related genes, much like H3.3, suggesting that they can contribute to gene selection and lineage potential, similarly to H3.3 [4,10]. The functional difference between groups D1/2 and D3 might be explained by the unique amino acids in the N-terminal tail of H3, which affect post-translational modifications, and/or by structural differences in the nucleosome. A number of the new H3 variants did not appear to be incorporated into chromatin. In two variants (H3mm10 and H3mm17), amino acid substitutions in the histone chaperone-binding domain may explain the lack of chromatin incorporation because of poor binding to chaperones [18][19][20]. In contrast, the chaperone-binding domains are conserved in the other non-incorporated variants, suggesting that they can interact with chaperones but that stable nucleosomes are not formed. Indeed, H3mm8 lacks the amino acids required for the C-terminal helix (α3 in Fig. 1b), and H3mm9 and H3mm14 have extended C-terminal amino acids, which may disrupt nucleosome structure. Although the other variants (H3mm6, H3mm15 and H3mm18) do not have such large deletions or additions, amino acid substitutions in the histone fold domain may drastically alter nucleosome stability, as has been shown for human H3T [39]. The function of these non-incorporated variants remains unknown. One possibility is that their transient interaction with chromatin may mediate chromatin remodeling. Another possibility is that they may sequester histone-binding proteins, competing with chromatin-incorporated variants, like, for example, influenza virus protein NS1, which has a C-terminal tail containing an ARSK sequence similar to the ARTK sequence in the N-terminus of histone H3 [40]. This NS1 region competes with histone H3 for interaction with transcription elongation factors to suppress the expression of anti-virus-related genes. Similar regulation might occur when unincorporated histones are expressed.
The novel histone variant genes except H3t were not conserved between human and mouse, unlike H3.1, H3.2 and H3.3. H3t has a high similarity to human H3T, sharing two common amino acids (Val 24 and Ser 98), and its expression in either species is testis specific [14,39]. However, in contrast to human H3T, which forms unstable nucleosomes [39], FRAP analysis indicates that H3t stably assembles into nucleosomes. Moreover, amino acids that cause instability (Met 71 and Val 111 in human H3T) are not conserved in H3t. Rat Hist3h3 encodes a protein with an identical amino acid sequence to mouse H3t. Further biochemical, structural, and genetic studies are required to elucidate the function of H3t. Other novel H3 variants do not have counterparts in human, suggesting that these minor histone variants were acquired after species separation. This theory supports the idea that H3 genes evolve according to a birth-and-death process [41]. It may be that the species-specific variants contribute to the establishment of species-specific gene regulation. Thus, functional differences among individual H3 variants should be addressed to understand the evolution of chromatin dynamics.
Conclusions
We identified novel H3 variant genes in the mouse genome. Thirteen of the 14 genes appear to be derived from H3.3 and are not conserved among species, including human and rat, even though tissue-specific expression was confirmed for some variants. The remaining gene, H3t, an H3.1-type variant, showed replication-dependent chromatin incorporation and appears to have human and rat counterparts. Forced expression of the novel histone H3 variants affected gene expression patterns during myogenesis. Although the functions of these variants remain unknown, constructing knockout mice and cell lines will address their biological relevance and provide insight into the molecular evolution of pseudogene diversification.
Identification of novel H3 variants by in silico hybridization
Histone H3 variant genes in mouse were explored as shown in Fig. 1. First, the 136 amino acids (a.a.) of the histone H3.2 sequence (CAA56577.1) were divided into 8-a.a. sequences in 1-a.a. steps. The 129 resulting a.a. sequences were converted into all possible combinations of 24-nt DNA sequences based on mammalian codon usage. This conversion resulted in 4,162,752 DNA sequences that potentially encode histone genes. The obtained DNA sequences were mapped onto the mouse genome (mm9) by Bowtie (version 0.12.7 with option -a to report all candidates). Ultimately, 168,299 DNA sequences were mapped, including multi-hit reads. The mapped DNA sequences were concatenated if two or more reads were mapped within 90 nt of each other. Eighty-seven regions shared homology with the H3.2 coding sequence, yielding 16 genes that potentially encode H3 histones.
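A minimal Python sketch of this probe-generation step is given below. It slides an 8-a.a. window along a protein, reverse-translates each window into every 24-nt DNA sequence via a codon table, and collects the probes for mapping (e.g., with Bowtie). The codon table is truncated and the input peptide is a placeholder; a real run would use the full standard codon table and the complete H3.2 sequence.

```python
# Sketch of in silico hybridization probe generation (illustrative only).
from itertools import product

CODONS = {  # amino acid -> codons (truncated for brevity)
    "A": ["GCT", "GCC", "GCA", "GCG"],
    "R": ["CGT", "CGC", "CGA", "CGG", "AGA", "AGG"],
    "T": ["ACT", "ACC", "ACA", "ACG"],
    "K": ["AAA", "AAG"],
    "Q": ["CAA", "CAG"],
}

def windows(protein: str, size: int = 8):
    """Yield all size-a.a. blocks, shifted by one amino acid at a time."""
    for i in range(len(protein) - size + 1):
        yield protein[i:i + size]

def reverse_translate(peptide: str):
    """Yield every DNA sequence (3 nt per residue) encoding the peptide."""
    for combo in product(*(CODONS[aa] for aa in peptide)):
        yield "".join(combo)

probes = set()
for block in windows("ARTKQTAR"):  # placeholder 8-a.a. peptide
    probes.update(reverse_translate(block))
print(len(probes), "candidate 24-nt probes")
```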
3′-seq and 3′-seq data analysis
Sample preparation and data analysis for 3′-seq were performed as previously reported [22] using total RNA extracted from 8-week-old C57BL/6 male mouse tissues, including testis, liver, brain and skeletal muscle. Deep sequencing was performed using the Illumina HiSeq 1500 system. The 3′-seq yielded total reads of 26,880,145,105 for the tissue samples and 13,206,076 and 20,670,900 for C2C12 growth and differentiated cells. The numbers of uniquely mapped reads were 3,602,579-14,060,394 (11.23-36.65%) for the tissue samples and 539,990-3,952,088 (4.09-19.12%) for C2C12 growth and differentiated cells. These numbers of uniquely mapped reads (~1 to ~10 million) were comparable to those reported by Lianoglou et al. [22]. Reads were mapped to the mouse genome (mm9) with the STAR alignment software [42] and the parameters "--outFilterMultimapNmax 1 --alignIntronMax 1" (no multi-hit reads, no splice prediction) to handle poly-A-containing reads. Quantification of each gene was performed by counting the number of reads mapped in the 3′-UTR region and normalizing the count as reads per million (RPM) per region. In the case of the novel histone H3 variant genes, the region within 3 kb of the end of the coding sequence was defined as the putative UTR.
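The RPM normalization described above is a simple per-library scaling. The sketch below illustrates it in Python; the read counts and library size are hypothetical placeholders, not values from this study.

```python
# Normalize per-3'-UTR read counts to reads per million mapped reads (RPM).
def rpm(utr_counts: dict, total_mapped_reads: int) -> dict:
    """Scale each 3'-UTR read count by (1e6 / total mapped reads)."""
    scale = 1e6 / total_mapped_reads
    return {gene: n * scale for gene, n in utr_counts.items()}

# Example with placeholder counts and a library size from the mapped range
counts = {"H3f3a": 5200, "H3mm7": 6100, "H3t": 45}
print(rpm(counts, total_mapped_reads=3_602_579))
```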
In-gel digestion
Proteins were fractionated by SDS-PAGE (10-20 %) and stained with Coomassie Brilliant Blue G250 (CBB G250). Protein bands were cut out and subjected to in-gel digestion as described previously [43]. Obtained peptides were dried and stored at −80 °C.
LC-MS/MS analysis
Liquid chromatography tandem mass spectrometry (LC-MS/MS) was performed on an LTQ Orbitrap Velos Pro mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) coupled with a nanoLC instrument (Advance, Michrom BioResources, Auburn, CA, USA) and an HTC-PAL autosampler (CTC Analytics, Zwingen, Switzerland). Collision-induced dissociation (CID) spectra were acquired automatically in the data-dependent scan mode with the dynamic exclusion option. The CID raw spectra were extracted using Proteome Discoverer 1.4 (Thermo Fisher Scientific) and subjected to database searches using the Sequest algorithm. Peak list was compared with the Mouse International Protein Index version 3.84 database (European Bioinformatics Institute) including sequences of histone variants with the use of the Sequest algorithm. Additional details can be seen in Additional file 2: Supplemental Methods.
GFP-fused histone H3.1 variant constructs and cell line selection
All cDNAs for the histone H3 variants were purchased (Eurofins Genomics, Tokyo, Japan). The coding sequences are shown in Additional file 2: Figure S1. cDNAs were ligated into the bidirectional Tet expression vector pT2A-TRETIBI (modified Clontech Tet-On system), which contains Tol2 transposon elements and an EGFP cDNA located upstream of the cDNA sequence, and which was modified from pT2AL200R150G. pT2A-TRETIBI/EGFP-H3.1 transfection was performed using Lipofectamine 2000 (Life Technologies, Carlsbad, CA, USA). C2C12 cells at 20-30% confluence were transfected with the expression vector (4 μg plasmid DNA per 100-mm plate), pCAGGS-TP encoding transposase (kindly provided by Dr. Kawakami, National Institute of Genetics, Japan), and pT2A-CAG-rtTA2S-M2, and incubated for 24 h. To create cell lines stably expressing each GFP-tagged histone variant, transfected cells were cultured for 14-21 days in the presence of 1 μg/ml doxycycline and 1 μg/ml G418. Finally, GFP-positive cells were selected using fluorescence-activated cell sorting.
Cells
C2C12 cells or stable clones were grown in Dulbecco's modified Eagle's medium (DMEM) supplemented with 20 % fetal bovine serum. Undifferentiated cells were harvested at 60-70 % confluence. Differentiated cells were transferred to DMEM containing 2 % horse serum upon reaching confluence and harvested 48 h later.
Quantitative RT-PCR
Total RNA was isolated and reverse-transcribed using PrimeScript Reverse Transcriptase (Takara Bio) and an oligo(dT) primer, as previously described [4]. qPCR was performed using Thunderbird qPCR Mix (Toyobo). The primers used are listed in Additional file 2: Supplementary Information. qPCR data were normalized to Gapdh expression levels and presented as the mean ± standard deviation of three independent experiments.
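As an aside, normalization of qPCR data to a reference gene is commonly done via 2^−ΔCt; the following is a minimal sketch under that assumption (the paper does not state its exact formula), with purely illustrative Ct values.

```python
import numpy as np

def gapdh_normalized_expression(ct_target, ct_gapdh):
    """Per-replicate 2^-dCt values, normalized to the Gapdh reference gene."""
    dct = np.asarray(ct_target) - np.asarray(ct_gapdh)
    return 2.0 ** (-dct)

# Three independent experiments (illustrative Ct values):
vals = gapdh_normalized_expression([24.1, 24.3, 24.0], [18.2, 18.4, 18.1])
print(f"mean = {vals.mean():.4f} +/- {vals.std(ddof=1):.4f} (SD)")
```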
Chromatin immunoprecipitation
Cultured cells were cross-linked in 0.5 % formaldehyde and suspended in ChIP buffer (5 mM PIPES, 200 mM KCl, 1 mM CaCl2, 1.5 mM MgCl2, 5 % sucrose, 0.5 % NP-40, and protease inhibitor cocktail; Nacalai Tesque). Samples were sonicated for 5 s three times and digested with micrococcal nuclease (1 μl; New England Biolabs, Ipswich, MA, USA) at 37 °C for 40 min. The digested samples were centrifuged at 15,000×g for 10 min. Supernatant containing 4-8 μg DNA was incubated with a rat monoclonal antibody against GFP (1A5, 2 μg, Bio Academia) pre-bound to magnetic beads at 4 °C overnight with rotation. The beads were washed twice each with ChIP buffer and TE buffer, and the immune complexes were then eluted from the beads using 1 % SDS in TE. Cross-links were reversed, and DNA was purified using a QIAquick PCR purification kit (Qiagen, Valencia, CA, USA).
ChIP sequencing, read alignments and ChIP-Seq data analysis
ChIP sample preparations from GFP-tagged histone H3 variant-expressing cells were performed as described above. The ChIP library was prepared according to the Illumina protocol and sequenced on the Illumina HiSeq 1500 system. The sequence reads for GFP and input were aligned to the reference mouse genome (mm9, build 37) using Bowtie 2 software (version 2.2.2) [45]. PCR duplicates were removed from uniquely mapped reads using samtools (version 0.1.19). To call peaks, we used MACS (version 2.0.10) with the parameters "callpeak --gsize mm --nomodel --broad --extsize <fragment size> --to-large --pvalue 1e-3" [29]. We defined the "ChIP-Seq signal intensity" as described below. First, mapped reads on the genome in a defined window size (for IGV (Integrative Genomics Viewer) screenshots: 10,000-bp windows at 1,000-bp intervals or 1,000-bp windows at 100-bp intervals; in other cases: 2 kb from the TSS) were counted and then normalized as RPKM (reads per kilobase per million mapped reads) [46]. The ChIP-Seq signal intensities were then calculated as the RPKM differences between ChIP and input DNA control data (i.e., ChIP − control) for each window.
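A small Python sketch of the windowed signal-intensity calculation described above (RPKM of ChIP minus RPKM of input); the window counts and library sizes below are illustrative placeholders, not data from the study.

```python
import numpy as np

def rpkm(window_counts: np.ndarray, window_size_bp: int, total_reads: int) -> np.ndarray:
    """Reads per kilobase per million mapped reads for fixed-size windows."""
    return window_counts / (window_size_bp / 1e3) / (total_reads / 1e6)

def chip_signal(chip_counts, input_counts, window_size_bp, chip_total, input_total):
    """ChIP-Seq signal intensity as the RPKM difference ChIP minus input."""
    return (rpkm(np.asarray(chip_counts, float), window_size_bp, chip_total)
            - rpkm(np.asarray(input_counts, float), window_size_bp, input_total))

# Illustrative 2-kb windows around TSSs (made-up read counts and library sizes):
print(chip_signal([120, 40, 300], [60, 50, 90], 2000, 8_000_000, 10_000_000))
```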
mRNA-Seq and mRNA-Seq data analysis
Total RNAs from growth and differentiated (i.e., post-differentiation) state C2C12 cells were obtained as previously described [4]. Library preparation was performed according to the protocol developed by Illumina. Sequenced reads of GFP-tagged H3 variant-overexpressing cells were mapped onto the mouse genome (mm9) using TopHat (version 2.0.8) [47]. Gene expression levels (FPKM; fragments per kilobase of exon per million mapped sequence reads) were estimated using the cuffdiff program in Cufflinks (version 2.0.1) [47] with the mapped reads and the software's default parameters. Principal component (PC) analysis was performed on an FPKM matrix of gene expression profiles with rows of genes and columns of samples (H3 variant-expressing cells). The matrix was log10-transformed and each column scaled to mean = 0 and standard deviation = 1. The expression profiles (log10-transformed FPKM matrix) of wild-type (WT) cells at the growth state (0 h) and differentiated state (48 h) were orthogonally projected onto the plane spanned by the 1st and 2nd PCs for comparison with the profiles of H3 variant-expressing cells.
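The PCA-plus-projection step can be sketched in a few lines of NumPy; the pseudocount and the scaling of the WT columns are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def pca_project(fpkm_variants: np.ndarray, fpkm_wt: np.ndarray):
    """PCA of a genes-x-samples log10(FPKM) matrix (columns scaled to
    mean 0, sd 1), then orthogonal projection of wild-type profiles
    onto the plane spanned by PC1 and PC2."""
    x = np.log10(fpkm_variants + 1e-3)               # pseudocount guards log(0)
    z = (x - x.mean(axis=0)) / x.std(axis=0)          # column-wise scaling
    u, s, _ = np.linalg.svd(z, full_matrices=False)   # u: orthonormal gene-space axes
    sample_coords = u[:, :2].T @ z                    # variant samples on PC1/PC2
    w = np.log10(fpkm_wt + 1e-3)
    w = (w - w.mean(axis=0)) / w.std(axis=0)
    wt_coords = u[:, :2].T @ w                        # WT profiles projected onto the plane
    return sample_coords, wt_coords

# Illustrative random data: 500 genes, 8 variant samples, WT at 0 h and 48 h.
rng = np.random.default_rng(0)
s_xy, wt_xy = pca_project(rng.lognormal(2, 1, (500, 8)), rng.lognormal(2, 1, (500, 2)))
print(s_xy.shape, wt_xy.shape)  # (2, 8) (2, 2)
```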
RT-PCR amplicon-Seq data analysis
The expression levels of PC-contributing genes were evaluated by counting the amplicons of gene-specific primers (amplicon sequencing) with three biological replicates. All sequenced reads were mapped onto mouse transcript references converted from the refFlat GTF file using the gffread command in Cufflinks. The primer list is shown in Additional file 2: Supplementary Information.
Data access
All deep-sequencing data in this study including ChIP-Seq, mRNA-Seq and 3′-seq were submitted to DDBJ Sequence Read Archive with the accession number [DDBJ:DRA002463]. The processed data including gene expression tables and ChIP-Seq track data (bigWig file used for IGV screen shot) are also accessible through GEO Series accession number [GEO:GSE63890]. | 2023-01-20T14:08:00.732Z | 2015-09-17T00:00:00.000 | {
"year": 2015,
"sha1": "ba2c52698b4ce7d804c9051469d14bc6589f4eef",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13072-015-0027-3",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "ba2c52698b4ce7d804c9051469d14bc6589f4eef",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
3332622 | pes2o/s2orc | v3-fos-license | Enhanced n-Doping Efficiency of a Naphthalenediimide-Based Copolymer through Polar Side Chains for Organic Thermoelectrics
N-doping of conjugated polymers either requires a high dopant fraction or yields a low electrical conductivity because of their poor compatibility with molecular dopants. We explore n-doping of the polar naphthalenediimide–bithiophene copolymer p(gNDI-gT2) that carries oligoethylene glycol-based side chains and show that the polymer displays superior miscibility with the benzimidazole–dimethylbenzenamine-based n-dopant N-DMBI. The good compatibility of p(gNDI-gT2) and N-DMBI results in a relatively high doping efficiency of 13% for n-dopants, which leads to a high electrical conductivity of more than 10^−1 S cm^−1 for a dopant concentration of only 10 mol % when measured in an inert atmosphere. We find that the doped polymer is able to maintain its electrical conductivity for about 20 min when exposed to air and recovers rapidly when returned to a nitrogen atmosphere. Overall, solution coprocessing of p(gNDI-gT2) and N-DMBI results in a larger thermoelectric power factor of up to 0.4 μW K^−2 m^−1 compared to other NDI-based polymers.
Doping of organic semiconductors is essential for the optimization of a number of electronic components, ranging from the hole and electron blocking layers used in organic solar cells 1−3 and organic light-emitting diodes (OLEDs) 1,4,5 to trap filling in organic field-effect transistors (OFETs) 6−8 and the legs of thermoelectric generators. 9,10 For many of these applications, conjugated polymers are particularly intriguing because they permit one to adjust the rheological properties of processing solutions and the mechanical properties of the final (flexible) thin film architectures. Doping can be achieved through electron transfer between the semiconductor and a molecular dopant via a redox reaction. Alternatively, a proton/hydride (H+/H−) can be transferred from an acid/base to the semiconductor. 11 In the case of p-doping, positive charge carriers are introduced, whereas n-doping refers to the addition of extra electrons to the conjugated system. It is desirable that each dopant molecule that is added to the semiconductor material introduces as many charges as possible. Therefore, the presence of unreacted dopant should be avoided in order to maximize the amount of conducting material. 12−14 Hence, it is critical that the doping efficiency, i.e., the fraction of dopants that ultimately create a charge on the organic semiconductor, be as high as possible. 15
To realize thermoelectric generators, both p- and n-type materials are needed. They should display a high figure of merit ZT = α²σT/κ, where α is the Seebeck coefficient, σ the electrical conductivity, T the absolute temperature, and κ the thermal conductivity. If the thermal conductivity, which is challenging to measure for thin film architectures, is unknown, the power factor α²σ is instead used to compare the thermoelectric efficacy of different materials. 16−20 In contrast to p-doping, n-doping continues to pose a formidable challenge because of very low doping efficiencies as well as poor stability of the doped state. 21−45 We have compiled data from the literature to compare the dopant fractions that are required to achieve the maximum conductivity σmax through n-doping of various semiconductors (Figure 1; Supporting Information Table S1). It is evident that n-doping of NDI-based polymers is limited by a too low doping efficiency. The result is either a low maximum electrical conductivity of less than 10^−2 S cm^−1 at low dopant fractions (Figure 1a; bottom left) or the need for a large dopant fraction of more than 30 mol % to achieve a higher electrical conductivity (Figure 1a; top right). For example, Schlitz et al. investigated n-doping of the high-mobility naphthalenediimide−bithiophene copolymer p(NDI2OD-T2) 46 with the commonly used n-dopant N-DMBI (see Figure 2 for the chemical structure) and reached an electrical conductivity of about 10^−3 S cm^−1 at a dopant fraction of 9 mol %. 14 The insolubility of N-DMBI in the host polymer, leading to segregation of the dopant, was noted to be a limiting effect for the electrical properties. Naab et al. studied doping of several NDI-based polymers with a dimer version of DMBI and found that a dopant fraction of up to 43 mol % was required to maximize the electrical conductivity, 26 despite a higher doping efficiency, because each dimer can create two charges. 42 One emerging tool to increase the doping efficiency is the replacement of nonpolar alkyl side chains with more polar oligoethylene glycol side chains, which enhances the compatibility of semiconductor/dopant pairs. 17
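As a side note on these figures of merit, a minimal Python sketch of α²σ and ZT from the definitions above; the numerical inputs are illustrative (the 93 μV K^−1 and 0.1 S cm^−1 values echo the 20 mol % result reported later in this text), and the thermal conductivity is a placeholder value.

```python
def power_factor(alpha_v_per_k: float, sigma_s_per_m: float) -> float:
    """Thermoelectric power factor alpha^2 * sigma, in W m^-1 K^-2."""
    return alpha_v_per_k ** 2 * sigma_s_per_m

def figure_of_merit_zt(alpha_v_per_k: float, sigma_s_per_m: float,
                       kappa_w_per_m_k: float, temperature_k: float) -> float:
    """Dimensionless figure of merit ZT = alpha^2 * sigma * T / kappa."""
    return power_factor(alpha_v_per_k, sigma_s_per_m) * temperature_k / kappa_w_per_m_k

# alpha = 93 uV/K and sigma = 0.1 S/cm (= 10 S/m); kappa = 0.3 W/(m K) is a placeholder:
pf = power_factor(93e-6, 10.0)
print(f"power factor = {pf * 1e6:.2f} uW m^-1 K^-2")           # ~0.09
print(f"ZT = {figure_of_merit_zt(93e-6, 10.0, 0.3, 300.0):.2e}")
```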
Li et al. have observed that the common p-dopant F4TCNQ more readily diffuses into a polythiophene that carries oligoethylene glycol side chains as well as a sulfonate group, as compared to poly(3-hexylthiophene) (P3HT), which indicates that polar side chains can improve dopant miscibility. 49 As a result, polar side chains can lead to complete p-doping efficiency of polythiophenes by F4TCNQ, resulting in both a σmax ≈ 100 S cm^−1 for a low dopant fraction of 10 mol % as well as enhanced thermal stability. 13 Likewise, fullerenes that carry oligoethylene glycol side chains feature enhanced compatibility with N-DMBI and therefore a high doping efficiency of about 18%, which yielded a maximum conductivity of about 2 S cm^−1 and a power factor of up to 19 μW m^−1 K^−2. 39,43
In this work, we explore n-doping of the naphthalenediimide−bithiophene copolymer p(gNDI-gT2) (for details on synthesis and characterization, see the Supporting Information and Figures S1 and S2), a structural analogue of p(NDI2OD-T2) with polar oligoethylene glycol-containing side chains on both the NDI acceptor and the bithiophene donor unit, which has proven to be a promising material for organic electrochemical transistors (OECTs). 50−52 We anticipate that the structural alteration from nonpolar alkyl side chains to more polar oligoethylene glycol side chains will aid doping of the polymer backbone through enhanced dopant miscibility. We chose to investigate n-doping with N-DMBI, which is thought to donate a hydride (H−), 27,42,53,54 and found that our best results in terms of doping efficiency and maximum conductivity are superior to previous results that have been reported for other n-type polymers (Figure 1a, bottom right, green).
In a first set of experiments, we recorded UV/vis spectra of p(gNDI-gT2) solutions (Figure 3a) and films (Figure 3b) before and after addition of N-DMBI. The thin film spectrum of the pristine polymer consists of a peak at around ∼440 nm and a broad spectral feature between 600 and 1500 nm, which we attribute to the π−π* transition and a strong intramolecular charge transfer complex as a consequence of strong donor−acceptor interactions. 50,55 Because Giovannitti et al. observed very little variation of the higher-energy absorption peak upon electrochemical doping, we chose to normalize all spectra to this peak for comparison. We note that for slight doping with 10 mol % N-DMBI the low-energy absorption peak slightly increases. Upon additional doping, in contrast, the broad spectral feature at higher wavelengths diminishes, while the absorption at around 600 nm increases relative to the peak at 440 nm. The latter trend is in full agreement with the study by Giovannitti et al. and previous literature on n-doping. 26,28,50 Doping results in a gradual red shift of the low-energy absorption peak from 1016 nm for the pristine polymer to 1040 nm for p(gNDI-gT2) doped with 50 mol % N-DMBI. We tentatively assign this red shift as well as the slight increase in absorption upon doping with 10 mol % N-DMBI to planarization of the polymer backbone. Interestingly, we note that the addition of N-DMBI has seemingly no effect on the solution spectra of dissolved p(gNDI-gT2). Thus, we conclude that doping of the polymer is likely to occur during the film formation step upon solvent removal.
[Figure 1 caption fragment: n-doped NDI-based polymers, 14,22−24,26,27,29 other (e.g., DPP- or NTDI-based) polymers (▼), 30−32,47,48 fullerene derivatives (⧫), 8,34,36−43,45 and p(gNDI-gT2) (★, this work); (b) corresponding Seebeck coefficient (α) at maximum electrical conductivity; empirical relation α ∝ σ^−1/4. 10]
To obtain an estimate of the charge carrier density (n), we used the change in the activation energy of the conductivity upon doping. The estimation is based on the extended Gaussian disorder model (EGDM) 56 as reported by Liu et al. 39 The model yields a general relationship between the charge-carrier density and Ea/E0, where Ea and E0 are the activation energies at a certain doping fraction and at a low carrier density (pristine material) for a specific disorder parameter, respectively. The activation energies of pristine and doped p(gNDI-gT2) were extracted from variable-temperature electrical conductivity measurements by fitting an Arrhenius temperature dependence, σ = σ0 exp(−Ea/kBT) (Figure 3c), where Ea is the activation energy, kB the Boltzmann constant, and σ0 a pre-exponential factor that does not influence the activation energy. We obtained activation energies of E0 = 290 meV and Ea = 130 meV for the pristine polymer and a sample doped with 20 mol % N-DMBI, respectively. We extracted a disorder parameter of 90 meV and hence estimated a charge carrier density of 1.5 × 10^19 cm^−3, assuming an average hopping distance for conjugated polymers of 1 nm 57,58 and an overall density of states of 10^21 cm^−3 (see Supporting Information Figure S3 for details). Note that we can produce good fits for nearest-neighbor hopping as well as 1-, 2-, and 3D variable range hopping, which prevents us from determining the transport mode based on our data (see Supporting Information Figure S4).
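The Arrhenius extraction amounts to a linear fit of ln σ versus 1/T; below is a minimal sketch with synthetic data generated at the doped-film value Ea = 130 meV (the data themselves are not from the paper).

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def activation_energy(temps_k, sigmas):
    """Extract Ea (eV) and sigma0 from sigma = sigma0 * exp(-Ea / (kB * T))
    via a linear fit of ln(sigma) versus 1/T."""
    slope, intercept = np.polyfit(1.0 / np.asarray(temps_k), np.log(sigmas), 1)
    return -slope * K_B, np.exp(intercept)

# Synthetic conductivities generated with Ea = 0.13 eV:
t = np.array([220.0, 240.0, 260.0, 280.0, 300.0])
sigma = 5.0 * np.exp(-0.13 / (K_B * t))
ea, sigma0 = activation_energy(t, sigma)
print(f"Ea = {ea * 1e3:.0f} meV")  # ~130 meV
```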
To corroborate the estimated charge carrier density of N-DMBI-doped p(gNDI-gT2), we employed electron paramagnetic resonance (EPR) spectroscopy (Figure 3d). In the case of (negative) polarons as the predominant charge carrier species, the electron spin density, acquired by measurement against a known reference sample, is directly equivalent to the charge carrier concentration. The lack of an EPR signal for the pristine polymer indicates that the number of unpaired electrons is low. In contrast, for a sample doped with 20 mol % N-DMBI, we readily observe an EPR signal, indicating that n-doping of the polymer has indeed taken place. Quantification of the spectra yields a spin density of ∼1.0 × 10^19 cm^−3 (±0.3 × 10^19 cm^−3). This value is consistent with our estimate for the charge carrier density from the EGDM model, which indicates that polarons are the predominant type of charge carriers because bipolarons would not give rise to an EPR signal. We explain the absence of an EPR signal for the neat polymer, despite considerable background doping (cf. discussion below), with its 50 times lower conductivity and hence polaron concentration, which means that our measurement is not sensitive enough.
Comparison of the number of charge carriers n and the total number of N-DMBI molecules nN-DMBI allows us to estimate the doping efficiency, i.e., the ratio n/nN-DMBI. A dopant concentration of 20 mol % translates into 1.3 × 10^20 cm^−3 N-DMBI molecules, assuming a density of 1 g cm^−3. Hence, we estimate an approximate doping efficiency of about 13% for p(gNDI-gT2) doped with 20 mol % N-DMBI. In comparison, Schlitz et al. have deduced a more than 10 times lower N-DMBI doping efficiency of only 1% for the nonpolar p(NDI2OD-T2). 14 In analogy to several studies of polythiophenes 13,49 and fullerenes 39,43 decorated with more polar oligoethylene glycol moieties, we attribute the higher doping efficiency of N-DMBI-doped p(gNDI-gT2) to enhanced miscibility of the polymer/dopant pair.
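The conversion from mole fraction to dopant number density is simple bookkeeping; the sketch below reproduces it under stated assumptions. The repeat-unit molar mass of p(gNDI-gT2) is a hypothetical round value chosen only to illustrate the arithmetic (the text does not give it), the N-DMBI molar mass is approximate, and the 1 g cm^−3 film density follows the text.

```python
N_A = 6.022e23  # Avogadro's number, mol^-1

def dopant_number_density(mol_frac: float, rho_g_cm3: float,
                          m_repeat: float, m_dopant: float) -> float:
    """Dopant molecules per cm^3 for a dopant:repeat-unit molar ratio
    `mol_frac`, assuming a homogeneous film of density rho_g_cm3."""
    mass_per_mol_repeat = m_repeat + mol_frac * m_dopant  # g per mol of repeat units
    return mol_frac * rho_g_cm3 * N_A / mass_per_mol_repeat

M_REPEAT = 870.0  # assumed p(gNDI-gT2) repeat-unit molar mass, g/mol (illustrative)
M_NDMBI = 267.0   # approximate N-DMBI molar mass, g/mol

n_dopant = dopant_number_density(0.20, 1.0, M_REPEAT, M_NDMBI)  # ~1.3e20 cm^-3
n_carriers = 1.5e19                                             # EGDM/EPR estimate, cm^-3
print(f"n_dopant ~ {n_dopant:.2e} cm^-3; efficiency ~ {n_carriers / n_dopant:.0%}")
```

With these assumed masses the estimate lands near the ~13% quoted in the text; the exact value depends on the true repeat-unit mass.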
The low doping efficiency of polymers such as p(NDI2OD-T2) results in the formation of numerous N-DMBI aggregates on the film surface, which become clearly visible for a doping fraction as low as 9 mol %. 14 We therefore anticipate that the superior doping efficiency of p(gNDI-gT2) reduces the tendency for N-DMBI aggregation. We employed atomic force microscopy (AFM) and scanning electron microscopy (SEM) to study the surface topography of p(gNDI-gT2) thin films (Figure 4a−d; Supporting Information Figures S5−S7). Both AFM and SEM images indicate the formation of dopant aggregates on the surface of the blend films that increase in quantity and size with an increasing amount of N-DMBI. The surface roughness (Supporting Information Figure S8) changes only slightly, from 2 nm for the pristine film to 6 nm after up to 20 mol % N-DMBI is added, but increases sharply for 30 mol % and more. Intriguingly, the surface roughness in the regions between the aggregates is not significantly affected by doping, even at higher doping fractions, which suggests that the nanostructure of the pristine polymer is largely maintained.
[Figure 4 caption fragment: X-ray diffractograms obtained by integration along the (e) out-of-plane (qz) and (f) in-plane (qxy) direction; scattering from lamellar and π-stacking is indicated with (h00) and (0k0); scattering marked with an asterisk (*) is associated with the neat dopant. 2D grazing-incidence wide-angle X-ray scattering images of (g) pristine p(gNDI-gT2) and (h) the polymer doped with 20 mol % N-DMBI.]
To further elucidate the effect of the dopant on the nanostructure of the polymer, we obtained a series of scattering diffractograms in the out-of-plane and in-plane directions (Figure 4e,f) through integration of grazing-incidence wide-angle X-ray scattering (GIWAXS) images of pristine and heavily doped p(gNDI-gT2) (Figure 4g,h; Supporting Information Figure S10). The pristine polymer features distinct scattering peaks from lamellar stacking at q100 ≈ 0.27 Å^−1 and q200 ≈ 0.54 Å^−1 and from π-stacking at q010 ≈ 1.6 Å^−1. Further, in the in-plane scan, two additional peaks are present at qxy ≈ 0.45 Å^−1 and qxy ≈ 0.9 Å^−1. We assign these peaks to the repeat distance along the backbone and argue that, similar to p(NDI2OD-T2), 59−62 two polymorphs are present. The diffraction peaks that we observe for pristine p(gNDI-gT2) are not altered upon doping with 20 mol % N-DMBI. Addition of 50 mol % dopant results in the appearance of a new out-of-plane scattering peak at qz ≈ 1 Å^−1 and in-plane peaks at qxy ≈ 1.3 Å^−1 as well as qxy ≈ 1.75 Å^−1, which we explain with the presence of unreacted excess dopant. Further, annealing of the films does not alter the diffraction from the polymer but results in a slight shift of the peaks associated with excess N-DMBI, as well as a decrease in scattering intensity (Supporting Information Figure S11). We conclude that significant segregation only takes place for a dopant concentration above 20 mol %. Note that a few isolated aggregates are already visible in the AFM images of p(gNDI-gT2) doped with 20 mol % N-DMBI, which are weakly visible in the GIWAXS measurements. Comparison with the nonpolar p(NDI2OD-T2) (cf. study by Schlitz et al. 14) indicates that the polar oligoethylene glycol side chains largely suppress N-DMBI aggregation up to a concentration of about 20 mol %, which is consistent with our picture of enhanced polymer/dopant miscibility.
In a further set of experiments, we characterized the electrical properties of ≈60 nm thin p(gNDI-gT2) films doped with various amounts of N-DMBI (Figure 5). The pristine polymer features an electrical conductivity of 6 × 10^−3 S cm^−1, which arises due to background doping. In a first regime up to 20 mol %, the addition of N-DMBI is concomitant with an increase in electrical conductivity. We reach a value above 10^−1 S cm^−1, which is more than 2 orders of magnitude higher than p(NDI2OD-T2) doped with N-DMBI (Supporting Information Figure S12a), due to the here-reported higher doping efficiency in the case of p(gNDI-gT2). At the same time, for a dopant concentration up to 20 mol %, the Seebeck coefficient decreases from 359 to 93 μV K^−1. Upon further doping, we observe a substantial drop of the electrical conductivity by nearly 2 orders of magnitude. In contrast, in this second regime, the Seebeck coefficient only slightly decreases to, e.g., 70 μV K^−1 for 30 mol % N-DMBI, indicating that the number of mobile charge carriers is not strongly enhanced upon further addition of N-DMBI. We rationalize this behavior with gradual disruption of the polymer nanostructure by excess unreacted dopant, which coincides with the appearance of N-DMBI aggregates (cf. Figure 4). We chose to compare the thermoelectric performance of N-DMBI-doped p(gNDI-gT2) with the empirical correlation that Glaudell et al. have proposed for the thermoelectric power factor of p-doped semiconductors that are not mobility-limited: α²σ ∝ σ^1/2. 10 We observe a good correlation for a doping concentration of up to 20 mol % but a considerable deviation for higher amounts of N-DMBI. This behavior corroborates our picture that excess dopant interrupts the nanostructure of the polymer, causing a considerable reduction in mobility and hence electrical conductivity at high dopant fractions. Overall, we obtain a maximum thermoelectric power factor of 0.4 μW K^−2 m^−1 in the case of doping with only 10 mol % N-DMBI, which is much higher than the highest value of 0.02 μW K^−2 m^−1 measured for p(NDI2OD-T2) (Supporting Information Figure S12c).
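For illustration, the Glaudell-type comparison can be sketched as follows; the (σ, α) pairs below mix two values quoted in the text (pristine and 20 mol %) with an invented intermediate point, so the output is not the paper's dataset.

```python
import numpy as np

def power_factors(sigma_s_cm, alpha_uv_k):
    """alpha^2 * sigma in uW m^-1 K^-2 from sigma in S/cm and alpha in uV/K."""
    sigma_si = np.asarray(sigma_s_cm) * 100.0   # S/cm -> S/m
    alpha_si = np.asarray(alpha_uv_k) * 1e-6    # uV/K -> V/K
    return alpha_si ** 2 * sigma_si * 1e6       # W -> uW

sigma = np.array([0.006, 0.05, 0.1])            # pristine, invented midpoint, 20 mol %
alpha = np.array([359.0, 150.0, 93.0])
pf = power_factors(sigma, alpha)
# Empirical trend alpha^2*sigma proportional to sigma^(1/2), anchored at the last point:
trend = pf[-1] * np.sqrt(sigma / sigma[-1])
print(np.round(pf, 3), np.round(trend, 3))
```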
For p(gNDI-gT2) doped with up to 20 mol % N-DMBI, we anticipate that the electrical conductivity is not limited by the bulk electron mobility. To gain a more complete picture of charge transport in the here-studied system, we estimate the electron mobility μ according to σ = nqμ, where q is the elementary charge, i.e., 1.6 × 10^−19 C. For a dopant concentration of 20 mol %, for which we have deduced the charge carrier density from EGDM as well as EPR, we obtain a value of μ ≈ 0.2 cm² V^−1 s^−1. This value is considerably higher than the electron field-effect mobility μFET ≈ 10^−5 cm² V^−1 s^−1 reported for the pristine polymer, which may arise due to the low degree of polymerization of not more than seven repeat units 50 or due to the presence of polar side chains attached to the backbone of the copolymer. In contrast, the here-studied case of highly doped p(gNDI-gT2) does not appear to suffer from a low electron mobility. This observation is consistent with our recent study on p-doping of P3HT, where we likewise concluded that the molecular weight does not influence the conductivity at high dopant levels. 63
Finally, we investigated the air stability of the electrical conductivity of a doped and a pristine thin film of p(gNDI-gT2) by exposing freshly prepared samples to air while measuring the current−voltage (I−V) behavior at various times. The nonlinear behavior of doped p(gNDI-gT2) samples after 30 min in air prevented us from extracting the electrical conductivity. Instead, we chose to plot the electrical current at 0.5 V (Figure 5c; cf. Supporting Information for I−V curves, Figure S13). The doped and pristine samples show a markedly different response to air exposure. For the pristine polymer, we observe an immediate drop of the current. In contrast, N-DMBI-doped p(gNDI-gT2) is able to maintain a similar current (and hence electrical conductivity) for the first 20 min of air exposure, which suggests that the doped polymer is more air-stable and hence can be handled outside of a protective atmosphere for at least a short period of time. However, after 30 min of air exposure, the current likewise drops by several orders of magnitude. After returning the samples to the glovebox, the current measured for the doped and pristine polymer quickly recovers. Subsequent annealing at 80 °C for 10 min almost restores the current (and hence the electrical conductivity) to the initial value. We tentatively explain this behavior with adsorption of, e.g., oxygen and water from the ambient atmosphere introducing charge traps, which are subsequently desorbed from the film upon re-exposure to a protective atmosphere and annealing. 21 To demonstrate the negative influence of water, we compared the conductance of the doped polymer at ambient conditions before and after placing a water droplet onto the film, which caused a 5-fold decrease in conductance (Supporting Information, Figure S14).
We have studied n-doping of the polymer p(gNDI-gT2), which bears oligoethylene glycol-based side chains, with the hydride dopant N-DMBI. The polar side chains facilitate more effective doping of the semiconducting polymer by increasing the miscibility with the dopant, resulting in a doping efficiency of ∼13% for a sample doped with 20 mol % N-DMBI. We were able to prepare films with a conductivity above 10^−1 S cm^−1 and obtained a thermoelectric power factor of up to 0.4 μW K^−2 m^−1. Additional doping leads to segregation of the dopant, which ultimately results in a drastic reduction in the thermoelectric performance caused by a less optimal nanostructure due to excess unreacted dopant. Moreover, we found that N-DMBI-doped p(gNDI-gT2) displays improved air stability as compared to the pristine polymer. We conclude that polar side chains are a powerful tool for the design of more conductive and stable n-type materials.
Notes
The authors declare no competing financial interest.
Figure 5. (a) Electrical conductivity (σ) and Seebeck coefficient (α); dashed lines are a guide to the eye. (b) Thermoelectric power factor (α²σ) as a function of the electrical conductivity at various dopant fractions; the dashed line represents the empirical relation α²σ ∝ σ^1/2. 10 (c) Air stability of pristine and N-DMBI-doped p(gNDI-gT2): the current at 0.5 V was extracted from I−V curves recorded in nitrogen, in air, and finally again in nitrogen; note that the non-ohmic behavior of several samples prevented us from extracting the electrical conductivity. A contact geometry with a channel length of 1000 μm and a channel width of 30 μm was used for air stability measurements of doped samples, which resulted in similar currents for the pristine and doped samples.
■ ASSOCIATED CONTENT
*S Supporting Information
The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acsenergylett.7b01146: experimental methods; synthesis of p(gNDI-gT2); description of the determination of the disorder parameter and charge carrier density; Figure S1: 1H NMR spectrum of p(gNDI-gT2); Figure S2: MALDI-ToF of p(gNDI-gT2); Figure S3: fits of the activation energy to the charge carrier mobility; Figure S4: fits of the variable-temperature dependency; Figure S5: 3D topography images from AFM scans; Figure S6: AFM height images; Figure S7: SEM images; Figure S8: mean surface roughness plot; Figure S9: TEM images; Figure S10: 2D grazing-incidence wide-angle X-ray scattering images; Figure S11: X-ray diffractograms of annealed films; Figure S12: thermoelectric properties of p(NDI2OD-T2) and p(gNDI-gT2); Figure S13: air stability of the I−V behavior; and Figure S14: influence of a water droplet on the conductance (PDF)
■ AUTHOR INFORMATION
Corresponding Author
*E-mail: christian.muller@chalmers.se. | 2018-04-03T00:53:01.387Z | 2018-01-05T00:00:00.000 | {
"year": 2018,
"sha1": "63be027a0d4e5e9cac1a3bb410eaba0137fa84e5",
"oa_license": "publisher-specific-oa",
"oa_url": "https://doi.org/10.1021/acsenergylett.7b01146",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4dbfd7a610aebae432e3303be85ec72a8a13053b",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
229717940 | pes2o/s2orc | v3-fos-license | All night long: an assessment of the cognitive effects of night shift work in anaesthesiology trainees
Introduction
Sleep, although poorly understood in its function, is important in maintaining cognitive and psychomotor abilities. 1 Insufficient sleep is becoming more prevalent and has been labelled by the Centers for Disease Control and Prevention in the United States as a public health problem. 2 This increase in prevalence is associated with a twenty-four-hour modern society. 2 However, the impacts are most notable and relevant in industries which require an uninterrupted service, such as industrial production, transportation and public safety. 3 These impacts become even more critical in operations which require high-level cognitive performance, including healthcare and the military. 3 Excessive working hours and fatigue in medical training have been a source of increasing concern. 4,5 In the South African context, the South African Society of Anaesthesiologists provided a position statement on workload and fatigue as part of their Practice Guidelines 2018 Revision, in response to concerns raised about the working conditions of junior doctors and trainees in the public health setting. 6 The guidelines recommend that continuous on-call duty be kept below 12.5 hours, that duty of more than 17 hours be discouraged, and that duty in excess of 24 hours be condemned. 6 Further, consecutive duties should allow for an adequate rest period in proportion to the hours worked. However, even these guidelines highlight that the recommendations and suggested corrective strategies are often disregarded in the supposed interest of patient care, and that the information available in this arena is limited. 6 Therefore, solutions for managing the problem in the public sector in South Africa remain elusive.
The primary aim of this study was to quantify the impact of shift work on multiple domains of cognitive function in anaesthesiology trainees at Tygerberg Academic Hospital, evaluated using both subjective and objective measurement tools. A secondary aim was to identify strategies to ameliorate the effects of shift work-related fatigue.
Methods
This was an analytical observational study in which each participant was assessed prior to and following the completion of a 14-hour night shift. Each participant completed a written questionnaire and a battery of cognitive tests, which were administered on a personal laptop computer; data were captured electronically.
Each participant completed a process of informed consent and was allocated a study number so as to de-identify their test results and other data. Testing began with the completion of a written questionnaire. The questionnaire included demographic data, the Karolinska Sleepiness Scale as a subjective measure of fatigue, an assessment of activities on the day preceding the night shift and information regarding the number of night duties completed in the week preceding the night of testing. The Karolinska Sleepiness Scale is demonstrated in Figure 1.
Figure 1: The Karolinska Sleepiness Scale
1. Extremely alert
2. Very alert
3. Alert
4. Rather alert
5. Neither alert nor sleepy
6. Some signs of sleepiness
7. Sleepy, but no effort to keep awake
8. Sleepy, some effort to keep awake
9. Very sleepy, great effort to keep awake, fighting sleep
The battery of cognitive tests then proceeded. The computerised test battery software was provided by Cogstate®, a cognitive science company based in the United States of America. They design computerised cognitive tests to be used commercially and, at the time of this study, they provided these test batteries without charge to those undertaking academic research with affiliation to a university or other recognised body. The battery can be customised to suit the requirements of the study, and the specific tests used in this study were selected by the primary researcher.
Four tests were included in the cognitive test battery and all used a playing card interface. Each test focussed on a particular cognitive domain and the response from the participant was measured in terms of both speed and accuracy. The tests used are described in Table I. Each test continued for approximately four minutes and required the participant to use the mouse to click right for a 'yes' response and left for a 'no' response.
The same cognitive test battery was then performed after the completion of the 14-hour night shift, which ran from 17:00 to 07:00. This was also accompanied by a paper-based questionnaire, which included the Karolinska Sleepiness Scale, an assessment of whether the participant had the opportunity to rest during their shift, and a subjective assessment of the difficulty of the shift. Both questionnaires used before and after the night shift were designed by the author (see Appendix A and B). In the process of data analysis, the Biostatistics Unit at the Faculty of Medicine and Health Sciences, University of Stellenbosch, was consulted, and its participation was integral to the collection, management and statistical analysis of the data.
IBM SPSS version 25 was used to analyse the data. A p-value < 0.05 indicated statistical significance. Paired t-testing and repeated-measures analysis of variance were used to compare the data obtained from the cognitive tests performed prior to and following a night shift. Factors in the model included gender, age, use of stimulants and behavioural factors, such as pre-call activities and pre-call naps. A full factorial model was used. The effect of time was the main exposure of interest, whilst interactions between time and the various factors indicated significant confounding.
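For illustration only, a minimal Python/SciPy sketch of the core pre/post comparison; the arrays below are invented reaction-speed values, not study data, and the study itself used IBM SPSS.

```python
import numpy as np
from scipy import stats

# Reaction speeds (e.g., log10 ms) for the same participants pre- and post-call:
pre = np.array([2.52, 2.48, 2.60, 2.55, 2.50])
post = np.array([2.61, 2.55, 2.70, 2.58, 2.57])

t, p = stats.ttest_rel(pre, post)          # paired t-test of the time effect
print(f"paired t = {t:.2f}, p = {p:.4f}")  # p < 0.05 -> significant change
```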
Results
A description of the demographics and experience of the 29 participants in the study is summarised in the accompanying Table II. Experience is described both in terms of general anaesthesiology exposure and duration of experience at Tygerberg Academic Hospital. The accuracy with which all four tests were completed showed no significant difference between pre-call and post-call testing. Both accuracy and speed results are denoted in Figures 2 and 3.
Secondary outcomes were sought by searching for associations between the primary tests which showed statistically significant changes and the participants' questionnaire answers. Therefore, associations were only pursued in relation to reaction speed for two of the tests, namely detection and identification.
Associations were assessed by multivariate analysis of variance testing and were considered significant if the Wilks' lambda test showed a significant value with p < 0.05, as previously stated. The variables considered were all those captured in the pre- and post-call questionnaires. These were: gender; age; marital status; activities partaken on the pre-call day; napping on the pre-call day; use of stimulants (e.g. caffeine) on the pre-call day and during the night shift; number of night shifts in the preceding week; level of experience in anaesthesia and at Tygerberg Academic Hospital; pre- and post-call self-assessment of fatigue using the Karolinska Sleepiness Scale; duration of breaks and sleep during the night shift, if any; and self-assessment of night shift difficulty on a Likert-type scale. Using this method, no statistically significant correlations with deterioration in reaction time were found.
Subjective assessment of fatigue using the Karolinska Sleepiness Scale did demonstrate a perceived decline in wakefulness by participants. The median score on this scale showed a statistically significant increase from 3 to 6 using the Wilcoxon signed-rank test (p < 0.001), with a large effect size, denoted by a matched-pairs rank-biserial correlation of 1. However, this subjective decline did not correlate statistically with performance on the objective cognitive test battery.
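A short sketch of the Wilcoxon signed-rank comparison together with the matched-pairs rank-biserial effect size; the Karolinska scores below are again illustrative, not study data.

```python
import numpy as np
from scipy import stats

pre = np.array([3, 2, 4, 3, 3, 2, 4, 3])    # Karolinska scores pre-call
post = np.array([6, 5, 7, 6, 6, 5, 8, 6])   # Karolinska scores post-call

w, p = stats.wilcoxon(pre, post)
# Matched-pairs rank-biserial correlation: (favourable minus unfavourable
# rank sums) / total rank sum over nonzero differences; a value of 1 means
# every pair changed in the same direction.
diff = post - pre
ranks = stats.rankdata(np.abs(diff))
rb = (ranks[diff > 0].sum() - ranks[diff < 0].sum()) / ranks[diff != 0].sum()
print(f"W = {w:.0f}, p = {p:.4f}, rank-biserial r = {rb:.2f}")
```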
Sustained wakefulness of greater than 17 hours was confirmed to have occurred in 10.3% of the participants in this study; these participants reported having no pre-call nap as well as no sleep during their night shift. Other participants may have been exposed to a similar period of wakefulness, but as the timing of pre-call naps was not obtained in the questionnaire, this could not be confirmed.
Discussion
After a 14-hour night shift, cognitive testing in anaesthesiology trainees at Tygerberg Academic Hospital demonstrated a statistically significant decline in response time in tests of the following cognitive domains: psychomotor function and attention. However, response time in testing of the domains of visual learning and working memory showed no such significant change. Further, the participants showed no change in accuracy in any of the testing domains.
This speed-accuracy trade-off, where speed of response is foregone in order to maintain accuracy or vice versa, is an example of heuristics, or mental shortcuts. 8 Heuristics are cognitive strategies used in decision-making to obtain adequate solutions while minimising systematic processing, and may be more prevalent in the setting of fatigue. 8 The choice of whether to forego speed or accuracy is determined by circumstance and by what the individual perceives will maximise the reward rate. 9 In this study, the decline in response speed ranged from 13.4-17.8%, dependent on the cognitive domain being tested. The affected cognitive domains were psychomotor function and attention, both of which constitute a large part of the workload during anaesthesia. Attention or vigilance may be defined as 'the ability to remain alertly watchful especially to avoid danger'. 10 Psychomotor function, the relationship between cognitive function and physical movement, is demonstrated through tasks such as dexterity, tracking and reaction time. 10 Other studies investigating decline in cognitive performance related to fatigue and sustained wakefulness in medical trainees have shown similar declines in speed, ranging from 4.64-40%. 4,11,12 These studies had varying study designs and used an array of cognitive tests.
It is important to note that studies have shown that sustained wakefulness of more than 17 hours has a similar impact on psychomotor performance as a blood alcohol concentration of 0.05%. 4,6,12-16 This is equal to the legal driving limit for blood alcohol concentration in South Africa, 17 and such a period of wakefulness could be confirmed in 10.3% of the participants in this study. As such, using alcohol as a comparator in this study may have provided an interesting qualitative parallel with which to assess the decline in response times seen here due to the fatigue associated with night shift work and prolonged wakefulness. Even the World Health Organization recommends that the behavioural effects of drugs can be measured using alcohol as a comparison. 18 However, in order to correlate alcohol intoxication with fatigue, one would further have to consider intra-individual and within-group variability of response to alcohol intoxication. 19 This is an interesting avenue to pursue in further studies of fatigue in medical registrars and medical officers.
The impact of fatigue, which has been demonstrated in this study by a decline in reaction speed in two cognitive domains, manifests itself in risks to both the patient and the medical trainee. It has been demonstrated that drug errors are four times more likely to occur when the medical trainee is fatigued. 13 Epidurals placed at night are six times more likely to result in unintentional dural puncture than those placed during the day. 13 Needlestick injuries increase between three- to five-fold at night time across various disciplines. 13 Medical trainees are twice as likely to be involved in a road traffic accident than the general population. 13,15 This is particularly concerning as 93.1% of participants in this study planned to drive themselves home after completing their night shift, while only 6.89% would have rested at work before driving home. In addition, consideration must also be given to the 'second victim' effect, when an error that results in patient harm leads to devastating psychological consequences for the trainee. 20 Up to 86% of medical trainees responding to survey studies have reported committing a medical error where they attribute fatigue as a causative factor. 10,13 This is despite the limitation placed on the trainee's ability to judge their own level of impairment, as self-awareness is reduced in the setting of fatigue. 4,13 Trainees have also reported adaptation to the effects of chronic sleep loss as their experience increases. 4 However, studies have shown no objective evidence that such acclimatisation exists, and no association was found in this study between age or experience and improved outcomes. 4,12 Self-assessment of cognitive performance is often based on the management of critical tasks, but such tasks are associated with arousal which may overcome fatigue-induced cognitive impairment. Therefore, trainees who base the assessment of their own fatigue on their management of clinical crises will likely overestimate their abilities. 12 This is, again, in keeping with this study's finding that the majority of participants planned to drive home despite subjectively perceiving a decline in wakefulness.
This study also attempted to identify strategies to ameliorate fatigue-induced effects on cognition. Participants were questioned both before and after their night shift; prior to the call, participants were asked to submit the information captured in the pre-call questionnaire (Appendix A), as detailed above. However, the preventive and corrective strategies recommended by the South African Society of Anaesthesiologists must be supported in this regard. 6 These were based on a series of studies on the impact of long work hours on healthcare providers and patients, including the work of Lockley et al. 21 The guidelines include:
• Daytime sleeps before a night shift.
• Naps of at least 40 minutes when excessively fatigued and prior to driving home.
• Improved structure of call and shift rosters.
• Caffeine consumption improves alertness but may impair rest and nap breaks.
• Continuous on-call duty of less than 12.5 hours is suggested, more than 17 hours is to be discouraged, and excess of 24 hours is to be condemned when the main activity is provision of anaesthesia.
• Work schedule must provide for non-clinical activities.
• Scheduling plan to ensure availability and appropriate supervision of junior providers.
• Adequate personnel-to-workload ratios. 6
Several international anaesthesiology bodies have also published guidelines and recommendations regarding fatigue and the anaesthesiologist. The Association of Anaesthetists of Great Britain and Ireland have recommendations which include that job plans should be constructed such that they are not likely to lead to predictable fatigue. 22 They also highlight individual strategies to reduce and mitigate the effects of fatigue, such as good sleep hygiene. 22 The Australian and New Zealand College of Anaesthetists have guidelines which suggest that anaesthesiologist working time should not exceed 12 hours and, where this is not feasible due to staffing or hospital coverage requirements, shift duration should be kept below 16 hours. 23 The World Federation of Societies of Anaesthesiologists, in collaboration with the World Health Organization, recommends that 'a sufficient number of trained anaesthesia providers should be available so that individuals may practice to a high standard without undue fatigue or physical demands', 24 and that 'time should be allocated for education, professional development, administration, research and teaching'. 24
Certain limitations were present in the undertaking of this study. The population used had a sample size of only 29 participants. As such, the study was underpowered to establish important differences, particularly in identifying effective strategies to modify post-call fatigue, in order to make evidence-based recommendations. Using a control group in the study design and considering a non-medical cohort may also have been desirable. These shortcomings should be weighed in the design of future similar studies.
Further, this sample was taken from a single tertiary academic centre, focussed on the speciality of anaesthesiology only, and specifically included 14-hour night shifts, while other call types were excluded. A wider diversity of specialities and call types at a variety of training centres would have provided a broader perspective on the problem of fatigue in medical trainees.
This study highlights that in providing quality health care to our patients, which remains paramount, we must not lose sight of the importance of maintaining practitioner well-being. Because humans are diurnal creatures, the provision of uninterrupted emergency healthcare services will always create obstacles for healthcare providers. However, strategies to minimise the risks associated with practitioner fatigue can be instituted at the individual and organisational level, with adequate staffing being a pre-eminent factor in this resource-limited setting. | 2020-12-24T09:10:08.934Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "941df9b2e94a067539b0460cc932a77116d6755b",
"oa_license": null,
"oa_url": "https://journals.co.za/doi/pdf/10.36303/SAJAA.2020.26.6.2361",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fea4eae0a6d9fb3baa82e36ff3eec7b7e543e709",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
153423570 | pes2o/s2orc | v3-fos-license | Women Entrepreneurship and Innovations in India: An Exploratory Study
Increased female entrepreneurial activity heralds progress for women's rights and the optimization of their economic and social living standards. Women entrepreneurship is synonymous with women empowerment. Like their male counterparts, female entrepreneurs are catalysts for job creation and innovation and make a substantial contribution to the GNP of the country. An economy thrives when women get a level playing field with men. Innovation works as a catalyst, or an instrument, for entrepreneurship. Indian women, despite all the social hurdles, stand tall from the rest of the crowd and are applauded for their achievements in their respective fields. The transformation of the social fabric of Indian society, in terms of the increased educational status of women and varied aspirations for better living, has necessitated a change in the lifestyle of Indian women. This paper endeavours to explore studies related to women entrepreneurship and innovation in India. A few examples from Gujarat, India, have been taken to understand the study in a better way.
WOMEN ENTREPRENEURSHIP AND INNOVATION IN INDIA: AN EXPLORATORY STUDY
Lazear (2005), as cited by Al-Sadi et al. (n.d.), defines entrepreneurship as "the process of assembling necessary factors of production consisting of human, physical, and information resources and doing so in an efficient manner" and entrepreneurs as those who "put people together in particular ways and combine them with physical capital and ideas to create a new product or to produce an existing one".
Montanye (2006), as cited by Al-Sadi et al. (n.d.), considers entrepreneurship "as a factor of production, linked to innovation and risk taking, where entrepreneurial compensations are tied to uncertainty and profits". Women entrepreneurship has tremendous potential in empowering women and transforming society. It has been recognized as an important source of economic growth: women entrepreneurs create new jobs for themselves and others, thus contributing solutions to organization and business problems. According to Rao et al. (n.d.), the emergence of women on the economic scene as entrepreneurs is a significant development in the emancipation of women, securing them a place in the society which they have all along deserved. The hidden entrepreneurial potential of women has gradually been changing with the growing sensitivity to the role of economic status in the society. Premalatha (2010) observes that women are the architects of human society. Women are a significant force in the entrepreneurial world, as they make a noteworthy contribution to economic development, and women-owned businesses are critical to economic prosperity. A woman entrepreneur is one who starts a business and manages it independently and tactfully, taking all the risks and facing the challenges boldly with the determination to be successful. Women entrepreneurship is an economic activity in which women think of a business enterprise, initiate it, organize and combine the factors of production, operate the enterprise, and undertake the risks and handle the economic uncertainty involved in running it.
Women entrepreneurship has crossed the stage of transition and is finally in flight, but it still has a long way to go to emerge as a successful business giant. Ganesamurthy (2007), in his book Economic Empowerment of Women, defines a woman entrepreneur as "a confident, innovative and creative woman capable of achieving self economic independence individually or in collaboration, [who] generates employment opportunities for others through initiating, establishing and running the enterprise by keeping pace with her personal, family and social life". The Economist notes that "educating more women in developing countries is likely to raise the productive potential of an economy significantly".
According to The Female Poverty Trap (2001), empowering women entrepreneurs means making women self-reliant, giving them the liberty to make choices in their lives and providing them with information and knowledge to take decisions. Education and employment are the only two methods that can empower women.
Objectives of the Study
The major objective of this article was to explore studies related to women entrepreneurship and innovation in India, and to understand how innovation in entrepreneurship leads to the success and growth of an enterprise.
To understand how innovation in entrepreneurship leads to the success of an enterprise, certain examples from the city of Surat, Gujarat, India, have been taken.
The research work leading to this paper is based entirely on secondary data, drawn from relevant books, journals, magazines, newspapers and the Internet.
Literature Review
Women entrepreneurship is an essential part of human resource development. Women have become aware of their existence, their rights and their work situation due to growing industrialization, urbanization and social legislation, and with the spread of higher education and awareness, the emergence of women-owned businesses is speedily increasing in the economies of almost all countries. The examples assume that women explore the prospects of starting a new enterprise, undertake risks, introduce new innovations, coordinate the administration and control of business, provide effective leadership in all aspects, and have proved their footing in the male-dominated business arena of textiles. The disconnect between the two spheres of everyday existence, the proliferation of loci of identity on the one hand and the endeavour to combine so many elements (times, relational style, etc.) on the other, is depicted as an identity resource for female entrepreneurs because it gives rise to opportunities and the ability to develop specific organizational, relational and institutional skills (Bruni, Gherardi and Poggio, 2004). What we need is an entrepreneurial society in which innovation and women entrepreneurship are normal, steady and continuous. Just as management has become the specific organ of all contemporary institutions, and the integrating organ of our society of organizations, so innovation and entrepreneurship have to become an integral, life-sustaining activity in our organizations, our economy and our society (Drucker, 1985).
The studies reviewed are summarised below (serial number; author and year; study; findings):
1. [Author and study title not recovered from the source.] Findings: The study suggested that if the marginal innovation is done under pressure from outside, better venture capital increases the innovation rate. If the marginal innovation would have been implemented without outside pressure, better venture capital, by decreasing the rents of being the incumbent firm, decreases the rate of innovation.
2. Bowen and Hisrich. [Study title and findings not recovered from the source.]
3.-4. [Entries not recovered; a surviving fragment reads:] The study gives mainly exploratory information on support activities to convert techno-innovation into techno-entrepreneurship, with the main focus on the Technology Business Incubation approach in India.
5. Cohoon, Wadhwa and Mitchell (2010), A Detailed Exploration of Men and Women Entrepreneurs' Motivations, Background and Experiences. Findings: The study identifies the top five financial and psychological factors motivating women to become entrepreneurs: the desire to build wealth, the wish to capitalize on business ideas they had, the appeal of start-up culture, a long-standing desire to own their own company, and the fact that working for someone else did not appeal to them. The challenges are related more to entrepreneurship than to gender.
6. Darrene, Harpel and Mayer (2008), Finding the Relationship between Elements of Human Capital and Self-Employment among Women. Findings: The study showed that self-employed women differ on most human capital variables from salary- and wage-earning women. It also revealed that the educational attainment level rises faster for self-employed women than for other working women.
7. Das (2000), Women Entrepreneurs of SMEs in Two States of India, viz. Tamil Nadu and Kerala. Findings: The initial problems faced by women entrepreneurs are quite similar to those faced by women in Western countries. However, Indian women entrepreneurs faced a lower level of work-family conflict and were also found to differ from their counterparts in Western countries in the reasons for starting and succeeding in business.
8. [Author not recovered], Women Entrepreneurs in India: A Socio-Economic Study of Delhi. Findings: Studying the women entrepreneurs of India, the author reported that women lacked the confidence to start their own ventures; social pressure restricting freedom of movement and financial organizations not encouraging women entrepreneurs were given as reasons for women's unwillingness to come forward to take up entrepreneurship.
9. Erik Stam (2008), Entrepreneurship and Innovation Policy. Findings: The paper discusses the nature of entrepreneurship and its relation to innovation, provides an overview of theory and empirical research on the relation between entrepreneurship, innovation and economic growth, and continues with a study of entrepreneurship and innovation in the Netherlands in international and historical perspective.
10. [Author not recovered], Evaluation of the Research and Publication Contribution in the Area of Women Entrepreneurship. Findings: The study categorized various journals and resources of research on the basis of certain parameters concerned with women entrepreneurship, such as gender discrimination, personal attributes, financing challenges, business units, context and feminist perspectives.
11. Halifax (2008), Micro Credit for Women Entrepreneurs. Findings: The study showed that the flow of micro credit is a pushing factor for the promotion of micro enterprises. This is evidenced by the fact that women self-help groups (SHGs) are the purveyors of the major credit requirements of new as well as existing micro entrepreneurs.
12. Hasemi (1996), Women Entrepreneurship in India: Review (www.siteresources.worldbank.org). Findings: Examining the control women exercise over loans, the study concluded that micro credit had a negative impact on women's empowerment. Less than 18% of the women in the sample studies retained full control over the loans they availed from credit programmes, and thirty-nine per cent of the respondents were judged to have very little control over the loans.
13. Jalbert (2000), To Explore the Role of Women Entrepreneurs in a Global Economy. Findings: The study showed that women business owners are making significant contributions to global economic health, national competitiveness and community commerce by bringing many assets to the global market.
14. Kumari, S. (2012), Challenges and Opportunities for Women Entrepreneurship in India under Globalisation. Findings: Micro finance programmes targeting women are often promoted as a component of packages to absorb the shock of structural adjustment programmes and globalisation, with macroeconomic and social policy prescriptions which seriously disadvantage women, decrease the public-sector availability of complementary services and remove any existing welfare nets for the very poor.
15. Lall and Sahai (2008). [Study title truncated in the source.] Findings: The study suggested that, though there has been considerable growth in the number of women opting to work in family-owned businesses, they still have lower status and face more operational challenges in running the business.
16. Malhotra, Anju; Schuler, Sidney; and Boender, Carol (2002), Measuring Women's Empowerment as a Variable in International Development. Findings: They reviewed the many ways that empowerment could be measured and suggested that researchers should pay attention to the process in which empowerment occurs.
17. [Author not recovered], Women's Empowerment in Rural India. Findings: They reviewed the many ways that empowerment could be measured and suggested that researchers should pay attention to the process in which empowerment occurs.
18. Rao et al., Women Entrepreneurship in India (A Case Study in Andhra Pradesh). Findings: The study relating to women entrepreneurs in rural areas further reveals that training and awareness regarding different agencies have proved beneficial for women entrepreneurs in building confidence.
19. Singh, N.P.; Sehgal, P.; Tinani, M.; and Sengupta, R. (1986), Successful Women Entrepreneurs: Their Identity, Expectations and Problems: An Exploratory Research Study. Findings: The reasons for the choice of business were, in order, high demand for the product, processing skills, a ready market, future prospects and creativity. The reasons for women becoming entrepreneurs were to keep busy, to earn money of their own, to pursue a hobby as an earning activity, and accident or circumstances beyond their control.
20. Singh, N.P. and Sengupta, R. (1985), Potential Women Entrepreneurs: Their Profile, Vision and Motivation: An Exploratory Study. Findings: The study revealed that educationally better-qualified women perceived entrepreneurship as a challenge, an ambition and a means of doing something fruitful, whereas educationally less-qualified entrepreneurs perceived the EDP training only as a tool for earning quick money. The majority of the potential entrepreneurs had clarity about their projects but needed moral support from males and other family members for setting up their enterprises.
21. Shyamala (1999), Entrepreneurship Development for Women. Findings: Entrepreneurial development is a complex phenomenon. Entrepreneurs play a key role in the economic development of a country, and entrepreneurship may be regarded as a powerful tool for the economic development of a predominantly agricultural country like India.
22. Surti, K. and Sarupriya, D. (1983), Psychological Factors Affecting Women Entrepreneurs: Some Findings. Findings: Results indicated that unmarried women entrepreneurs experienced less stress and self-role distance than married women entrepreneurs. Women entrepreneurs from joint families experienced less stress, probably because they share their problems with other family members. External locus of control was significantly related to role stress, and fear of success was related to the result-inadequacy and role-inadequacy dimensions of stress. While many entrepreneurs used intrapersistent coping styles, such as taking action to solve problems, avoidance was more common than approach-oriented styles of coping.
23.-24. Singh (2008) and one further entry. [Titles garbled in the source; the surviving findings read:] This study found that in Asian developing countries SMEs are gaining overwhelming importance, accounting on average for more than 95% of all firms in all sectors per country. The study also revealed that most women entrepreneurs in SMEs fall into the category of forced entrepreneurs seeking better family incomes.
25. Vishwanathan, Renuka (2001), Opportunities and Challenges for Women in Business. Findings: The strategy of self-help groups was used to empower vulnerable and powerless poor women through DWCRA. Awareness programmes and group activities were provided, and emphasis was placed on setting up local "skill exchanges" that helped women improve their economic status. The author cited the Indira Mahila Yojana, whose basic principle was that economic empowerment would improve family relationships and domestic work culture, leading to social empowerment, more equitable participation of women in family decision-making that helps them acquire leadership qualities, and political empowerment.
26. Yusuf, S. (2007), From Creativity to Innovation. Findings: The paper suggested that development and commercialization call for expertise, ingenuity and entrepreneurial creativity to achieve success. It is the developmental efforts, organisational capabilities and resources which ultimately ensure that the innovation generated by a creative society leads to economic growth
WOMEN ENTREPRENEURSHIP IN INDIA
Ganesamurthy, V. S. (2007): according to the Government of India, a woman entrepreneur is defined as an enterprise owned and controlled by a woman, having a minimum financial interest of 51 per cent of the capital and giving at least 51 per cent of the employment generated in the enterprise to women. It has been globally recognized that women's empowerment can be a rewarding strategy for overall economic and social development. This has resulted in significant changes in the approach to assisting women, in a continuum ranging from welfare to development.
Entrepreneurship development among women is one activity that promises encouraging results. By motivating, training and assisting women towards forming and running business ventures, it may be possible to tackle many gender issues. Jahanshahi et al. (2010): Economic globalization has encouraged the expansion of female business ownership. Women-owned businesses are increasing rapidly in the economies of almost all countries. The hidden entrepreneurial potential of women has gradually been changing with the growing sensitivity to their role and economic status in society.
"Women Entrepreneur" is a person who accepts challenging role to meet her personal needs and become economically independent. A strong desire to do something positive is an inbuilt quality of entrepreneurial women, who is capable of contributing values in both family and social life. With the advent of media, women are aware of their own traits, rights and also the work situations. Women able (2010) Innovation is defines as "the implementation of a new or significantly improved product (good or service) or processes, a new marketing method or a new organizational method in business practices, work place organization or external relations". Halifax (2008) Numerous statics show that even during the years of economic crisis and recession, the one robust sector providing economic growth, increased productivity and employment has been that of Small Sized Enterprises (SMEs).
Women Entrepreneurship and Innovation
Schumpeter (as cited by Erik Stam, 2008) defines entrepreneurs as individuals who carry out new combinations (i.e., innovations). Schumpeter distinguishes four roles in the process of innovation: the inventor, who invents a new idea; the entrepreneur, who commercializes this new idea; the capitalist, who provides the financial resources to the entrepreneur (and bears the risk of the innovation project); and the manager, who takes care of routine day-to-day corporate management.
Shahid Yusuf (as cited in Paul Romer, 2007) predicts that the country which will lead in the 21st century will be the one which implements innovations, or meta-ideas, supporting the production of new ideas in the private sector. Bulsara et al. (2009): Innovation is the introduction of new ideas, goods, services and practices which are intended to be useful (though a number of unsuccessful innovations can be found throughout history). The main driver for innovation is often the courage and energy to better the world, and an essential element of innovation is its application in a commercially successful way. Innovation has punctuated and changed human history (consider the development of electricity, steam engines, motor vehicles, etc.). Orhan et al. (2001): Academics and governments appear to be concentrating on and encouraging entrepreneurship because it symbolizes innovation and a dynamic economy. Female entrepreneurs have been identified as a "major force for innovation and job creation" (Organisation for Economic Co-operation and Development, 1997), and therefore much research about women business owners has concentrated on their motivation to become entrepreneurs. N. S. Nagar, in his book "Women and Employment" (2008), argues that countries which do not capitalize on the full potential of one half of their societies are misallocating their human resources and compromising their competitive potential. Women entrepreneurs are reported to be growing at a faster rate than the economy as a whole in several countries. Their contribution could become even more significant if their potential were fully tapped, which is possible only when the various obstacles and restrictions are removed. India stands as one of the fastest-developing countries in the world, and economists have, to a great extent, realized the potential of its women. One state which welcomes women entrepreneurs and their innovation is the state of Gujarat.
The women of Gujarat have flourished through hard work, dedication and innovation. Lijjat Papad (a handmade thin, crisp, circular-shaped Indian food, served as an accompaniment to Indian meals) is the classic example of a small group of seven women coming together to start a venture for a sustainable livelihood using the only skill they had, i.e., cooking. It is considered one of the most remarkable entrepreneurial organizations to have built up and sustained the trust, productivity and expectations of its customers. Other examples worth observing of how hobbies, when combined with innovation, can lead to a full-time business are given as follows. Example 1: Phoenix Soft Toys Creation describes the case of a young woman entrepreneur from Chorwad, Saurashtra, India, who used to make toys as a hobby, then moved to puppet making and converted these skills into a business. Business for her was not only profit maximization but also giving something more to society through women's empowerment, education and art, and making a difference. Using her innovativeness, she converted her hobby into a full-time career while employing other women. The present case also assumes that changes in demand conditions (e.g., technological, market, demographic, political, institutional and cultural developments) create opportunities that are not equally obvious to everyone, but are discovered and exploited because some individuals have an advantage in discovering specific opportunities. This advantage is provided by these individuals' access to idiosyncratic information and resources, an advantage generated by their prior experiences and their position in social networks. Finally, the world needs to unleash the power of women's entrepreneurship to make our economies and societies stronger and more sustainable.
Ironically, traditional measures of economic development and business performance do not often capture the true transformational benefits of these change-inducing enterprises. Lerner (2002), as cited by Womenable, found that innovation is higher in growth-oriented firms, meaning that owner intent and motivation play a role in a firm's innovation behavior. In this case, it was the intent and motivation of the woman entrepreneur that gave her growth and success.
Example 2: Rink's Creation
This particular case reflects the journey of a tenacious woman who withstood societal and familial norms and made her dreams spectacularly tangible. An entrepreneur's path, rarely a straight line, is even more convoluted for women. Rinku Lakdawala hails from a traditional and conservative large Gujarati family of five siblings: four sisters and one brother. Coming from a modest financial background, Rinku believed in constantly improving, educating and updating herself and being among the best. She started her career as a dress designer in her husband's garage. She believed that great attention should be paid to technology upgradation and modern manufacturing practices. To maintain a near-perfect production set-up, every year there is a need to invest in facilities and the renewal of existing machines, along with a drive to modernize procurement procedures, which is very important for design and development.
Initially limited to merely hand embroidery work, she later diversified into machine embroidery after procuring two automatic embroidery machines; currently her unit has seven automatic embroidery machines. Competition in this type of business is cut-throat, and being a woman, she continues to face these challenges to a more acute extent. A competitive edge in the fashion industry can be achieved only by continuous investment in technology and manpower that delivers greater productivity and results in higher-quality output. Innovation, creativity and product design are becoming the key prerequisites for success, yet none of this would result in success without proper market segmentation and a focused orientation on profitable market niches. Overcoming all these challenges and keeping herself abreast of the most innovative ideas, Rinku is one of the most successful women entrepreneurs in the city of Surat, Gujarat. She has been awarded the "Bhaskar Woman of the Year Award, 2012" and has also won the "L. P. Savani Women Entrepreneur Award, 2012", an award given in appreciation of those who have achieved extraordinary success and done commendable work in their respective fields. Rinku represents those enterprises that are managed by women, and managed extraordinarily, with women as the decision-makers. She represents a group of women entrepreneurs who have broken away from the beaten track and explored new avenues of economic participation. She has competed with men and successfully stood alongside them in every walk of life, and business is no exception. These women leaders are assertive, persuasive and willing to take risks; they have managed to survive and succeed in cut-throat competition through their hard work, diligence and perseverance. She has unquestionably established the fact that women can be as capable and successful entrepreneurs as men in business and industry.
Example 3: Designz Boutique
Women entrepreneurs are creating jobs and innovation and contributing to the GNP of various economies as much as their male counterparts. There is growing evidence that women are more likely to reinvest their profits and surpluses in education, their family and their community. The present case study justifies the above: Bhavna Kikla started her journey of entrepreneurship as a passion, but it eventually became a means of social and financial sustenance for her family.
Women choose self-employment over other possibilities on the labour market, such as being a paid employee or an unpaid family worker. Born into an extremely well-off family of Surat, Bhavna was exposed to high fashion early in life and took to it like a fish to water. However, her interest in fashion was not limited to her own and her family members' benefit; she wanted to further explore this exciting new world.
Entrepreneurial opportunities are not equally obvious to everyone, but the present case assumes that they are equally available to anyone with the knack and the wherewithal to search for them. Opportunities themselves are unstructured, and the plusses and minuses of opportunities largely depend on idiosyncratic individual differences in perception due to experience, education and upbringing. Having tested the waters and gained valuable confidence, Bhavna started her journey of entrepreneurship as a passion in 2001. Due to financial business losses, her husband and her in-laws also joined her in her business. With new ideas, innovation and the support of her family, Bhavna expanded her business and is running it successfully in the city of Surat.
Example 4: Ravi Fashion's
This particular case reflects the journey of a woman who not only created a special niche market for her products but also set a trend of successful enterprise for others to emulate, her youth and enthusiasm providing a fresh impetus for others to follow. Asha Nakrani completed her schooling and thereafter took up a three-year diploma in fashion designing from NIFD (National Institute of Fashion Designing). She was a talented, sincere and enthusiastic student who could effectively use her knowledge to augment her imagination and skills. Starting one's own business is all about having a dream and then taking concrete steps to ensure the business gets off to a successful start; Asha considered the factors that would determine her business trajectory before launching her new venture. She is talented at developing good relations in the market and can spot opportunities for further expansion through her keen foresight.
Asha, being a self-motivated and strong person, seized every opportunity to make her trade larger and to encompass the whole value chain. Setting up embroidery machines at Kapodara (Surat) and stitching units at Udhna (Surat) in 2011-12 required more capital, and her father readily made the required financial assistance available.
For the embroidery business, she partnered with a relative, with herself as the sole working partner. She also owns the stitching unit, where she has employed a working partner to look after the unit throughout the day; she shares the profit so that he works with full dedication and motivation, and when the workload in the stitching unit increases, she herself becomes a working partner. She has had an office in the New Bombay Textile Market (one of the leading textile markets in Surat) for four years, which serves as a functional base. A successful fashion entrepreneur needs innovation to be able to identify opportunities in a climate of ambiguity and chaos, together with passion and enthusiasm for her output, to provide impetus to her drive to constantly improve her products' features. She also needs determination and persistence to drive ideas through the many obstacles and challenges she comes across. Alertness and a sharp eye for contemporary fashion and embroidery trends have kept Asha at the forefront of her business.
CONCLUSION AND DISCUSSION
Today we are in a better position, wherein women's participation in the field of entrepreneurship is increasing at a considerable rate. Efforts taken at the level of the economy have brought the promise of equality of opportunity in all spheres to Indian women, and laws guaranteeing equal rights of participation in the political process and equal opportunities and rights in education and employment have been enacted. Unfortunately, however, government-sponsored development activities have benefited only a small section of women, namely urban middle-class women. Women constitute nearly 45% of the Indian population. At this juncture, effective steps are needed to provide entrepreneurial awareness, orientation and skill-development programmes to women. The role of women entrepreneurs in economic development is also being recognized, and steps are being taken to promote women entrepreneurship.
A resurgence of entrepreneurship is the need of the hour, with emphasis on educating the women strata of the population, spreading awareness and consciousness among women so that they can excel in the enterprise field, and making them realize their strengths, their important position in society and the great contribution they can make to their industry as well as the entire economy. Women entrepreneurship must be molded properly with entrepreneurial traits and skills to meet the changing trends and challenges of global markets, and must be competent enough to sustain and strive for excellence in the entrepreneurial arena. If every citizen works with such an attitude towards respecting the important position occupied by women in society and understanding their vital role in the modern business field, then we can soon expect to overcome our own conservative and rigid thought process, which is the biggest barrier in our country's development process.
"year": 2014,
"sha1": "cf71f9f2b0ec1df41220b5b1a746c45e32a94aeb",
"oa_license": null,
"oa_url": "https://doi.org/10.5585/iji.v2i1.2",
"oa_status": "GOLD",
"pdf_src": "ElsevierPush",
"pdf_hash": "6617f88e3796812517b5e1dda1aa8278982d65ca",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
17547218 | pes2o/s2orc | v3-fos-license | Novel Conopeptides of Largely Unexplored Indo Pacific Conus sp.
Cone snails are predatory creatures using venom as a weapon for prey capture and defense. Since this venom is neurotoxic, the venom gland is considered as an enormous collection of pharmacologically interesting compounds having a broad spectrum of targets. As such, cone snail peptides represent an interesting treasure for drug development. Here, we report five novel peptides isolated from the venom of Conus longurionis, Conus asiaticus and Conus australis. Lo6/7a and Lo6/7b were retrieved from C. longurionis and have a cysteine framework VI/VII. Lo6/7b has an exceptional amino acid sequence because no similar conopeptide has been described to date (similarity percentage <50%). A third peptide, Asi3a from C. asiaticus, has a typical framework III Cys arrangement, classifying the peptide in the M-superfamily. Asi14a, another peptide of C. asiaticus, belongs to framework XIV peptides and has a unique amino acid sequence. Finally, AusB is a novel conopeptide from C. australis. The peptide has only one disulfide bond, but is structurally very different as compared to other disulfide-poor peptides. The peptides were screened on nAChRs, NaV and KV channels depending on their cysteine framework and proposed classification. No targets could be attributed to the peptides, pointing to novel functionalities. Moreover, in the quest of identifying novel pharmacological targets, the peptides were tested for antagonistic activity against a broad panel of Gram-negative and Gram-positive bacteria, as well as two yeast strains.
Introduction
The existence of venomous animals represents a unique starting point for bio-discovery and drug design. Over millions of years, nature has optimized the constituents of venoms (i.e., peptide toxins) into the most selective and potent tools on Earth [1,2]. Therefore, such toxins can be used as lead compounds for a novel generation of drugs. The venom peptides from cone snails (genus Conus) are generally small cysteine-rich peptides with the unique feature of being highly selective and potent ligands for a wide range of ion channels and receptors [3]. Consequently, they are recognized as lead compounds for drug development.
Isolation of Novel Conotoxins from C. longurionis, C. asiaticus and C. australis
Venom glands of three largely unexplored cone snail species, C. longurionis, C. asiaticus and C. australis, were investigated. Samples were purified via a series of HPLC purification steps. Amino acid sequences of the purified compounds were determined via N-terminal Edman degradation, revealing five new conotoxin sequences (Table 1).

Table 1. Overview of the peptides discussed in this work.

Name | Amino Acid Sequence | Cysteine Arrangement
The first peptide, Lo6/7a, is a novel 24-residue conotoxin with a molecular mass of 2583.0 Da (folded). Together with the mass of the unfolded synthetic peptide (2589.0 Da), determined by LC-MS, this is in perfect agreement with the theoretically calculated masses for oxidized (2583.0 Da) and reduced (2589.0 Da) Lo6/7a. According to its cysteine pattern, C-C-CC-C-C, Lo6/7a belongs to framework VI/VII, covered by different superfamilies: I, O and M.
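The 6 Da gap between the reduced and oxidized masses reflects the loss of two hydrogen atoms per disulfide bond formed on folding. A minimal sketch of this arithmetic, using the values reported in the text (the helper function name is ours, not from the study):

```python
# Each disulfide bond formed during oxidative folding removes two hydrogens
# (~2 x 1.008 Da), so a peptide with three disulfides loses ~6 Da on folding.
H = 1.008  # average mass of hydrogen, Da

def oxidized_mass(reduced_mass: float, n_disulfides: int) -> float:
    """Expected average mass after forming n_disulfides S-S bonds."""
    return reduced_mass - 2 * H * n_disulfides

# Lo6/7a: reduced (synthetic, unfolded) 2589.0 Da -> oxidized (native) ~2583.0 Da
print(oxidized_mass(2589.0, 3))   # ~2583.0, matching the MALDI-TOF value
# Lo6/7b: 2781.1 Da reduced -> ~2775.1 Da oxidized
print(oxidized_mass(2781.1, 3))   # ~2775.1
```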
The second peptide, Lo6/7b, has a molecular mass of 2775.1 Da (folded peptide) determined by MALDI-TOF. This mass resembles the mass of the unfolded synthetic peptide (2781.1 Da), determined by LC-MS, and the theoretically calculated masses of oxidized (2775.1 Da) and reduced (2781.1 Da) Lo6/7b. The cysteine pattern of Lo6/7b is the same as for Lo6/7a, C-C-CC-C-C, and therefore it also belongs to superfamilies I, O and M and framework VI/VII. The RP-HPLC purification chromatogram of C. longurionis venom is shown in Figure 1. Asi3a and Asi14a were retrieved from C. asiaticus, a worm-hunting cone snail species found in the Indian Ocean at the coast of Tamil Nadu (India), and represent the first conopeptides isolated from this species. The molecular mass of Asi3a is 1697.6 Da (folded), obtained by MALDI-TOF, which is in perfect agreement with the mass of the synthetic unfolded peptide (1703.0 Da) and the theoretical masses of the folded (1696.1 Da) and unfolded (1702.1 Da) peptide. Asi3a has cysteine framework CC-C-C-CC, characteristic of framework III, found in the M-superfamily.
Asi14a has a molecular mass of 1697.6 Da (folded), determined by MALDI-TOF. The mass of the unfolded synthetic peptide is 1700.0 Da. The calculated mass of oxidized Asi14a is 1695.9 Da, and the reduced calculated mass is 1699.9 Da. Asi14a has a framework XIV cysteine arrangement (C-C-C-C), as found in the A, L and J superfamilies. The purification of the crude venom of C. asiaticus is shown in Figure 2. The RP-HPLC chromatogram of the first purification step is shown in Figure 2A. In Figure 2B, the ion exchange chromatogram is shown, whereas a third purification via RP-HPLC revealed the purified peptides Asi14a (Figure 2C) and Asi3a (Figure 2D).
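The framework labels used above (VI/VII = C-C-CC-C-C, III = CC-C-C-CC, XIV = C-C-C-C) follow directly from how the cysteines cluster along the sequence. A minimal sketch of deriving that arrangement string; the example sequence is a hypothetical placeholder, since the Table 1 sequences are not reproduced here:

```python
import re

def cys_arrangement(seq: str) -> str:
    """Collapse a peptide sequence to its cysteine pattern, e.g. 'C-C-CC-C-C'.

    Runs of adjacent cysteines stay together; non-Cys stretches become '-'.
    """
    runs = re.findall(r"C+", seq)   # consecutive-Cys blocks, in order
    return "-".join(runs)

# Hypothetical framework VI/VII-like sequence (not an actual conopeptide):
print(cys_arrangement("ACSDCGKRCCNAPCRLCG"))  # -> 'C-C-CC-C-C'
```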
Figure 2. Purification of crude venom of C. asiaticus. (A) RP-HPLC chromatogram of C. asiaticus venom. The peak indicated in the box was collected for further purification; (B) Ion exchange chromatogram of the peak fractions collected in the first purification step. The indicated peaks were subjected to another RP-HPLC purification step; (C) RP-HPLC 1 chromatogram as indicated in (B). Edman degradation of the first peak, Asi14a, revealed the amino acid sequence as described; (D) RP-HPLC 2 chromatogram, as indicated in Figure 2B. Edman degradation of this peak, Asi3a, revealed the amino acid sequence as described above. The brown lines in (A,C,D) are acetonitrile gradients. The line in (B) shows the ion exchange gradient.
The last peptide was found in the venom of C. australis and was named AusB. It has a molecular mass of 2030.8 Da, determined by MALDI-TOF. The molecular mass of the unfolded synthetic peptide is 2032.2 Da, determined by LC-MS, and correlates with the calculated masses of the oxidized peptide (2030.2 Da) and for reduced AusB (2032.2 Da). Figure 3 shows the RP-HPLC purification chromatogram.
Electrophysiological Screening against Voltage-Gated and Ligand-Gated Ion Channels
The masses of the synthetic peptides were determined by MALDI-TOF, validating that the peptides were folded successfully. Peptides were purified and electrophysiologically screened against a panel of NaV, KV and CaV channels, as well as nAChRs, according to their framework.
Lo6/7a and Lo6/7b have a cysteine framework VI/VII, and both belong to the O or I3-superfamily according to the conotoxin classification described by Akondi et al. [2]. Therefore, the folded peptides were screened against a panel of NaV and KV channels (Figure 4). Preliminary screening on CaV channels did not reveal significant inhibition of the channel (results not shown). Therefore, up to now, no target could be assigned to Lo6/7a and Lo6/7b.
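The choice of screening panel follows mechanically from this framework-based classification. A minimal sketch encoding the lookup as used throughout this section; the dictionary and names are our own summary of the text, not code from the study:

```python
# Candidate superfamilies and typical targets per cysteine framework,
# as summarized in the text (frameworks VI/VII, III and XIV).
FRAMEWORK_INFO = {
    "VI/VII": {"pattern": "C-C-CC-C-C",
               "superfamilies": ["I", "O", "M"],
               "screen": ["NaV", "KV", "CaV"]},
    "III":    {"pattern": "CC-C-C-CC",
               "superfamilies": ["M"],
               "screen": ["NaV", "KV", "nAChR"]},
    "XIV":    {"pattern": "C-C-C-C",
               "superfamilies": ["A", "L", "J"],
               "screen": ["KV", "nAChR"]},
}

for name, fw in [("Lo6/7a", "VI/VII"), ("Asi3a", "III"), ("Asi14a", "XIV")]:
    info = FRAMEWORK_INFO[fw]
    print(f"{name}: framework {fw} ({info['pattern']}) -> "
          f"superfamilies {info['superfamilies']}, screen on {info['screen']}")
```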
Asi3a
Asi3a belongs to the M-superfamily, from which other members target NaVs, KVs or nAChRs. Therefore, we performed an electrophysiological screening against a panel of these channels, visualized in Figure 5. None of the tested ion channels was influenced by this peptide.
Figure 5. Electrophysiological screening of Asi3a on NaV and KV channels (10 µM) and nAChRs (5 µM). * Represents traces after toxin application and overlapping control traces before toxin application.
Asi14a
Asi14a is another peptide found in the venom of C. asiaticus. Conopeptides from the J- or L-superfamily mainly act on nAChRs and KVs. Therefore, we electrophysiologically screened Asi14a against a selected panel of these channels (Figure 6). No antagonistic activity of Asi14a could be observed.
Figure 6. Electrophysiological screening of Asi14a on KV channels (10 µM) and nAChRs (5 µM). * Represents traces after toxin application and overlapping control traces before toxin application.
AusB
AusB from C. australis is a disulfide-poor peptide, having only one Cys-bond. Since no function was previously assigned to such conopeptides, its effect on a large panel of channels was evaluated. The peptide was electrophysiologically screened against a panel of NaVs, KVs and nAChRs as expressed heterologously in Xenopus laevis oocytes (Figure 7). Up to now, no target could be identified.
Antibacterial Activity
The five new conopeptides were screened against 29 Gram-negative and 10 Gram-positive bacterial strains and two yeast strains (Table 2). A turbid zone of inhibition was only obtained for Lo6/7a against the Gram-positive strain Bacillus megaterium ATCC13632. For the other peptides, no growth inhibition was observed.
Table 2. List of yeasts, Gram-negative and Gram-positive bacteria used in antimicrobial screening of five different conopeptides.
Gram-Negative Bacteria | Gram-Positive Bacteria
Discussion
In this report, we describe five novel conopeptides, discovered in the venom of C. longurionis, C. asiaticus and C. australis. These novel sequences have different cysteine frameworks and some of them likely represent new subgroups, based on sequence comparison with known conotoxins.
Conopeptide Alignment and Classification
Peptide Lo6/7a is a 24-residue conotoxin isolated from the venom of C. longurionis. Depending on the target, peptides from the O-superfamily are subdivided into different families: ω-conotoxins act on CaV channels; κ-conotoxins target KV channels; and µO- or δ-conotoxins influence NaV channels. By performing a Conoserver alignment search on Lo6/7a, the highest percentage of similarity (92%) was obtained for Pr6c from Conus parius [8] (Figure 8). Up to now, the target of Pr6c also remains to be discovered, but the authors suggested the peptide to be either an ω- or a κ-conotoxin. Despite the high sequence identity of Lo6/7a with Pr6c, we could not demonstrate such activity on CaV or KV channels unequivocally. Another peptide, from Conus textile (a peptide causing convulsions in mice), has a similarity percentage of 59% [20]. This peptide induces symptoms characterized by "sudden jumping activity followed by convulsions, stretching of limbs and jerking behavior". The authors predicted that this peptide belongs to a new, undefined class of conotoxins. Two other peptides, Vc7.4 and Vc7.3, from Conus victoriae were described by Robinson et al. (2014) [21]. In this study, the precursor sequences of Vc7.4 and Vc7.3 were identified, and it was shown that these peptides, as well as the textile convulsant peptide (C. textile), are members of a previously undefined conotoxin superfamily, which was designated the U-superfamily. This peptide superfamily shares the cysteine framework (VI/VII) of most members of the O1-, O2- and O3-superfamilies. However, the pre- and pro-peptide sequences differ substantially from other known conotoxin superfamilies. Moreover, when the O-superfamily is compared with the U-superfamily, there is little similarity in intercysteine loop composition or length (i.e., the U-superfamily has only two residues, while the O-superfamily conotoxins have six) [21]. The specific physiological target of these peptides has not yet been derived. However, given the similarity in the mature peptide sequence of these conotoxins with Lo6/7a, it is likely that they belong to the same superfamily and share a similar target.
Figure 8. Alignment of Lo6/7a with Pr6c (C. parius, [22]), Vc7.4 (C. victoriae, [21]), a convulsant peptide from C. textile [20,23], Tx6.5 (C. textile, [24]), Vc7.3 (C. victoriae, [21]) and Cl6a (C. californicus, [25]).
Peptide Lo6/7b aligns with members of the O-superfamily, although with low percentages of similarity (Figure 9). LtVIC is the only one of these peptides for which a physiological target has been identified up to now: this conotoxin inhibits sodium currents in adult rat dorsal root ganglion neurons [26] and is therefore considered a µ(O)-conotoxin. In our electrophysiological set-up, we could not identify Lo6/7b as a µ(O)-conotoxin. Nevertheless, the similarity of Lo6/7b with LtVIC (46%) is rather low. Asi3a is classified in the M-superfamily, generally acting on NaVs (µ-conotoxins), KVs (κM-conotoxins) and nAChRs (ψ-conotoxins). Asi3a shows most identity with conotoxin Pr3a from Conus parius (87%) (Figure 10). Jimenez et al. [22] classified this peptide as an M-superfamily conotoxin and performed a bioassay by intraperitoneal injection in fish; the purified peptide Pr3a (1 nmol) resulted in paralysis of the fish after ~5 min. A functional characterization of peptides similar to Asi3a has not yet been performed.
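Similarity percentages such as the 92% for Lo6/7a versus Pr6c or the 87% for Asi3a versus Pr3a come from pairwise alignment. A minimal sketch of an ungapped percent-identity calculation; real tools such as the Conoserver alignment handle gaps and substitution scoring, and the sequences below are hypothetical placeholders:

```python
def percent_identity(a: str, b: str) -> float:
    """Naive ungapped percent identity over the shorter sequence."""
    n = min(len(a), len(b))
    matches = sum(x == y for x, y in zip(a[:n], b[:n]))
    return 100.0 * matches / n

# Hypothetical 24-mers differing at two positions -> ~91.7% identity
print(percent_identity("ACSDCGKRCCNAPCRLCGAAWSTK",
                       "ACSDCGKRCCNAPCRLCGAVWSTQ"))
```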
Asi14a belongs to the A-, L- or J-superfamilies, which typically act on nAChRs (L- and A-superfamily); the J-superfamily characteristically targets KV channels. No meaningful alignment with other A-, L- or J-superfamily peptides could be performed. Therefore, we conclude that Asi14a probably belongs to a new subclass of framework XIV peptides. A BLAST homology search with Asi14a did not reveal similarity to any known peptide or protein.
AusB is an unusual peptide found in the venom of C. australis. Containing 18 amino acids, AusB has only one cysteine bond, classifying it among the disulfide-poor conopeptides. A Conoserver search resulted in a poor-quality alignment, and a BLAST did not align the peptide with other relevant peptides either. Since AusB could not be matched with any disulfide-poor conotoxin, this peptide represents a new family, which we will label ConoGAY peptides, named after the first three N-terminal amino acids of this peptide.
Antagonistic Assays in the Quest of Identifying Novel Pharmacological Targets
Literature indications for conotoxins as potential antimicrobial compounds are given by Biggs et al. [34], Jiang et al. [35] and Takada et al. [36]. Biggs et al. (2007) discovered conolysin-Mt, a disulfide-poor conopeptide that was initially tested on oocytes, where it causes membrane potential collapse within seconds. The peptide was also evaluated for antagonism against three bacterial strains: E. coli D21, E. coli ATCC 25922 and S. aureus ATCC 6538. The authors noticed low antibacterial activity against the two E. coli strains tested, with a minimal inhibitory concentration (MIC) > 50 µM; the MIC against the Gram-positive S. aureus was 25-50 µM [34]. Jiang et al. (2011) tested a cysteine-rich peptide library mimicking µ-conotoxins from Conus geographus for antiviral activity against influenza virus [35]. Finally, Takada et al. (2006) showed that asteropine A, a sialidase-inhibiting conotoxin-like peptide from the marine sponge Asteropus simplex, might be an important lead compound for antibacterial and antiviral drug development [36]. This is interesting since multidrug-resistant bacterial infections are a growing global health problem. Antimicrobial peptides from poisonous animals have been described for a number of scorpion peptides, as well as peptides from snakes, frogs, bees (Apis sp.), etc., as part of their host defense system [37-46]. For scorpions in particular, it has been proposed that the presence of antibacterial peptides protects the venom gland from pathogenic infections or potentiates toxin action [47]. Scorpion antimicrobial peptides (AMPs) are positively charged amphipathic peptides divided into three structural categories: (1) cysteine-containing peptides with mainly three or four disulfide bridges; (2) peptides with an amphipathic α-helix but lacking cysteine residues; and (3) peptides rich in Pro and Gly residues. One example of a cysteine-containing scorpion AMP is scorpine, which showed activity against both Gram-positive (B. subtilis) and Gram-negative (K. pneumoniae) bacteria (MIC 1-10 µM) [47]. When it comes to conotoxins, this path of investigation remains underexplored.
In this work, all peptides were electrophysiologically tested on relevant ion channels predicted by their cysteine arrangement. Since no activity could be identified on any of the targets studied, these results indicate functionalities other than those expected on the basis of the cysteine framework. In order to further determine the mode of action and the potential molecular targets of these conotoxins, in vivo or ex vivo assays should be performed. As such, symptoms observed after intracranial injection of the toxins in mice may provide indications of the type of receptor or channel targeted. Furthermore, experiments on neuromuscular preparations may identify a pre- or post-synaptic effect, or even sodium/potassium channel or nicotinic antagonism. In addition, a broad screening was performed against a collection of micro-organisms. Low and very specific activity was observed for Lo6/7a against Bacillus megaterium ATCC13632. Since 1 mM is a very high test concentration and the halo was small, the inhibitory effect cannot be regarded as the main action of this peptide. Examples of scorpion antimicrobial peptides that potently target B. megaterium are meucin-13 (MIC 0.25 µM), meucin-18 (MIC 0.25 µM) and pantinin-3 (MIC 6 µM) [47].
Cone Snail Specimens and Venom Extraction
Specimens of C. longurionis, C. asiaticus and C. australis (identified by Kiener (1845), da Motta (1985) and Holten (1802), respectively, and classified by Tucker and Tenorio [48]) were collected from the Indian Ocean near Tamil Nadu, India. The venomous apparatuses (venom bulbs and venom ducts) were extracted from the specimens as described previously [49]. The collected tissues were preserved in RNAlater solution (Ambion, Austin, TX, USA) and stored at −20 °C. The venomous apparatuses were used for peptide/protein extraction.
Peptide Fractionation and Purification
Two steps were followed for the separation of the venom compounds of C. longurionis. In the first step, the lyophilized crude venom powder was solubilized in 50% acetonitrile (ACN)/water and aliquots were loaded on a Gel Filtration Superdex™ Peptide 10/300 GL column with 50% ACN/water as the mobile phase (flow rate 0.5 mL/min) to separate the peptides and proteins based on their size. The two sample collections obtained were stored overnight at −80 °C, freeze-dried and finally solubilized in 5% ACN/water. For the second step, an analytical Vydac C18 column (218MS54, 4.6 mm × 250 mm, 5-µm particle size; Grace, Deerfield, IL, USA) with a two-solvent system was used: (A) 0.1% trifluoroacetic acid (TFA)/H2O and (B) 0.085% TFA/ACN. The sample was eluted at a constant flow rate of 1 mL/min with a 0%-80% gradient of Solvent B over 90 min (1% ACN per minute after 10 min of Solvent A). The HPLC column eluates were monitored by a UV/VIS-155 detector (214 nm and 280 nm; Gilson, Middleton, WI, USA).
Three steps were followed for the separation of the venom compounds of C. asiaticus. In the first step, the lyophilized crude venom powder was solubilized in 5% acetonitrile (ACN)/water and aliquots were loaded on an analytical Vydac C18 column (218MS54, 4.6 mm × 250 mm, 5-µm particle size; Grace, Deerfield, IL, USA) with a two-solvent system: (A) 0.1% trifluoroacetic acid (TFA)/H2O and (B) 0.085% TFA/ACN. The sample was eluted at a constant flow rate of 1 mL/min with a 0%-80% gradient of Solvent B over 90 min (1% ACN per minute after 10 min of Solvent A). The HPLC column eluates were monitored by a UV/VIS-155 detector (214 nm and 280 nm; Gilson, Middleton, WI, USA). The largest peak was collected and freeze-dried for further purification. This fraction collection was subjected to a second purification step, namely ion exchange chromatography, using a Luna SCX column (Phenomenex, Torrance, CA, USA; 4.6 mm × 250 mm, 5-µm particle size) at room temperature. The solutions used for ion exchange chromatography were: (A) 20 mM KH2PO4, pH 2.5:ACN (75:25) and (B) 20 mM KH2PO4/0.5 M KCl:ACN (75:25). The sample was eluted using a three-step protocol: 0% Solution B for 15 min, 0%-100% Solution B for 30 min and 100% B for 15 min, at a flow rate of 1 mL/min. The collected fractions were stored overnight at −80 °C and freeze-dried. A third purification step was performed by RP-HPLC, using the same conditions as described for the first purification step.
Two steps were followed for the separation of the venom compounds of C. australis. In the first step, the lyophilized crude venom powder was solubilized in 50% acetonitrile (ACN)/water, and aliquots were loaded on a Gel Filtration Superdex™ Peptide 10/300 GL column with 50% ACN/water as the mobile phase (flow rate 0.5 mL/min) to separate the peptides and proteins based on their size. Three sample collections were made, which were stored overnight at −80 °C, freeze-dried and finally solubilized in 5% ACN/water. For the second step, an analytical Vydac C18 column (218MS54, 4.6 mm × 250 mm, 5-µm particle size; Grace, Deerfield, IL, USA) with a two-solvent system was used: (A) 0.1% trifluoroacetic acid (TFA)/H2O and (B) 0.085% TFA/ACN. The sample was eluted at a constant flow rate of 1 mL/min with a 0%-80% gradient of Solvent B over 90 min (1% ACN per minute after 10 min of Solvent A). The HPLC column eluates were monitored by a UV/VIS-155 detector (Gilson, Middleton, WI, USA) scanning both 214 nm and 280 nm.
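All three RP-HPLC steps share the same elution program: a 10-min hold at 0% B followed by a 1% ACN-per-minute ramp to 80% B over the 90-min run. A minimal sketch generating that gradient profile for plotting or method transfer (the function name is ours):

```python
def gradient_percent_b(t_min: float) -> float:
    """%B in the RP-HPLC run: 10-min hold at 0%, then 1%/min up to 80% B."""
    if t_min <= 10.0:
        return 0.0
    return min(1.0 * (t_min - 10.0), 80.0)

# Sample a few time points of the 90-min program (flow rate 1 mL/min)
for t in (0, 10, 30, 60, 90):
    print(f"t = {t:2d} min -> {gradient_percent_b(t):4.1f}% B")
```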
Theoretical masses of the peptides were calculated with an online Peptide Mass Calculator (Peptide Protein Research Ltd., Hampshire, UK). Peptide homology searches were performed online at Conoserver.org [50,51] and NCBI (Rockville Pike, Bethesda, MD, USA) [52]. The CLC Main Workbench 7 software was used to align the peptide sequences (CLC bio, QIAGEN, Hilden, Germany).
Peptide Synthesis and Folding
Lo6/7a and Lo6/7b were synthesized by GeneCust (Ellange, Luxembourg). Asi3a, Asi14a and AusB were synthesized by GenicBio Limited (Shanghai, China). All peptides, except AusB, were C-terminally amidated, purified by HPLC and analyzed by LC-MS, then freeze-dried and stored at −20 °C until use. The peptides were folded using an oxidative folding solution (1 mM reduced glutathione (Sigma, Munich, Germany), 1 mM oxidized glutathione (Roche, Mannheim, Germany), 1 mM ethylenediaminetetraacetic acid (EDTA; Sigma, Munich, Germany) and 100 mM Tris/HCl (Merck, Darmstadt, Germany) [53]). The solution was adjusted to pH 7.63 with 10 M NaOH (Merck, Darmstadt, Germany). Prior to functional characterization, the purity and folding of the synthetic peptides were validated (MALDI-TOF MS) and a chromatographic characterization was undertaken by RP-HPLC, on the basis of which the retention time was carefully compared with that of the native material.
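A minimal sketch converting the folding-buffer recipe above into weigh-in amounts; the molar masses are textbook values for the anhydrous free acids/bases (check supplier certificates, since hydrated EDTA salts differ), and the volume is an arbitrary example:

```python
# Folding buffer: 1 mM GSH, 1 mM GSSG, 1 mM EDTA, 100 mM Tris/HCl, pH 7.63
MOLAR_MASS = {            # g/mol, approximate
    "GSH (reduced glutathione)": 307.32,
    "GSSG (oxidized glutathione)": 612.63,
    "EDTA": 292.24,
    "Tris": 121.14,
}
CONC_MM = {"GSH (reduced glutathione)": 1.0,
           "GSSG (oxidized glutathione)": 1.0,
           "EDTA": 1.0,
           "Tris": 100.0}

volume_l = 0.1            # prepare 100 mL
for reagent, mw in MOLAR_MASS.items():
    grams = CONC_MM[reagent] * 1e-3 * mw * volume_l
    print(f"{reagent}: {grams * 1000:.1f} mg per {volume_l * 1000:.0f} mL")
```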
Whole-cell currents from oocytes were recorded at room temperature (18-22 °C) by the two-electrode voltage clamp technique using a GeneClamp 500 amplifier (Axon Instruments, Foster City, CA, USA) controlled by a pClamp data acquisition system (Molecular Devices, Sunnyvale, CA, USA). Oocytes were placed in a bath containing ND96 solution. Voltage and current electrodes were filled with 3 M KCl, and the resistances of both electrodes were kept as low as possible (0.5 to 1.5 MΩ). To eliminate the effect of the voltage drop across the bath grounding electrode, the bath potential was actively controlled by a two-electrode bath clamp. Leak subtraction was performed using a −P/4 protocol.
For NaV channels, whole-cell current traces were evoked every 5 s by a 100-ms depolarization to the voltage corresponding to the maximal activation of the NaV subtype in control conditions (0 mV), starting from a holding potential of −90 mV. The elicited currents were sampled at 20 kHz and filtered at 2 kHz using a four-pole low-pass Bessel filter. Concentration-response curves were constructed by adding different toxin concentrations directly to the bath solution.
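Concentration-response curves of this kind are conventionally fitted with the Hill equation. A minimal sketch using SciPy; the data points below are invented placeholders, since the study found no targets and therefore reports no IC50 values:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, h):
    """Fraction of remaining current for an inhibitor (Hill equation)."""
    return 1.0 / (1.0 + (conc / ic50) ** h)

# Hypothetical inhibition data: toxin concentration (uM) vs normalized current
conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
resp = np.array([0.98, 0.90, 0.55, 0.15, 0.03])

(ic50, h), _ = curve_fit(hill, conc, resp, p0=[1.0, 1.0])
print(f"IC50 ~ {ic50:.2f} uM, Hill coefficient ~ {h:.2f}")
```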
KV1.1-KV1.6 and KV3.1 currents were evoked by 500-ms depolarizations to 0 mV followed by a 500-ms pulse to −50 mV, from a holding potential of −90 mV. The elicited currents were sampled at 2 kHz and filtered at 500 Hz using a four-pole low-pass Bessel filter. KV10.1 currents were evoked by 2-s depolarizing pulses to 0 mV from a holding potential of −90 mV. hERG (KV11.1) peak and tail currents were generated by a 2.5-s prepulse from −90 mV to +40 mV followed by a 2.5-s pulse to −120 mV. KV10.1 currents were sampled at 2 kHz and filtered at 1 kHz; hERG currents were sampled at 10 kHz and filtered at 1 kHz using a four-pole low-pass Bessel filter.
For measuring nAChR currents, the following conditions were applied: during recordings, oocytes were continuously perfused with ND96 at a rate of 2 mL/min, with the conopeptides applied for 30 s before ACh was added. ACh (200 µM) was applied for 2 s at 2 mL/min, with 30-s washout periods between different ACh applications and 200 s after toxin application. The percentage response or percentage inhibition was obtained by averaging the peak amplitude of at least three control responses (two directly before exposure to the peptide and one after the 200-s washout). Whole-cell current traces were evoked from a holding potential of −90 mV.
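A minimal sketch of the percentage-inhibition bookkeeping described above, averaging the bracketing ACh control peaks; the array values are invented placeholders:

```python
import numpy as np

def percent_inhibition(controls_nA, test_nA):
    """Inhibition relative to the mean of bracketing ACh control peaks.

    `controls_nA` holds >= 3 control peak amplitudes (two before toxin,
    one after washout); `test_nA` is the peak in the presence of toxin.
    """
    baseline = np.mean(controls_nA)
    return 100.0 * (1.0 - test_nA / baseline)

# Placeholder peak currents (nA): controls before/after washout vs toxin
print(percent_inhibition([2.1, 2.0, 1.9], 1.5))  # -> 25.0 (% inhibition)
```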
Data were analyzed using pClamp Clampfit 10.0 (Molecular Devices, Sunnyvale, CA, USA) and Origin 7.5 software (Originlab, Northampton, MA, USA) and presented as the result of at least 3 independent experiments (n ≥ 3).
Antibacterial Assays
The different bacterial and yeast strains were inoculated in 5 mL of the appropriate medium and incubated overnight at the appropriate temperature, shaking at 200 rpm. Next, agar plates were overlaid with 5 mL soft agar (0.5%) seeded with 50 µL of the overnight cultures (~10⁹ CFU/mL). Cell lawns were supplemented with 5-µL spots of the different conotoxins and derivatives (concentration ~1 mM) and air-dried. Plates were incubated overnight and evaluated for the presence of zones of growth inhibition or halos. ND96 buffer was used as the negative control.
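A minimal sketch of the plating arithmetic behind this overlay assay; the numbers come from the text and the counts are order-of-magnitude estimates:

```python
# 50 uL of an overnight culture at ~1e9 CFU/mL seeds each 5 mL soft-agar overlay
culture_cfu_per_ml = 1e9
seed_volume_ml = 0.05
overlay_volume_ml = 5.0

cfu_per_plate = culture_cfu_per_ml * seed_volume_ml
density_in_overlay = cfu_per_plate / overlay_volume_ml
print(f"~{cfu_per_plate:.0e} CFU per plate "
      f"(~{density_in_overlay:.0e} CFU/mL in the overlay)")
```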
Conclusions
We purified five novel conotoxins from the venom glands of three Indian cone snail species that were largely unexplored up to now. The discovered sequences open new perspectives concerning conopeptide classification and hint at representatives of new classification groups, since the amino acid sequences differ significantly from those known in the literature. No targets could be attributed to the peptides, pointing to novel functionalities. Further experiments on other ion channels or receptors are required to reveal the physiological impact of these conopeptides.
Supplementary Materials:
The following are available online at www.mdpi.com/1660-3397/14/11/199/s1, Table S1: Strain names, growth conditions (media and growth temperature) and source of the bacterial strains used in this work.
"year": 2016,
"sha1": "347a082f6c0b0976cfe17f1b07f03bf3dc5ddfb9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-3397/14/11/199/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "347a082f6c0b0976cfe17f1b07f03bf3dc5ddfb9",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
34890786 | pes2o/s2orc | v3-fos-license | Extracellular domains of the bradykinin B2 receptor involved in ligand binding and agonist sensing defined by anti-peptide antibodies.
Many of the physiological functions of bradykinin are mediated via the B2 receptor. Little is known about binding sites for bradykinin on the receptor. Therefore, antisera against peptides derived from the putative extracellular domains of the B2 receptor were raised. The antibodies strongly reacted with their corresponding antigens and cross-reacted both with the denatured and the native B2 receptor. Affinity-purified antibodies to the various extracellular domains were used to probe the contact sites between the receptor and its agonist, bradykinin, or its antagonist, HOE140. Antibodies to extracellular domain 3 (second loop) efficiently interfered, in a concentration-dependent manner, with agonist and antagonist binding and vice versa. Antibodies to extracellular domain 4 (third loop) blocked binding of the agonist but not of the antagonist, whereas antibodies to extracellular domains 1 and 2 or to intracellular domains failed to block ligand binding. Antibodies to ectodomain 3 competed with agonistic anti-idiotypic antibodies for B2 receptor binding. Further, affinity-purified antibodies to the amino-terminal portion of extracellular domain 3 transiently increased the intracellular free Ca²⁺ concentration and thus are agonists. The Ca²⁺ signal was specifically blocked by the B2 antagonist HOE140. By contrast, antibodies to the carboxyl-terminal segment of extracellular domain 4 failed to trigger Ca²⁺ release. The specific effects of antibodies to the amino-terminal portion of extracellular domain 3 suggest that this portion of the B2 receptor may be involved in ligand binding and in agonist function.
Physiological and pathophysiological processes are mediated by kinins and their receptors. Kinins are liberated by proteolytic cleavage of the precursor proteins, kininogens (1); they decrease blood pressure, induce pain and inflammation, contract smooth muscles, and regulate ion fluxes (2). Receptors for kinins are classified pharmacologically into two major subtypes, B1 and B2 (3). The B1 receptors are triggered by carboxyl-terminally truncated kinins such as [des-Arg¹⁰]kallidin, whereas bradykinin is the agonist of B2 receptors. Molecular cloning has revealed the primary structures of the B1 (4) and the B2 receptors (5) and classified them as members of the G-protein-coupled receptor family that are thought to contain seven membrane-spanning α-helices.
The signaling pathways of the B2 receptors have been explored in some detail. The bradykinin B2 receptor is preferentially coupled to G proteins of the Gαq subtype (6), which activate the phospholipase C-mediated cascade. This results in the hydrolysis of inositol-containing lipids, the generation of inositol phosphates, and a transient rise of the intracellular free Ca²⁺ concentration (7). The initial increase of intracellular Ca²⁺ is followed by Ca²⁺ extrusion, which counteracts Ca²⁺ influx, thereby regulating total cell calcium (8). B2-mediated release of diacylglycerol, another hydrolysis product of phospholipase C, results in the translocation of specific protein kinase C isoforms (9). The B2 receptor is also coupled to the phospholipase A2 pathway, which releases the prostaglandin precursor, arachidonic acid (10).
Although the amino acid sequence of the B2 receptor has been deduced from its cDNA and its transmembrane topology has been predicted from the corresponding hydropathy plots, the specific role of the extracellular domains in ligand binding and in signal transduction is unknown. To address this question, we have raised antibodies against peptides derived from the ectodomains of the B2 receptor and used them to probe for the function(s) of the corresponding structures. Our data show that extracellular domain 3 is involved in ligand binding and may play an essential role in communicating the agonist signal through the receptor.
Materials-Wheat germ agglutinin (WGA) from Triticum vulgaris, N-acetylglucosamine, and fluorescein isothiocyanate-conjugated goat anti-rabbit immunoglobulin were from Sigma; rhodamine-conjugated donkey anti-rabbit immunoglobulin was from Jackson; Centricon 30 filters were from Amicon; keyhole limpet hemocyanin and fura-2/AM were from Calbiochem; polyvinylidene difluoride sheets were from Millipore; Affi-Gel 10 and nonfat dry milk were from Bio-Rad; MaxiSorb titer plates were from Nunc; HOE140 and 3-(4-hydroxyphenyl-propyl)-HOE140 (HPP-HOE140) were from Hoechst; and gelatin was from Merck. All other chemicals were of analytical grade.
Cell Transfection and Infection with Recombinant Baculovirus-CHO cells were transfected with the rat B2 receptor cDNA (rB2CHO12/4) using the Lipofectin transfection method as described (8,9). The pVL1392 vector (kindly provided by Dr. H. Reiländer, Frankfurt, Germany) contained the human B2 receptor cDNA such that transcription was directed to the predicted initiation site (11). Sf9 cells (2 × 10⁶/ml) were infected with the wild-type or the recombinant baculovirus at a multiplicity of infection of 2-5. Cells were harvested 48-72 h after infection (12).
Cell Culture-Human foreskin fibroblasts, HF-15 (13) were grown to confluency in Dulbecco's modified Eagle's medium containing 10% (v/v) fetal calf serum for 2-3 weeks and used at passages 10 -15. Chinese hamster ovary (CHO) cells were grown in Ham's F12 medium and Sf9 cells in TC100 medium, each containing 10% fetal calf serum. The epithelial carcinoma cell line A431 was maintained in RPMI 1640 medium supplemented with 10% fetal calf serum, 1 mM sodium pyruvate, 100 units/ml penicillin and 0.1 mg/ml streptomycin. HF-15, CHO, and A431 cells were kept in a humidified 5% CO 2 , 95% air atmosphere at 37°C, and Sf9 cells were kept in a humidified air atmosphere at 27°C.
Extraction of Membrane Proteins by Triton X-114-CHO cells were washed twice with ice-cold PBS and harvested in 0.5% (v/v) Triton X-114 in PBS containing protease inhibitors; the yield was approximately 1 mg of protein/0.5 ml of Triton X-114. After centrifugation (3 min, 4°C, 13,000 rpm) the pelleted debris was discarded. The supernatant was heated to 30°C for 4 min and centrifuged (3 min, 24°C, 3,000 rpm) to cause phase separation (14). The supernatant was discarded, and the Triton X-114 phase was dissolved in ice-cold PBS. After another phase separation step, the Triton X-114 phase was diluted with 2 volumes of SDS sample buffer (15) and applied to polyacrylamide gel electrophoresis (PAGE).
Radioiodination of Peptides and Antibodies-Peptides or antibodies (1 µg each) dissolved in 100 µl of PBS were incubated with 2 mCi of carrier-free Na[¹²⁵I] on a solid phase of Iodogen (100 µg/tube) for 10 min (16). Unreacted iodine was separated by gel filtration over Sephadex G50 columns or by anion exchange chromatography over Dowex-1.
Synthesis of Peptides and Production of Anti-peptide Antibodies-Peptides derived from the rat or human B2 receptor sequence (Fig. 1) were synthesized by solid-phase peptide synthesis using Fmoc (N-(9-fluorenyl)methyloxycarbonyl) or t-Boc (t-butyloxycarbonyl) chemistry (Table I). Peptides purified by high performance liquid chromatography were routinely analyzed by Edman degradation and electrospray mass spectrometry. Peptides were covalently coupled to the carrier protein, keyhole limpet hemocyanin, by maleimidocaproyl N-hydroxysuccinimide (17). Rabbits were immunized with the conjugates (18). Peptide MLN33 was used for immunization without prior coupling to a carrier protein. Antisera were tested for antigen specificity and cross-reactivity with homologous human or rat peptides by the indirect enzyme-linked immunosorbent assay (ELISA) (19) using microtiter plates (MaxiSorb, Nunc) coated with 2 µg/ml of the peptide or 0.5 µg/ml of the conjugate.
Western Blotting and Immunoprinting-Proteins were resolved by SDS-PAGE and transferred to polyvinylidene difluoride sheets using semidry blotting (20). The sheets were treated with 50 mM Tris, 0.2 M NaCl, pH 7.4 (buffer A), containing 5% (w/v) nonfat dry milk and 0.1% (w/v) Tween 20 for 1 h. Antisera were diluted 1:1000 in buffer A containing 2% (w/v) bovine serum albumin. After 30 min of incubation at 37°C the polyvinylidene difluoride sheets were washed five times for 15 min each with buffer A and incubated for 30 min with peroxidase-labeled F(ab′)₂ fragments of goat anti-rabbit antibody (Sigma, 1:5000). After extensive washing, bound antibody was visualized using the ECL chemiluminescence detection kit (Amersham).
Purification of Anti-peptide Antibodies by Affinity Chromatography-Peptides were covalently coupled to Affi-Gel 10 (5 mg/ml of gel) according to the manufacturer's instructions (Bio-Rad). The antiserum (5 ml/ml of gel) was applied and incubated under gentle agitation for 12 h at 4°C. The affinity matrix was washed three times with PBS, and the bound antibodies were eluted with 0.2 M glycine, pH 2.5, and immediately neutralized with 1 M KOH. Antibodies were desalted and concentrated using a Centricon filtration unit, exclusion limit 30,000 Da. The purity and specificity of the antibodies were analyzed by SDS-PAGE and enzyme-linked immunosorbent assay, respectively.
Lectin Affinity Chromatography of the B2 Receptor-WGA was covalently coupled to Affi-Gel 10 (10 mg/ml of gel). B2 receptors from HF-15 cell membranes were solubilized with 4 mM CHAPS in 20 mM PIPES, pH 6.8. The solution was diluted with an equal volume of 20 mM PIPES, pH 6.8, adjusted to 1 M NaCl, 100 mM MnCl₂, 100 mM CaCl₂, and incubated for 4 h at 4°C with the WGA affinity matrix. After extensive washing, bound proteins were eluted by a 30-min incubation with an equal volume of 20 mM PIPES, 1 M NaCl, 100 mM MnCl₂, 100 mM CaCl₂, 0.5 M N-acetylglucosamine, pH 6.8. Proteins were desalted and precipitated by 80% (v/v) acetone (21) and recovered by centrifugation. The protein pellet was dissolved in 2% (w/v) SDS, 5 mM EDTA, 5% …
[Table I residue: Anti-HOE140 (AS 255); footnote a: peptides are identified by their first three amino-terminal residues using the one-letter code, followed by the total number of residues constituting the peptide.]
Immunoaffinity Chromatography of the B2 Receptor-Affinity-purified domain-specific antibodies were covalently bound to Affi-Gel 10 (15 mg/ml of gel). Membranes of Sf9 cells infected with baculovirus encoding the human B2 cDNA (100 pmol of B2 receptor/20 mg of total membrane protein) were solubilized with 2% (w/v) sodium deoxycholate in PBS including 1 mM phenylmethanesulfonyl fluoride, 1 µg/ml E64, and 2 µM leupeptin. The deoxycholate was diluted to 0.1% (w/v) by the addition of 20 mM HEPES, pH 7.4, containing 150 mM NaCl and 1 mM EDTA (buffer B). Then 10% (v/v) glycerol and 0.1% (w/v) Triton X-100 were added, and the solution was applied to the immunoaffinity matrix for an overnight incubation. The affinity matrix was extensively washed with buffer B, and bound proteins were eluted with 0.2 M glycine, pH 2.5, supplemented with 10% (v/v) 1,4-dioxane. The eluted protein fraction was neutralized with 1 M Tris, pH 8.0, and concentrated by Centricon filtration (exclusion limit 30,000 Da). The purity of the enriched B2 receptor was assessed by SDS-PAGE and silver staining. For NH₂-terminal sequencing, proteins from three experiments were pooled, applied to a ProSpin sample preparation cartridge, and sequenced on a 477A protein sequencer equipped with an on-line 120A PTH Analyzer (Applied Biosystems).
Affinity Cross-linking of the B2 Receptor-B2 agonist or antagonist was cross-linked to the B2 receptor as described previously (22) with minor modifications. B2 receptors of HF-15 cells were enriched by WGA affinity chromatography, and the eluted proteins were desalted by dialysis prior to ligand binding and cross-linking with 1 mM difluorodinitrobenzene. Cross-linking to recombinant B2 receptors of CHO (1.5 pmol/mg of protein) and Sf9 cells (4 -5 pmol/mg of protein) was performed on intact cells without prior enrichment of receptor protein.
Competition Studies with Radiolabeled Ligands-Membranes or confluent HF-15 cells on 24-well plates in 0.5 ml of RPMI 1640 including protease inhibitors and buffered with 20 mM Na⁺-HEPES, pH 7.4 (binding buffer), were incubated with [¹²⁵I]HPP-HOE140 (0.5 nM, specific activity 1367 Ci/mmol) or with [³H]bradykinin (2 nM, specific activity 98 Ci/mmol) in the presence of increasing concentrations of affinity-purified antibodies (5 × 10⁻¹¹ M to 1 × 10⁻⁵ M). After 2 h of incubation at 4°C, the cells were washed three times with ice-cold medium. The cells were dissolved in 1% (w/v) NaOH, and radioactivity was determined.
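To make the readout of such competition experiments concrete, the sketch below normalizes raw counts between total and nonspecific binding to give percent specific binding at each competitor concentration. All count values are hypothetical placeholders, not data from this study.

```python
# Illustrative sketch: percent specific binding in a competition experiment.
# Counts at each antibody concentration are normalized between total binding
# (radioligand alone) and nonspecific binding (+ excess unlabeled ligand).
import numpy as np

total_cpm = 5200.0        # radioligand alone (no competitor)
nonspecific_cpm = 400.0   # in the presence of excess unlabeled bradykinin
cpm = np.array([5100, 4800, 3900, 2400, 1100, 600])  # rising antibody conc.

specific = (cpm - nonspecific_cpm) / (total_cpm - nonspecific_cpm) * 100
print(np.round(specific, 1))  # percent specific binding remaining
```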
Competition Studies with Iodinated Antibodies-Confluent HF-15 cells on 24-well plates were washed twice with binding buffer (see above). Then 0.5 ml of binding buffer was added to each well. For competition studies, cells were incubated at 4°C with ¹²⁵I-labeled immunoselected antibodies (1 × 10⁻⁸ M; specific activity 0.02 Ci/mg) in the presence or absence of 1 × 10⁻⁵ M bradykinin or HOE140. After 2 h of incubation at 4°C, the cells were washed three times with ice-cold medium and dissolved in 1% (w/v) NaOH, and radioactivity was determined.
Immunofluorescence Studies of A431 Cells-The human epithelial cell line A431 was grown on glass coverslips for 48 h. Three hours before the experiment, the medium was replaced by RPMI 1640 supplemented with 0.5% (v/v) fetal calf serum. Prior to immunofluorescence, cells were washed three times with 60 mM PIPES, 25 mM HEPES, 10 mM EDTA, 2 mM Mg(CH₃COO)₂, pH 6.9, and fixed for 30 min with 3% (w/v) paraformaldehyde in the same buffer adjusted to pH 7.5. Excess paraformaldehyde was quenched by the addition of 50 mM NH₄Cl in PBS, pH 7.4; this was followed by 30 min of incubation with PBS, pH 7.4, containing 0.3% (w/v) gelatin. The cells were treated for 1 h at room temperature with anti-peptide antisera, 1:100 in 0.3% gelatin/PBS. The first antibody was detected using a rhodamine-coupled donkey anti-rabbit immunoglobulin, 1:100 in 0.3% gelatin/PBS. Controls included antisera preincubated for 2 h with 20 µM of their respective antigens. The coverslips were embedded in Moviol® and viewed with an Orthoplan microscope (Leitz).
Measurement of Changes in Intracellular Free Ca²⁺ Concentration-The intracellular free Ca²⁺ concentration, [Ca²⁺]i, of HF-15 cells was determined with fura-2/AM as described previously (8) with minor modifications. Confluent HF-15 cells grown on 10-mm-diameter glass coverslips were washed twice with minimum essential medium buffered with 20 mM Na⁺-HEPES, pH 7.4 (HMEM), and incubated with 2 µM fura-2/AM in HMEM containing 0.04% (w/v) pluronic F-127. After a 45-min incubation at 30°C, the cells were washed twice and incubated in HMEM for another 30 min to allow for complete de-esterification of fura-2/AM. For determination of changes in [Ca²⁺]i, the coverslips were mounted in a holder at an angle of 45° and put into a thermostatted quartz cuvette, and fluorescence at 510 nm was determined. The excitation wavelength alternated between 340 and 380 nm in intervals of 600 ms. Changes in [Ca²⁺]i are given as the ratio of the signals at 340 and 380 nm.
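The sketch below illustrates the ratio computation this protocol describes: emission at 510 nm sampled under alternating 340-nm and 380-nm excitation is reduced to the 340/380 ratio that tracks [Ca²⁺]i. The fluorescence values are invented for illustration.

```python
# Illustrative fura-2 ratiometric readout: the 340/380 excitation ratio
# rises transiently when an agonist (or agonist-like antibody) releases Ca2+.
import numpy as np

f340 = np.array([1.00, 1.02, 1.80, 1.55, 1.20])  # arbitrary units over time
f380 = np.array([1.00, 0.99, 0.70, 0.80, 0.92])

ratio = f340 / f380
print(np.round(ratio, 2))  # transient increase reports the Ca2+ signal
```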
Selection of Peptides for Immunizations-To raise antisera that cross-react with the four predicted extracellular domains (EDs) of the B2 receptor, we selected six segments from ED1 through ED4 of the B2 receptor such that the peptides included a cysteine residue where available (Table I). The peptides covered the entire sequence of the putative ectodomains of the B2 receptor; there were two overlaps of 1 and 3 residues between the peptides selected from ED3 and ED4, respectively (Fig. 1). Two additional peptides chosen from the putative intracellular domains (IDs), ID2 and ID4, served as controls. The peptides were covalently coupled to a carrier protein, keyhole limpet hemocyanin, and used for immunization, except for MLN33, which was used directly without prior conjugation. The resultant antisera recognized their cognate antigens, as verified by the indirect enzyme-linked immunosorbent assay, and cross-reacted with the corresponding sequences from the human or the rat B2 receptor, respectively (not shown). A single peptide, designated CWN12, which is derived from ED4 and covers the center portion of peptide SGC18, failed to produce a significant titer of specific antibodies (not shown).
Specificity of the Anti-B2 Antisera-To assess the specificity of the antisera, we used membranes from CHO cells and Sf9 cells that overexpress the rat or the human B2 receptor for Western blotting and immunoprinting. The results are exemplified for the antiserum to the first ectodomain, ED1. In Triton X-114-extracted membranes of recombinant CHO cells expressing the rat B2 cDNA, anti-ED1 detected a single 69 ± 3-kDa protein (Fig. 2B, lane 1). No specific staining was found in nontransfected cells (Fig. 2B, lane 2). As a control, B2 receptor from CHO cells that had been labeled with HOE140 was detected using anti-HOE140 antiserum (Fig. 2B, lane 3). Binding of the antibodies to the B2 receptor was suppressed when cross-linking was performed in the presence of a 1,000-fold molar excess of bradykinin (Fig. 2B, lane 4). We further tested the specificity of the anti-ED1 antiserum using CHAPS-extracted membranes of Sf9 cells infected with recombinant baculovirus encoding the human B2 cDNA, and detected three protein bands of 38, 41, and 45 ± 5 kDa (Fig. 2C, lane 1). Proteins with a molecular mass of approximately 75-80 kDa are likely to be dimerized B2 receptors that aggregated, probably during sample preparation of Sf9 cell membranes expressing high amounts of B2 receptor (22). Membranes from mock-infected Sf9 cells did not show any specific bands (Fig. 2C, lane 2). As a control, HOE140 was cross-linked to the B2 receptor followed by detection with anti-HOE140 antiserum; proteins of similar molecular weight were stained (Fig. 2C, lane 3). This staining was suppressed by a 1,000-fold molar excess of bradykinin (Fig. 2C, lane 4). The differences in the apparent molecular masses of recombinant B2 receptors from Sf9 cells and of B2 receptors of HF-15 fibroblasts, and the occurrence of multiple immunoreactive bands in Sf9 cells, are likely caused by incomplete glycosylation, characteristic of glycoproteins expressed in Sf9 cells (23).
Affinity Purification and Amino-terminal Sequence Analysis of the B2 Receptor-Is the protein identified by immunostaining with anti-peptide antisera the authentic B2 receptor? We enriched the receptor protein from Sf9 membranes by immunoaffinity chromatography using a mixture of immunoselected anti-peptide antibodies to the extracellular domains. Edman degradation of the protein revealed an amino-terminal sequence of Met-Leu-Asn-Val-Thr-Xaa-Gln-Gly-Xaa-Thr-Leu-Asn-Gly-Thr-Phe-Ala-Xaa-Ser, where Xaa stands for an unidentified residue. This sequence is identical with the human B2 receptor sequence starting at the third in-frame initiator codon (11); note that the construct in baculovirus had been engineered such that only the most 3′-located initiator codon was available. These data demonstrate that the affinity-purified anti-peptide antibodies selectively enrich B2 receptor from solubilized Sf9 membranes.
Fluorescence-activated Cell Sorting (FACS) Analysis of Native B2 Receptors on HF-15 Fibroblasts-To test the reactivity of the antibodies with the native B2 receptor, we performed FACS analysis of HF-15 cells that were stained by antisera to the various extracellular domains ED1 to ED4 (Fig. 3A). Antisera to extracellular domains ED1 to ED4 bound to B2 receptors of intact HF-15 cells (Fig. 3A, I-VI), as demonstrated by increased fluorescence intensity in comparison with preimmune serum (Fig. 3A, VII), suggesting that the various anti-peptide antibodies cross-react to similar extents with the B2 receptor. No specific staining was observed with antisera to intracellular domain ID2 or ID4, exemplified for anti-ID2 (Fig. 3A, VIII). This finding is in agreement with the hypothetical model of the B2 receptor (cf. Fig. 1).
Redistribution of B2 Receptor Detected by Antibodies to Extracellular Domains-The successful binding of anti-peptide antisera to cellular B2 receptors allowed us to examine the fate of the B2 receptor after its activation by an agonist. In these studies we applied a mixture of the various anti-ED antibodies. HF-15 cells were preincubated at 37°C for 60 min in the absence (Fig. 3B, I) or presence of 1 µM bradykinin (Fig. 3B, III). Pretreatment of the cells with the B2 agonist drastically reduced antibody binding to B2 receptors (Fig. 3B, III). Thus, after agonist treatment, the antigenic epitopes are no longer available for antibody binding. Preincubation with 1 µM of the antagonist, HOE140, did not change the overall fluorescence intensity (Fig. 3B, IV). Together these data suggest that our anti-peptide antibodies readily recognize the extracellular domain(s) of the B2 receptor and that, like other G-protein-coupled receptors, sequestration and/or internalization modifies the accessibility of extracellular domains for antibody detection.
Immunofluorescence of A431 Cells-To immunovisualize the B2 receptor on cells other than fibroblasts, the epidermoid carcinoma cell line A431 was chosen. A strong immunostaining of the plasma membrane of fixed, nonpermeabilized cells was observed with a mixture of antibodies against ED1 to ED4 (Fig. 4a). A specific staining of the outer rims of the cells was also observed when the individual antisera against extracellular domains ED1, ED2, ED3N, and ED4C were used (Fig. 4e to 4h). The cells exhibit a punctate labeling, which may be due to receptor clustering and/or high receptor density in pseudopodia and microvilli of A431 cells; this latter notion was confirmed by electron microscopy (not shown). The presence of the cognate antigen for each antibody abrogated the specific immunostaining (b). No staining was seen with preimmune serum (c), with antiserum to an unrelated peptide (d), or with an antiserum to intracellular domain ID2 (i). Hence the anti-peptide antibodies are useful to probe for the B2 receptor on the surface of various cell types (Figs. 3 and 4).
Blockade of Bradykinin Binding by Anti-ED Antibodies-We asked whether our antibodies interfered with the binding of bradykinin to the B2 receptor. HF-15 cell membranes were preincubated for 2 h at 4°C with 250 nM affinity-purified antibodies to the various extracellular domains. This was followed by the addition of 2 nM [³H]bradykinin and further incubation for 60 min at 4°C. The unbound radioligand was separated by filtration through GF/C glass filters, and the filter-bound radioactivity was determined. Controls were done in the absence (total binding) or presence of 2 µM of the unlabeled ligand, bradykinin (Fig. 5A, columns 1 and 2). Of the six antisera tested, only antibodies against the amino-terminal portion of extracellular domain 3, ED3N, and against the carboxyl-terminal segment of extracellular domain 4, ED4C, interfered with bradykinin binding to the B2 receptor (Fig. 5A, columns 5 and 8). Antibodies to other segments of the extracellular domains (Fig. 5A, columns 3, 4, 6, and 7) and to the intracellular domains (not shown) had no effect on [³H]bradykinin binding. These data suggest that domains ED3 and ED4 of the B2 receptor might be critically involved in ligand binding.
Concentration-dependent Displacement of Radioligand by Anti-ED Antibodies-To further analyze the involvement of extracellular domains 3 and 4 in agonist and antagonist binding, we tested whether the effect of the antibodies is dose-dependent. Intact HF-15 cells were incubated for 2 h at 4°C with the radioligand in the presence of increasing concentrations, 100 pM to 10 µM, of the affinity-purified antibodies (Fig. 5, B and C). Cell-bound radioactivity was determined. Antibodies to ED3N effectively blocked binding of [³H]bradykinin (Fig. 5B) and of [¹²⁵I]HPP-HOE140 (Fig. 5C). Antibodies to ED4C also reduced [³H]bradykinin binding (Fig. 5B); however, these antibodies did not block the binding of [¹²⁵I]HPP-HOE140 (Fig. 5C). Other antibodies, such as anti-ED3C, did not interfere with the binding of the ligands (Fig. 5, B and C). We conclude that at least part of the contact site for bradykinin and/or for HOE140 may be near or within the amino-terminal portion of ED3. The carboxyl-terminal portion of ED4 may contribute to the receptor binding of the agonist but not of the antagonist.
Displacement of ¹²⁵I-Labeled Anti-ED Antibodies by B2 Ligands-Anti-ED antibodies interfere with radioligand binding to the B2 receptor. This may either indicate that they are competitive inhibitors for the binding site or that the antibodies act allosterically by inducing and/or stabilizing a receptor conformation unable to bind the ligand. To discriminate between these possibilities, competition binding between the unlabeled ligands, bradykinin and HOE140, and the ¹²⁵I-labeled antibodies anti-ED3N and anti-ED4C was done with HF-15 fibroblasts. The binding of 10 nM ¹²⁵I-labeled anti-ED3N to the receptor was reduced by 80% in the presence of 10 µM bradykinin, and it was abolished by 10 µM HOE140 (Fig. 6A). At the same concentration, the cognate peptide, KDY13, completely displaced radiolabeled anti-ED3N; the B1 receptor agonist, [des-Arg⁹]bradykinin, had no effect (Fig. 6A). In the case of ¹²⁵I-labeled anti-ED4C antibodies, no inhibition of binding was seen in the presence of 10 µM bradykinin or HOE140, whereas displacement was observed with the same concentration of the cognate peptide, SGC18 (Fig. 6B). We conclude that antibodies to ED3N, but not antibodies to ED4C, are competitive with B2 receptor ligands. Anti-ED4C may cause an allosteric alteration and stabilize a conformation of the receptor unable to bind agonists.
Displacement of Anti-idiotypic Antibodies by Antibodies to ED3N-To further address the interaction between anti-ED3N and the kinin receptor, we applied anti-idiotypic antibodies that had been raised against the idiotype, monoclonal antibody MBK3 to bradykinin (24). These antibodies have previously been shown to bind to and stimulate, in an agonist-like manner, the human or mouse B2 receptor (24). Competition experiments demonstrate that bradykinin and HOE140 interfere with ¹²⁵I-labeled anti-idiotypic antibodies for receptor binding (Fig. 6C). Antibodies to ED3N displaced the radiolabeled anti-idiotypes, although not completely, whereas antibodies to ED1, ED2, and ED4C had no effect (anti-ED4C is shown in Fig. 6C). These findings indicate that the majority of the anti-idiotypic antibodies are likely to bind to ED3N and that the interaction sites of bradykinin, HOE140, anti-idiotypic antibodies, and anti-ED3N antibodies with the external portion of the receptor are mutually overlapping.
Agonist-like Effects of Antibodies to ED3N-Our finding that antibodies to ED3N interfere with the receptor binding of bradykinin and anti-idiotypic antibodies prompted us to ask whether anti-ED3N itself is an agonist. Therefore, we measured intracellular free Ca²⁺ in HF-15 fibroblasts treated with anti-ED3N. At a concentration of 250 nM, anti-ED3N transiently increased [Ca²⁺]i in an agonist-like manner (Fig. 7A). This effect is mediated by the B2 receptor because a 10-fold molar excess of the B2 antagonist, HOE140, prevented the Ca²⁺ transient (Fig. 7B). Antibodies to the distal portion of the same domain, ED3C, or to other ectodomains such as ED4C were without effect (Fig. 7, C and D). Hence polyclonal antibodies to the amino-terminal portion of ectodomain 3 are agonists.
DISCUSSION
In these studies, antibodies directed to putative extracellular domains of the bradykinin B2 receptor were prepared. These antibodies were used to map extracellular domains involved in ligand binding. This approach to ligand binding site mapping is complementary to site-directed mutagenesis studies (25-27). The antibody approach reduces the commonly voiced concern about site-directed mutagenesis, namely that a mutation may change the receptor structure and binding ability without being located at the ligand binding site.
Our experiments show that the amino-terminal portion of ectodomain 3, ED3N, is involved in agonist binding and sensing because (i) antibodies to this segment competed with bradykinin for binding to the B2 receptor, (ii) bradykinin almost completely abolished the binding of radiolabeled anti-ED3N to B2 receptors, and (iii) anti-ED3N antibodies were agonists. A direct contact between the ED3N segment and bradykinin is uncertain; however, the mutual competition of ED3N antibodies and bradykinin suggests such a possibility. The ED3N region is also involved in binding of the antagonist, HOE140, because (i) anti-ED3N blocked HOE140 binding to B2 receptors, (ii) HOE140 completely abolished the binding of radiolabeled anti-ED3N to B2 receptors, and (iii) HOE140 nearly completely blocked the anti-ED3N-induced cytosolic Ca²⁺ increase, i.e., anti-ED3N agonism.
The agonistic effect of anti-ED3N antibodies demonstrates that this receptor region can assume, or can be induced to assume, conformation(s) that transmit the signal to the G-protein. If one considers a two-domain model of G-protein-coupled receptors, as suggested by the observation that transmembrane regions (TMs) 1-5 and TMs 6-7 need not be covalently connected for G-protein-coupled receptors to bind and signal (28,29), then anti-ED3N might push apart the two domains, allowing access of the G-protein to the intracellular loops. Alternatively, anti-ED3N might stabilize the activated form of the receptor, R*, which is at equilibrium with the inactive form, R, under basal conditions (30). Autoantibodies to extracellular domains of the adrenergic and muscarinic receptors have been detected in the serum of patients with myocardial diseases or malignant hypertension (31-33). These antibodies are directed to the same extracellular loop as is anti-ED3N, interfere with ligand binding, and are agonists. These similarities between antibodies to extracellular domains of cationic amine receptors and of a peptide receptor emphasize the common molecular mechanisms governing the action of G-protein-coupled receptors.
A few attempts have been made to elucidate the binding site of the B2 receptor using site-directed mutagenesis (26,27). 3 Alanine substitutions of negatively charged residues were made at the TM7/ED4 boundary (D286A, contained in the ED4C epitope) and at the TM6/ED4 boundary (D268A, contained in the ED4N epitope). The D268A change slightly reduced the affinity of bradykinin and did not change the affinity of the related antagonists HOE140 or NPC17761 (26). 3 However, anti-ED4N antibodies had no effect on bradykinin or HOE140 binding. The D286A mutation had larger effects on bradykinin affinity and small effects on antagonist binding affinity. In accordance with that observation, anti-ED4C antibodies reduced bradykinin binding but had no effect on antagonist binding. Finally, an alanine substitution, E179A, contained in the ED3N epitope, also caused a small reduction in bradykinin affinity (26). Thus the mutagenesis and the antibody methods for probing ligand binding sites concur in their indication that the charged residues Asp286 and Glu179 and associated peptide regions may be involved in agonist binding.
[Fig. 7 legend: Effect of anti-ED antibodies on the [Ca²⁺]i of human fibroblasts. HF-15 cells were loaded with fura-2/AM, and the change in the ratio of fluorescence at 340/380 nm was followed. At the time points indicated, 250 nM affinity-purified antibodies to ED3N (panels A and B), ED3C (C), or ED4C (D) were applied. In panel B, 2.5 µM HOE140 was added 50 s prior to application of the antibody. Similar results were obtained with three different antibody preparations, each derived from two different rabbits.]
Our anti-ED4N antibody does not confirm the involvement of Asp268 in bradykinin binding; however, we note that Asp268 is the amino-terminal residue of the ED4N peptide, DTL12, and thus the anti-ED4N antibodies may not bind the Asp268 residue as part of an extended protein chain in the same way as when it is the first residue of a peptide. The finding that anti-ED4C inhibits bradykinin binding but is unable to inhibit HOE140 binding suggests that the binding sites for peptidic agonists and antagonists on B2 receptors do not perfectly overlap. This conclusion agrees with the suggestion, derived from site-directed mutagenesis studies, that agonists and antagonists do not bind to identical sites on the receptor (26,34). 3 These studies, which used anti-extracellular domain antibodies covering all the extracellular domains of the bradykinin receptor, demonstrate that the binding of agonists by the receptor involves extracellular regions at the top of TM4 (ED3N) and, to a lesser extent, at the top of TM7 (ED4C). In contrast, the binding of antagonists is only affected by antibodies directed to the top of TM4 (ED3N). Furthermore, the anti-ED3N antibodies are agonists, suggesting that the TM4 to TM5 loop, ED3, is important for signal transduction. These studies also point to the importance of extracellular domains for binding and signal transduction in this member of the G-protein-coupled receptor family and demonstrate the utility of epitope-specific antibodies in defining functionally important regions of receptors.
"year": 1996,
"sha1": "0857c456387d125418325c199a45a0c91e69926d",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/271/3/1748.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "93e09ff48d2e8567adb8b28ca3a2fbd1e374b096",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Association of Age and Response to Methylphenidate HCL Treatment in Adult ADHD: A Proton Magnetic Resonance Spectroscopy Study
Purpose: This study investigated the age-dependent effects of methylphenidate (MPH) on brain metabolites, including choline (Cho), N-acetyl aspartate (NAA) and creatine (Cr) levels, in the dorsolateral prefrontal cortex (DLPFC), striatum, cerebellum, and anterior cingulate cortex (ACC) regions of the brain in adult patients with attention deficit hyperactivity disorder (ADHD). Patients and Methods: The study included 60 patients with ADHD between the ages of 18 and 60 years. The patients were grouped by age as follows: 18–24 years, 25–30 years, and 31 years and over. Levels of NAA, Cr and Cho in the DLPFC, ACC, cerebellum and striatum were measured with magnetic resonance spectroscopy (MRS). Subjects were then given 10 mg of oral MPH, and the same metabolite levels were measured again 30 minutes later. Results: Twelve (20%) of the cases were female and 48 (80%) were male. The age distribution of the cases was as follows: 15 subjects between the ages of 18–24, 26 subjects between the ages of 25–30, and 19 subjects over the age of 30. NAA levels were higher after MPH in the DLPFC of the 18–24 age group (p = 0.016) and in the cerebellum of the 25–30 age group (p = 0.041). No increase in Cho and Cr levels was observed after treatment compared with before (p > 0.05). Conclusion: MPH treatment appears to affect metabolites in different brain regions, and this effect may vary with age in adult ADHD patients. After MPH treatment, significantly higher NAA levels were detected in both the 18–24 age group (in the DLPFC) and the 25–30 age group (in the cerebellum) compared with pretreatment levels. This increase in NAA levels suggests that pharmacotherapy, especially at early ages, may act on neuronal damage.
Introduction
Neuroimaging techniques have been crucial to understanding the functional and structural changes in brain regions associated with attention deficit hyperactivity disorder (ADHD). Recent advances in these techniques also enable further research into brain structures and functions in response to psychostimulant drug treatments. In structural neuroimaging studies, volume reduction has been reported in brain regions such as the frontal lobe, cerebellum, corpus callosum, total and right brain, and caudate nucleus. 1 In functional neuroimaging studies, it has been reported that regional blood circulation and glucose metabolism decrease in the prefrontal cortex (PFC) and cerebellar regions but increase in the parieto-occipital cortex at rest, and that symptoms regress after psychostimulant drug treatment. 2 Magnetic resonance spectroscopy (MRS) is used in the differential diagnosis of diseases with neurodegenerative activity. N-acetyl aspartate (NAA) is an indicator of overall neuronal integrity, and a low NAA/creatine (Cr) ratio is associated with neuronal loss or damage. Choline (Cho) reflects membrane integrity, and higher choline levels or a higher Cho/Cr ratio indicate greater cellular destruction, myelin degradation, gliosis and inflammation. Creatine is an invariable component of cellular energy metabolism. 3 A meta-analysis of 16 MRS studies analyzed the effect of age on neurochemical abnormality in both children and adults with ADHD. 4 Eleven studies included children with ADHD, while five involved adults with ADHD; all five studies included drug-free adults with ADHD. 5-9 In this meta-analysis, it was reported that the NAA level in the PFC of children with ADHD was higher than normal, but there was no difference in the striatum and cerebellum. No difference in metabolite levels was found at any other site in adults with ADHD. In addition, it has been reported that there is a negative correlation between high NAA levels in the PFC and the mean age of the patients. It has been suggested that the age-related abnormality of the NAA level in the PFC is a potential neural basis for the age-related variation of ADHD symptoms. 4 Most neuroimaging studies have supported disruption in the frontostriatal-cerebellar circuit. Methylphenidate (MPH) has been shown to affect fronto-striato-thalamic circuit functions related to the pathophysiology of ADHD. 10 It has been reported that blood flow velocity increased in the caudate, bilateral prefrontal and thalamic regions after MPH administration. 2 Methylphenidate is effective in maintaining adequate attention through the dopamine and serotonin systems in the neocortex and in filtering out unnecessary sensory stimuli by normalizing hyperexcitability in the somatosensory cortex. 2 Studies examining treatment-related metabolic changes in adults with ADHD are very few. 11 In the literature, only thirteen MRS studies investigating the effect of stimulant treatment were available, 11 five of which were conducted in adult ADHD. 11-15 In these studies, changes in the brain were examined before and after MPH, and differing results have been reported.
There is no study in the English literature investigating the age-related effects of MPH on the brain. This study therefore aimed to evaluate the age-related effects of MPH on NAA, Cr and Cho levels in the dorsolateral prefrontal cortex (DLPFC), striatum, cerebellum, and anterior cingulate cortex (ACC) in adult ADHD patients.
Study Design
Sixty patients aged 18-60 years who met the criteria for adult ADHD according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) were included in the study. Patients with concomitant neurological disease, mental retardation or other psychiatric disorders were excluded from the study. The Wender Utah Rating Scale (WURS) and the Adult ADHD Diagnosis and Rating Scale were used to evaluate patients.
Wender-Utah Rating Scale (WURS)
This scale can be used to assess adults for attention deficit hyperactivity disorder via a subset of 25 questions associated with that diagnosis. A validity and reliability study of the WURS was conducted for Turkish individuals, with a cut-off score of 36. 16
Adult ADD/ADHD DSM IV-Based Diagnostic Screening and Rating Scale
This scale is a self-assessment scale, and patients can complete the questionnaire after being duly informed. When developing the adult ADD/ADHD Scale, the 18 symptoms of the diagnostic criteria in DSM-IV were reframed so that patients could understand them. The first part of this scale has 9 inattention questions and the second part has 9 hyperactivity/impulsivity questions. The Adult ADD/ADHD DSM-IV Based Diagnostic Screening and Rating Scale has a validity and reliability study for Turkish individuals. 17 Patients who scored 36 points or more on the WURS and gave an answer of 2 or 3 points to at least six of the nine questions in the first and/or second parts of the Adult ADHD Diagnosis and Evaluation Scale were diagnosed with ADHD.
Levels of NAA, Cr and Cho in the DLPFC, ACC, cerebellum and striatum were measured via proton MRS. Subjects were then given 10 mg of oral MPH, and the same metabolite levels were measured again 30 minutes later. The patients were grouped by age as follows: 18-24 years, 25-30 years, and ≥31 years. The study was conducted in line with the principles of the Declaration of Helsinki and was approved by the Pamukkale University Faculty of Medicine Ethics Committee (date: 25/03/2011, no: 52). Informed written consent was obtained from all patients included in the study.
Proton Magnetic Resonance Spectroscopy
A 1.5 Tesla MR device (GE Medical System, Milwaukee, WI, USA) was used with a standard head coil. The MRS protocol was as follows: horizontal plane; 10-mm slice thickness; TR/TE, 3000/88.2; angle of view, 10; matrix, 512 × 512. T2-weighted fast spin echo sequences were obtained using these parameters. MRS was performed using the single-voxel (¹H) technique, with the voxel placed in each of the DLPFC, ACC, cerebellum and striatum. A volume of interest (VOI) was manually placed in the appropriate brain tissue of each area. The chemical shift selective pulse (CHESS) method was used to suppress the water signal. Following this, the point-resolved spectroscopy (PRESS) technique was used to localize the spectroscopy volume (TR/TE: 3000/35). As a result, short-TE spectra were obtained from the VOI in the ACC, striatum, DLPFC and cerebellum regions, and the metabolite ratios obtained with the General Electric spectral analysis software were evaluated. ¹H-MRS analyses were performed by the radiologist, and NAA, Cho and Cr values were measured in the DLPFC, ACC, cerebellum and striatum. Patients were then given 10 mg oral methylphenidate and, after a 30-minute waiting period, NAA, Cho and Cr values were measured again.
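Since the scanner software reports fitted metabolite peak areas, the ratio calculation it performs can be illustrated as below. The peak areas are hypothetical placeholders; in the study these quantities came from the General Electric spectral analysis program.

```python
# Hedged sketch: metabolite ratios from fitted peak areas of a single-voxel
# spectrum. The areas below are arbitrary placeholders; in the study these
# quantities were produced by the scanner's spectral analysis software.
naa_area, cho_area, cr_area = 12.4, 5.1, 6.8  # arbitrary units

print(f"NAA/Cr = {naa_area / cr_area:.2f}")
print(f"Cho/Cr = {cho_area / cr_area:.2f}")
```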
Statistical Analysis
Statistical Package for the Social Sciences version 16 (SPSS, Inc., Chicago, IL, USA) was used for data analysis. The change in brain metabolite levels after MPH administration compared with before was analyzed with a paired t-test. The Kruskal-Wallis test was used to compare brain metabolite levels before and after MPH administration between age groups; a p-value <0.05 was considered statistically significant. The Mann-Whitney U-test was used to determine which group caused the difference, with Bonferroni correction for multiple comparisons (0.05/3 = 0.017) applied when following up the nonparametric Kruskal-Wallis test results. Therefore, for the pairwise comparisons, a p-value of less than 0.017 (0.05/3) was considered significant, since there were three groups.
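A minimal sketch of this analysis plan is given below, assuming SciPy in place of SPSS. The data arrays are randomly generated placeholders rather than study measurements.

```python
# Minimal sketch: paired t-test for pre/post-MPH change, Kruskal-Wallis
# across three age groups, and a pairwise Mann-Whitney U follow-up at the
# Bonferroni-adjusted threshold (0.05 / 3 = 0.017). Placeholder data only.
import numpy as np
from scipy import stats

pre = np.array([1.9, 2.1, 2.0, 1.8, 2.2])   # e.g. NAA before MPH
post = np.array([2.1, 2.3, 2.2, 2.0, 2.4])  # same subjects after MPH
t_stat, p_paired = stats.ttest_rel(pre, post)

# Three hypothetical age groups drawn around different means
g1, g2, g3 = np.random.default_rng(0).normal([2.2, 2.0, 1.9], 0.1, (5, 3)).T
h_stat, p_kw = stats.kruskal(g1, g2, g3)

if p_kw < 0.05:  # follow up pairwise at the corrected threshold
    u_stat, p_mw = stats.mannwhitneyu(g1, g2)
    print(f"g1 vs g2 significant at 0.017: {p_mw < 0.05 / 3}")
```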
Results
In the 18-24 years age group, the increase in NAA levels in the DLPFC region of the patients after MPH treatment was significant compared with their pretreatment levels (p = 0.016). In the 25-30 years age group, the increase in NAA levels in the cerebellum region of the patients after MPH treatment was significant compared with their pretreatment levels (p = 0.041).
Table 1 shows the NAA levels of the patients measured in the ACC, striatum, cerebellum, and DLPFC regions before and after MPH treatment between the age groups. There was a significant difference in NAA levels between the age groups before MPH treatment in the striatum (p = 0.005). Accordingly, NAA levels of the patients in the 18-24 years age group were significantly higher than those of the 25-30 years and ≥31 years age groups (Mann-Whitney U = 82.5, p = 0.002 < 0.017 (0.05/3) and Mann-Whitney U = 66.0, p = 0.008 < 0.017 (0.05/3), respectively). In other brain regions, before and after MPH treatment, there were no significant differences in NAA levels between age groups (p > 0.05).
Table 2 outlines the Cr levels of the patients measured in the ACC, striatum, cerebellum, and DLPFC regions before and after MPH treatment with regard to age groups. In the striatum, before MPH treatment, there was a significant difference in Cr levels between age groups (p = 0.001). According to these data, Cr levels of the patients in the 18-24 years age group were significantly higher than those of the 25-30 years and ≥31 years age groups (Mann-Whitney U = 76.0, p = 0.001 < 0.017 (0.05/3) and Mann-Whitney U = 44.5, p = 0.001 < 0.017 (0.05/3), respectively). Additionally, there was a significant difference between age groups in Cr levels after MPH treatment in the striatum (p = 0.047). Accordingly, the Cr levels of the patients in the 18-24 years age group were significantly higher than those of the 25-30 years age group (Mann-Whitney U = 107.0; p = 0.017 (0.05/3)). There was no significant difference in Cr levels between the age groups before and after MPH treatment in other parts of the brain (p > 0.05).
Table 3 displays the Cho levels of the patients measured in the ACC, striatum, cerebellum, and DLPFC regions before and after MPH treatment with regard to age groups. In the striatum, before MPH treatment, there was a significant difference in Cho levels between age groups (p = 0.041). Accordingly, Cho levels of the patients in the 18-24 years age group were significantly higher than those of the 25-30 years age group (Mann-Whitney U = 106.0; p = 0.016 < 0.017 (0.05/3)). There was no significant difference between age groups in Cho levels before and after MPH treatment in other brain regions (p > 0.05).
Discussion
The current study results demonstrated that there were significant differences in brain metabolite levels between different age groups of ADHD patients prior to MPH treatment. Following MPH treatment, there were significant differences both between age groups and within the same age groups compared with their pretreatment levels.
In the aforementioned meta-analysis of MRS studies, increased NAA levels were found in the medial prefrontal cortex of children with ADHD, but no abnormal findings were reported in adults with ADHD. 4 In adults with ADHD, no difference in metabolite levels was found in any other region. In addition, a negative correlation was revealed between heightened NAA levels in the PFC and the mean age of patients. 4 In a previous study, significantly lower concentrations of NAA were measured in the left DLPFC of unmedicated ADHD patients compared with healthy controls, and the authors emphasized that NAA levels have particular importance in brain metabolite studies because a decrease in NAA levels implies a reversible stage of severe neuronal dysfunction. 18 In this study, before MPH treatment, NAA levels in the striatum of the patients in the 18-24 years age group were significantly higher than in the other age groups. Additionally, considering that age and NAA levels are negatively correlated in the literature and that NAA levels decrease as the brain develops, the higher NAA levels found at earlier ages in our study are in accordance with the literature. 19 In this study, after MPH treatment, both the 18-24 years age group (in the DLPFC) and the 25-30 years age group (in the cerebellum) displayed significantly increased NAA levels compared with their pretreatment levels. In a study conducted by Bertolino et al 20 on schizophrenic patients, after 4 weeks of atypical antipsychotic use, an increase in the NAA/Cr ratio was observed in the DLPFC, and it was suggested that pharmacotherapy may be effective on brain metabolites. NAA is an indicator of overall neuronal integrity, and a decrease in NAA levels indicates a reversible phase of severe neuronal dysfunction. 18 In light of this information, this increase in NAA levels suggested that MPH treatment may act on neuronal damage, especially at early ages. Additionally, the study suggests that ongoing neuronal plasticity, which is likely given that more than half of our patients were also receiving treatment, continues after a single dose of MPH.
Before MPH treatment, Cr levels in the striatum of the patients in the 18-24 years age group were significantly higher than in the other age groups. In normal individuals, Cr increases with age; on the other hand, it decreases in hypoxia and hypoperfusion. 21 In a study examining the effects of long-term use of MPH on cerebral blood flow in ADHD patients, a decrease in regional cerebral blood flow was found in the right hemisphere orbitofrontal cortex and the anterior part of the middle PFC compared with the control group before treatment. 22 This abnormal decrease in the right PFC returned to normal after treatment. Regional cerebral blood flow in the right striatum decreased with MPH treatment; however, the treatment resulted in increased blood flow in the upper PFC regions. Striatal activity may have been inhibited by MPH-induced prefrontal activation, because inhibitory signals are sent to the striatum by cortical dopamine activity in the prefrontal region via the frontostriatal circuit. These observations support the view that the main pathology in ADHD is PFC dysfunction and subsequent striatal hyperactivation. 22 In our study, increased striatal Cr values were found before MPH, which is consistent with this study. Higher Cr levels at early ages may be related to the fact that ADHD symptoms are more severe at these ages and that striatal hyperactivation is more common at this age.
In addition, after MPH treatment, Cr levels in the striatum of the patients in the 18-24 years age group were significantly higher than those of the 25-30 years age group. Similarly, the Cr level, which is higher in early-aged ADHD patients, is thought to decrease at a relatively similar rate after a single 10-mg dose of MPH in each age group, so it plausibly remains higher after MPH. It is speculated that an increase in Cr levels after MPH treatment, as in the current study, may result from the normalization of cerebral blood flow and glucose metabolism following psychostimulant administration in ADHD. 4 At the same time, in studies evaluating MRS findings, it is stated that the findings about Cr levels are contradictory and show less reliability. 23 In another meta-analysis, researchers found an increase in the Cho signal in the striatum of children with ADHD and in the bilateral pregenual ACC of adults with ADHD. 3,24 Similarly, in a meta-analysis including both child and adult ADHD patients, significant changes in Cho levels in the right frontal lobe and left striatum were observed in children, and the variations in Cho levels were significant in the left and right ACC in adults. 24 Before MPH treatment, Cho levels in the striatum of the patients in the 18-24 years age group were significantly higher than those of the 25-30 years age group. The fact that Cho, one of the indicators of cell destruction, is high at early ages, when ADHD symptoms and neuronal damage are more severe, suggests that it may be associated with the decrease in symptoms and/or severity in adulthood.
Contrary to these studies reporting variations in brain metabolites, another study showed no significant difference in NAA, Cho and Cr concentrations compared with a healthy control group, which may be due to the study patients using distinct medication protocols. 25 In line with this study, a double-blind, placebo-controlled MRS study among adult ADHD patients documented that 12-week MPH treatment did not significantly affect Cr, Cho or NAA levels in the cerebellar hemisphere and ACC. 11 The heterogeneity and conflict in the results of ADHD studies concerning brain metabolites might be related to many varying factors in the patient populations, such as different ages, different medications, different subtypes of ADHD, and comorbidities. In addition, some other studies claim that genetic diversity in ADHD patients can also influence the levels of brain metabolites in specific regions. In a study among patients with ADHD, after MPH treatment, there were significant differences in the NAA, Cho, and Cr levels of different genotype carriers, suggesting that polymorphisms of the catechol-O-methyltransferase gene in ADHD patients can explain individual differences in neurochemical responses to MPH. 26 Another study by the current group concerning genetic polymorphism in ADHD patients reported that patients with the 10/10 genotype of the DAT1 gene showed increased Cr levels in the cerebellum after MPH uptake, and that in patients with the SNAP-25 MnlI polymorphism G/G genotype and DdeI polymorphism T/T genotype, NAA levels were significantly increased in the ACC after MPH treatment compared with pretreatment levels. 27,28 There are several limitations of the current study: the lack of a control group, the inevitable effects of other stimulants such as smoking, low-field MR imaging, and unilateral area evaluation. This is the first study to investigate brain metabolites from the perspective of the age-related effects of MPH in ADHD.
Conclusion
The current study findings show that there might be an association between age groups and the dynamics of brain metabolites before and after MPH treatment in adult ADHD patients, which supports the literature on neuro-metabolite associations with ADHD symptoms. In this study, after MPH treatment, significantly higher NAA levels were detected in both the 18-24 age group (in the DLPFC) and the 25-30 age group (in the cerebellum) compared with pretreatment levels. This increase in NAA levels suggests that pharmacotherapy, especially at early ages, may act on neuronal damage. Although the research field of ADHD is rapidly developing, it is still mostly focused on pediatric patients. Further developments may lead to novel strategies in spectroscopic investigations of ADHD, and further studies focusing on specific brain regions across different age groups may help better clinical understanding of ADHD.
Table 3. Age-Dependent Distribution of Choline Levels Before and After Methylphenidate Treatment
"year": 2024,
"sha1": "8bb1ebaaf8bd766e113a9b3f2ac45ebefe49ceba",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=96902",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8bb1ebaaf8bd766e113a9b3f2ac45ebefe49ceba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Public health impact of mass sporting and cultural events in a rising COVID-19 prevalence in England
A subset of events within the UK Government Events Research Programme (ERP), developed to examine the risk of transmission of COVID-19 from attendance at events, was examined to explore the public health impact of holding mass sporting events. We used contact tracing data routinely collected through telephone interviews and online questionnaires, to describe the potential public health impact of the large sporting and cultural events on potential transmission and incidence of COVID-19. Data from the EURO 2020 matches hosted at Wembley identified very high numbers of individuals who tested positive for COVID-19 and were traced through NHS Test & Trace. This included both individuals who were potentially infectious (3036) and those who acquired their infection during the time of the Final (6376). This is in contrast with the All England Lawn Tennis Championships at Wimbledon, where there were similar number of spectators and venue capacity but there were lower total numbers of potentially infectious cases (299) and potentially acquired cases (582). While the infections associated with the EURO 2020 event may be attributed to a set of socio-cultural circumstances which are unlikely to be replicated for the forthcoming sporting season, other aspects may be important to consider including mitigations for spectators to consider such as face coverings when travelling to and from events, minimising crowding in poorly ventilated indoor spaces such as bars and pubs where people may congregate to watch events, and reducing the risk of aerosol exposure through requesting that individuals avoid shouting and chanting in large groups in enclosed spaces.
Introduction
The UK Government Events Research Programme (ERP) [1] was developed by the UK government at the request of the Prime Minister to examine the risk of transmission of COVID-19 from attendance at events, and to explore ways to enable people to attend a range of events safely through the study of a combination of testing, certification, non-pharmaceutical, behavioural and environmental interventions. The programme completed three phases of transmission and related studies incorporating a range of indoor and outdoor settings across cultural, sporting and business events. Phases 1 and 2 occurred at lower community COVID-19 prevalence (typically 1 in 500 to 1 in 1500) [2]. A key finding from the phase 1 events was that outdoor spaces are generally lower risk than indoor spaces [3]. The ERP phase 1 studies also demonstrated, using CO2 monitoring linked to crowd movement data, that higher-risk areas could be readily identified, such as indoor spaces related to toilets, food/drink concessions, entry/exit points and corridors; that face covering compliance varied with attendance level and was lower in hospitality areas, when congregating in groups, in circulation zones and while exiting; and that reduced social distancing compliance was linked with higher attendances and less effective crowd management strategies.
At the start of the phase 3 events on 13 June 2021, the England 7-day case rate was 43.5 per 100 000, and it rose rapidly to a peak of 543.3 per 100 000 of the population on 19 July 2021, owing to the rapid expansion and transmission of the Delta variant (Phylogenetic Assignment of Named Global Outbreak (Pango) lineage designation B.1.617.2) [4]. In phase 3 of the ERP, the events included increasingly higher numbers of attendees at higher capacity venues, with later events moving towards full capacity [5]. The EURO 2020 matches at Wembley Stadium on 13th, 18th, 22nd, 26th and 29th June, and 6th, 7th and 11th July, whilst not at full capacity, attracted large numbers of fans in and around the venue, many travelling nationally via coaches and public transport, and were coupled with a relaxation of infection control measures.
The following sporting and cultural events were studied as part of the ERP and took place on dates overlapping with those of the EURO 2020 tournament (13th June-11th July 2021): eight EURO 2020 football matches at Wembley Stadium, five international cricket matches at various locations (details in Supplementary File Table S1), Download Festival (live music festival) in Leicestershire, Goodwood Festival of Speed motorsport event in West Sussex, Royal Ascot race meeting in Berkshire, All England Lawn Tennis Championships at Wimbledon, The Grange Festival (opera) in Hampshire and The British Open Golf at Sandwich, Kent. The non-pharmaceutical interventions included in the ERP varied, but all included a requirement to demonstrate immunity (through full vaccination with an approved vaccine or prior infection within 180 days) to COVID-19, or a negative lateral flow test taken within 48 h of the event, checked on entry through the NHS COVID-19 app [6].
We used contact tracing data routinely collected through telephone interviews and online questionnaires, to describe the potential public health impact of the large sporting and cultural events on transmission and incidence of COVID-19.
Methods
In line with all positive COVID-19 test results in England at the time, positive COVID-19 results from PCR or supervised LFD tests were automatically reported to the NHS Test and Trace electronic system, and cases either self-completed contact tracing online or completed it over the phone with an agent [7]. Contact tracing data were analysed to identify cases that reported activities potentially associated with an event in the ERP. Results were generated using the data available as of 17th August 2021.
For each case, data were collected on the types of activity reported, dates of attendance, locations and any further information recorded in a free text description. To find cases who had attended an ERP event, this information was filtered using all of the following three criteria (a schematic implementation is sketched after the list):

(1) Date: the activity occurred within the date range of the ERP event.

(2) Location: the postcode reported for the activity undertaken matched a postcode (or postcode part) of the ERP event venue, or a keyword associated with the location (e.g. 'Ascot') appeared in the free text description.

(3) Activity or keyword: the activity was reported in a category which was relevant to the ERP event (e.g. horse races), OR the free text description contained a keyword relating to the event (e.g. 'racing').
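As an illustration only, the combined filter can be expressed as a single predicate over contact tracing records. The Python sketch below uses hypothetical field names for the case records and the per-event reference data; it is not the NHS Test and Trace implementation.

    def matches_erp_event(activity, event):
        """Return True if a reported activity satisfies all three criteria.

        `activity` and `event` are plain dicts; every field name here is
        hypothetical and chosen only for illustration.
        """
        # (1) Date: the activity falls within the event's date range.
        if not (event["start_date"] <= activity["date"] <= event["end_date"]):
            return False

        text = activity.get("free_text", "").lower()

        # (2) Location: postcode (or postcode part) matches the venue,
        # or a location keyword (e.g. 'ascot') appears in the free text.
        postcode = activity.get("postcode", "")
        location_ok = any(postcode.startswith(p) for p in event["postcode_parts"])
        location_ok = location_ok or any(k in text for k in event["location_keywords"])
        if not location_ok:
            return False

        # (3) Activity category relevant to the event (e.g. horse races),
        # OR an event keyword (e.g. 'racing') in the free text description.
        return (activity.get("category") in event["relevant_categories"]
                or any(k in text for k in event["event_keywords"]))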
In this analysis, eight events/groups of events which were part of the ERP were identified: Cricket (England vs. New Zealand test match and four one day international matches), Download (music) Festival, EURO 2020 (football), Goodwood Festival of Speed (motorsport), Royal Ascot (horse racing), The Open Golf, All England Lawn Tennis Championships at Wimbledon and The Grange Festival (opera). Fewer than 10 mentions of The Grange Festival were identified; these were excluded from further analysis. Details of each event and search terms used are available in Supplementary Materials Table S1.
Download Festival and Goodwood Festival of Speed allowed visitors to camp at the venues overnight, including the final night of each event. The following day (Download Festival: 21st June, Goodwood: 12th July) was included in the search to capture individuals still at the venue on those days following the events. Keywords were used to include individuals who reported camping in these locations during the (extended) event period.
Individuals were deemed to have attended an event whilst potentially infectious if they did so in the period from 2 days prior to onset of symptoms (or, if asymptomatic, their test) onwards, and to have potentially contracted COVID-19 at an event if they attended between 3 and 7 days prior to the onset of symptoms or test; these two windows are illustrated in the sketch below.
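A minimal helper expressing the two exposure windows under the definitions above (a sketch only; argument names are hypothetical, and dates are datetime.date objects):

    def classify_attendance(attendance_date, onset_or_test_date):
        """Classify attendance relative to symptom onset (or test, if asymptomatic)."""
        days_before = (onset_or_test_date - attendance_date).days
        if days_before <= 2:
            # From 2 days prior to onset/test onwards (includes attendance after onset).
            return "potentially infectious"
        if 3 <= days_before <= 7:
            # Between 3 and 7 days prior to onset/test.
            return "potentially acquired"
        return "outside both windows"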
Individuals were counted once per date at each event for event date analysis and once per event for overall event and demographic analyses. As individuals may have attended events on multiple dates, sums of cases attending multi-day events may not match total counts of cases at those events.
To provide a review of general levels of activity in England in the same period, all activity events and household visitor events (in which a close contact attends the home of a positive case during the period when the case may be infectious) reported to contact tracing by all cases in England during the period 8th June 2021 to 19th July 2021 were counted, split by event category.
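The secondary analysis amounts to a group-by count over the reporting window. A minimal pandas sketch, assuming a hypothetical table with one row per reported activity and columns report_date (datetime64) and category (str):

    import pandas as pd

    def daily_category_counts(activities: pd.DataFrame) -> pd.DataFrame:
        """Count reported activities per day and category, 8th June to 19th July 2021."""
        window = activities[
            activities["report_date"].between("2021-06-08", "2021-07-19")
        ]
        # One row per day, one column per event category (cf. Figure 3).
        return (
            window.groupby([window["report_date"].dt.date, "category"])
            .size()
            .unstack("category", fill_value=0)
        )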
Overall prevalence in the English population, as calculated by the national Coronavirus Infection Survey [2], is provided for comparison.
Findings
Our primary analysis identified cases who reported activities during contact tracing which matched ERP events in this analysis on all three criteria. In total, 3714 cases reported attending ERP events during their infectious period (from 2 days before onset or test onwards) and 7396 cases attended during the period when they acquired their infection (between 3 and 7 days prior to symptom onset or test). Of all of these cases, 244 attended more than one event. Table 1 describes the total number of cases identified as attending each event. In total, 6376 cases were identified as attending EURO 2020 football events at Wembley during the period they were likely to have acquired COVID-19, and 3036 during the period they were likely infectious. Numbers in both categories increased substantially at the later matches, especially the Final. A smaller number of cases were identified at other events, such as the All England Lawn Tennis Championships at Wimbledon where there were similar numbers of spectators and venue capacity, but the total numbers of potentially infectious (n = 299) or acquired cases (n = 582) were much lower.
The eight events took place over variable durations of time: some on consecutive days and others on intermittent occasions. Tables 2A-C report the number of cases identified by day of event. Particularly high numbers were identified at the EURO 2020 Final on 11th July, with close to half of all cases associated with the tournament coming from this date. The total numbers of cases who attended the Wembley Semi-Final (7th July) and Final during the period when they likely acquired their infection were both high, at 2092 and 3404 respectively. The number of cases who attended the Final and were potentially infectious was 2295. Figure 1 shows the case numbers per event per day. It should be noted that ONS-estimated prevalence was lower at the start of the study period (Panel 6) and that time trends within the data should be interpreted with caution.
Age and sex distributions are reported in Table 3. At EURO 2020 matches, 85% of cases were male, and the median age was 33 (IQR: 27-43). Overall the majority of cases which reported attending ERP events were male, but this varied by event. Cases identified at The Open Golf were 91% male, while at Wimbledon 52% were male. Age also varied by event, and this can be seen in age-sex pyramids in Figure 2. The pyramids show that while Download Festival and Wimbledon had a younger demographic amongst cases, a larger proportion of cases identified as attending The Open Golf, cricket events and Goodwood Festival of Speed were older.
As a secondary analysis, we identified and counted all types of activities reported by all cases in England during the period 9th June to 19th July 2021. Figure 3 shows the types of events reported each day (data available in Supplementary Table S3); spikes in activity can be seen on the days of England EURO 2020 football matches whether at home or away. Increases were seen in activities relating to bars and pubs, eating out and sports events on match dates. A large number of public and mass gatherings were also reported on the day of the EURO 2020 final (11th July).
Discussion
The increasing number of reported cases across all events reflects the increasing community prevalence of COVID-19 during that period. Both the EURO 2020 matches at Wembley and the All England Lawn Tennis Championships were mass spectator sporting events taking place on multiple days within a short period of time at an outdoor stadium in Greater London. There were similar numbers of spectators and high capacity in the stadia, reaching 75% for the later EURO 2020 matches and 100% on Centre Court at the Wimbledon final. Both required evidence of vaccination, a negative LFD test or natural immunity as a condition of entry. There were markedly different numbers of positive cases reported as associated with these events, with those associated with the Wimbledon event more comparable with those reported from the other ERP events running concurrently, and with the numbers testing positive within the wider community at that time. This suggests that the EURO 2020 matches generated a level of COVID-19 transmission over and above that which would be more commonly associated with large crowds attending an outdoor sporting event with measures in place to mitigate transmission.
The number of potentially infected persons attending Wembley stadium increased as the tournament progressed, reaching more than 2000 at the EURO 2020 final despite event-goers requiring a COVID pass for entry [8]. This raises questions about
the utility of individuals self-reporting tests in reducing the prevalence of COVID-19 infection at rare or special events, and the longer-term deliverability of self-testing as an option to mitigate disease transmission.
Research teams present at each of these events have verbally reported stark differences in crowd and spectator behaviour (personal communication from Dr Aoife Hunt, formal report in preparation). Whilst the Wimbledon crowds were well managed and largely compliant with the required risk mitigation, the initial reports from research teams indicate that spectators at the Wembley stadium became less compliant with mitigation such as face coverings as the tournament progressed. To manage the orderly ingress of spectators for the higher (75%) capacity events, spectators were admitted earlier than usual and alcohol was served within Wembley Stadium. The concourse areas became densely populated with shouting, chanting and boisterous behaviour with close contact in these areas before and during the semi-final and final matches lasting at least 1-2 h within the stadium indoor areas and many more hours outside. In addition to this, the carbon dioxide levels reported from the concourse areas were higher than those recorded at other high-risk settings in the ERP events, including the densely crowded areas at the Download music festival, and will have compounded the risk associated with the high numbers of spectators potentially infectious at the event itself (personal communication from Dr Liora Malki-Epshtein UCL, formal report in preparation). Finally, the public disorder offences occurring at EURO 2020 have been widely reported, including an undefined number of ticketless fans who gained entry to the stadium. Public disorder in and around the stadium meant that COVID-19 status checks were suspended for the Final [9].
The EURO 2020 events had an increasing impact on a national scale which was not observed for other events within the ERP, suggesting that there were additional factors associated with these events and that the risk of COVID transmission was not mitigated by the control measures in place for entry to the event itself. There was increasing national interest as the tournament progressed, as this was the first time an English team were in an international final for 55 years, generating a sense of the final stages being a 'once in a generation' occasion. This will not be replicated for all sport tournaments taking place over the winter, nor for all football matches. However, previous crowd behaviours associated with football fans have underpinned the methods used to manage these crowds, including the legislation in place
governing alcohol consumption within football stadia. In general terms, this has the effect of concentrating people into as few areas as possible, while crowd management strategies often hold groups until they can be moved en masse in a controlled manner. To mitigate the risk of transmission of COVID-19, it would be preferable to dissipate the crowds across as wide an area as possible and manage the movement over long periods of time, as happened at other events including the Wimbledon tennis championships.
In addition to the cases associated directly with Wembley stadium, there was a noticeable national impact on COVID-19 case rates for key games including the Ukraine vs. England quarterfinal (3rd July in Rome), for the England vs. Denmark Semi-final (7th July) and for the England vs. Italy final (11th July), reflecting that in the later stages of the EURO 2020 tournament people came together across the country to watch the games and celebrate. There are higher proportions of events coded as pubs or bars on each of these dates compared to other dates for COVID-19 cases in England.
The case numbers associated with the events were detected using the routine reporting systems and were mainly from individuals who were symptomatic. As high proportions of cases, especially in young healthy individuals, are asymptomatic, this is likely to be an underestimate of the full impact of these events [10]. In addition, contact tracing is only undertaken for PCR test results and supervised LFD test results (those who are positive on home LFDs are requested to undertake an immediate PCR test), and recall bias among those contacted will vary. While there is no detailed age and sex breakdown for those who attended, it is highly likely that certain sports events in particular had a male and younger dominance. The age distribution also likely reflects the impact of vaccination; by 11th July 2021, those over 50 years of age were 80% fully vaccinated, while those under 40 were less than 30% fully vaccinated.
Contact tracing information can indicate events or locations individuals have attended while at risk of transmitting COVID-19 or places where transmission may have occurred. It is not possible to say with certainty how many individuals transmitted COVID-19 at an event or venue, nor exactly where an individual contracted the virus. The Euro Final match did not take place until 20.00 h, meaning that those attending may have been engaging in social activities during their journey to the match, and prior to entering the stadium itself. Transmission of infection may have occurred at the event itself or during any of the other reported activities associated with the event, of which attending a pub or restaurant is the most frequently reported.
Neither full vaccination nor a negative LFD test will completely eliminate the possibility of an infectious individual attending an event, but it should reduce the likelihood of someone transmitting highly infectious amounts of virus to a large number of individuals attending the event [11][12][13][14].
Conclusions
The EURO 2020 tournament and England's progress to the final generated a significant risk to public health across the UK, even when England played overseas. This risk arose not just from individuals attending the event itself, but included activities undertaken during travel and associated social activities. For the final and semi-final games at Wembley, the risk mitigation measures in place were less effective in controlling COVID-19 transmission than was the case for other mass spectator sports events. EURO 2020-related transmissions have also been documented in Scotland [15], where 2632 individuals self-reported attending a EURO 2020 event in the UK, and in Finland, where 947 new SARS-CoV-2-positive cases were linked to travel to Moscow, Russia [16]. Whilst some of this may be attributed to a set of circumstances which are unlikely to be replicated for the forthcoming sporting season, other aspects may be important to consider, including mitigations for spectators attending the venue, such as face coverings when travelling to and from events; minimising crowding in poorly ventilated indoor spaces such as bars and pubs, where people may congregate either before entering the venue or to watch events; and reducing the risk of aerosol transmission by requesting that individuals avoid shouting and chanting in large groups in enclosed spaces. For larger events, it will be important to consider both the venue itself and other areas where fans without tickets for the venue will gather, as well as advice for the general population gathering in private homes or other locations in larger numbers than might otherwise be the case.
In particular, reducing the number of persons entering events or venues who are potentially infectious or at risk of severe disease or hospitalisation by promoting attendance by fully vaccinated individuals will be important whilst background prevalence rates remain at current levels. This will reduce the risk of transmission associated with the journey to and from the event and associated social activities. It will also be important that event organisers manage the density of crowds in areas such as hospitality and concessions on the concourses, and entry and exit points to the event.
Supplementary material. The supplementary material for this article can be found at https://doi.org/10.1017/S0950268822000188. | 2022-02-01T06:23:06.748Z | 2022-01-31T00:00:00.000 | {
"year": 2022,
"sha1": "6db52e0576b8eb334aaa94de20440e9644f02233",
"oa_license": "CCBYNCND",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/C09D657A780E289B2E3DE3183A394344/S0950268822000188a.pdf/div-class-title-public-health-impact-of-mass-sporting-and-cultural-events-in-a-rising-covid-19-prevalence-in-england-div.pdf",
"oa_status": "GOLD",
"pdf_src": "Cambridge",
"pdf_hash": "d9b822f47bfa9e900ed19bca4fcac3bae12b3cd3",
"s2fieldsofstudy": [
"Sociology",
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256013361 | pes2o/s2orc | v3-fos-license | Three dimensional bosonization from supersymmetry
Three dimensional bosonization is a conjectured duality between non-supersymmetric Chern-Simons theories coupled to matter fields in the fundamental representation of the gauge group. There is a well-established supersymmetric version of this duality, which involves Chern-Simons theories with N = 2 supersymmetry coupled to fundamental chiral multiplets. Assuming that the supersymmetric duality is valid, we prove that non-supersymmetric bosonization holds for all planar correlators of single-trace operators. The main tool we employ is a double-trace flow from the supersymmetric theory to an IR fixed point, in which the scalars and fermions are effectively decoupled in the planar limit. A generalization of this technique can be used to derive the duality mapping of all renormalizable couplings, in non-supersymmetric theories with both a scalar and a fermion. Our results do not rely on an explicit computation of planar diagrams.
Introduction and summary of results
Bosonization in three-dimensional quantum field theories is a recently conjectured duality, first developed in [1][2][3][4][5], between certain Chern-Simons theories with U(N) or O(N) gauge groups, coupled to matter fields in the fundamental representation. 1 We will refer to this class of theories as Chern-Simons vector models. The most basic example of 3d bosonization

correlators are generally independent observables for any n, even in conformal field theories (CFTs). 4 Therefore, one can view our result as new evidence for bosonization, which goes beyond the one provided by matching the known planar 2-point and 3-point functions.
Let us now describe our derivation of the duality between the critical bosonic and fermionic models, which we will approach in two different ways. The first method uses a perturbative expansion in the multi-trace interactions of the N = 2 action. (Note that this is not ordinary perturbation theory, because we treat the gauge interactions exactly.) We begin by writing down the duality equation for certain correlators of the N = 2 theory, and expanding them perturbatively. By re-arranging the perturbative contributions we obtain an equality between correlators of the critical bosonic and fermionic theories, in agreement with the bosonization duality. 5 In the large N limit the perturbative expansion converges and the result is exact. We will use this method to derive the bosonization duality of the 4-point function of spin 1 currents, as well as of all 3-point functions (a known result). We believe that it is possible to extend the same argument to other higher-point functions, but this becomes tedious. Instead, in section 5 we give a simpler derivation of 3d bosonization that we will discuss in the rest of the introduction.
Our N = 2 theory contains a scalar field ϕ and a Dirac fermion ψ, both in the fundamental of the U(N) gauge group. The gauge-invariant operator φϕ has vanishing anomalous dimension, because it sits in the same multiplet as the conserved U(1) flavor current. Therefore, the double-trace operator (φϕ)² is relevant for large enough N. In the planar limit we may deform the supersymmetric theory by (φϕ)² and flow to an IR fixed point that is not supersymmetric. The IR theory includes a fermion and a Wilson-Fisher scalar, both coupled to a gauge field with Chern-Simons interactions. 6 The Giveon-Kutasov duality of the UV theory becomes a duality of the non-supersymmetric IR theory.
As we will show, in the planar limit of the IR theory the scalar and fermion are effectively decoupled for a large class of observables. In particular, planar correlators of single-trace operators that are composed of the scalar ϕ and the gauge field do not receive contributions from interactions with the fermion ψ. These correlators are therefore equal to those of the critical bosonic theory. Similarly, correlators of single-trace operators that involve only fermions and gauge fields are equal to those of the fermionic theory. Using this decoupling we will derive the duality of the critical bosonic and fermionic theories. For this derivation to work, we must know the mapping of all single-trace operators under the Giveon-Kutasov duality. We will determine this map by working out the arrangement of these operators inside multiplets of the N = 2 superconformal algebra. In the process, we will uncover signs in the duality map that were not noted previously. 4 Indeed, upon using the operator product expansion, such correlators with n ≥ 4 contain non-trivial contributions from 3-point functions with multi-trace operators; these were never computed explicitly in the theories discussed in this work. 5 This argument is close in spirit to an argument made in [2], where it was shown that certain scalar operators in the bosonic and fermionic theories do not acquire an anomalous dimension. This was done by perturbatively relating the 2-point functions of these operators to similar 2-point functions in the N = 2 theory, where supersymmetry implies that the corresponding anomalous dimensions vanish. 6 The duality of this non-supersymmetric theory was already considered at the level of the thermal free energy in [7].
The way in which the double-trace deformation ∫ d³x g(φϕ)² is embedded within a supersymmetric deformation plays an important role in our argument. To understand this embedding, we will use the fact that the IR CFT at the end of the double-trace flow can be equivalently described by coupling the operator φϕ to a background field D̂ in the UV theory, and then making D̂ dynamical; this equivalence can be seen by using the Hubbard-Stratonovich trick (a sketch of the relevant identity is given below). In the N = 2 theory, the background field D̂ is the top component of the background vector multiplet that contains the U(1) flavor current. It follows that the supersymmetric completion of making a double-trace deformation and flowing to the IR CFT involves gauging the flavor U(1) symmetry. 7 Because we know how the flavor symmetry maps under Giveon-Kutasov duality, we can work out the exact mapping of our supersymmetry-breaking deformation.
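For orientation, the Hubbard-Stratonovich step invoked here takes the standard schematic form (our illustrative normalization, in Euclidean signature):

$$
e^{\,g\int d^3x\,(\bar\phi\phi)^2} \;\propto\; \int \mathcal{D}D\;\exp\!\left(\int d^3x\left[-\frac{1}{4g}\,D^2 + D\,\bar\phi\phi\right]\right),
$$

so integrating out D reproduces the double-trace coupling, while keeping D dynamical and dropping the irrelevant D²/4g term defines the IR fixed point.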
We see that supersymmetry allows us to map the double-trace deformation across the duality. It is possible to extend this basic strategy to additional deformations by making other fields in the background vector multiplet dynamical with some particular weights, analogous to theD 2 term in the Hubbard-Stratonovich transformation. In particular, we will use this strategy to derive the large N duality map of the U(N ) Chern-Simons vector model containing both a scalar and a fermion with the most general renormalizable potential V (ϕ, ψ). In [6,7], the duality map of those couplings in V (ϕ, ψ) that contribute to the planar thermal free energy was determined. We find perfect agreement with [7], and also extend their results to all the other couplings in V (ϕ, ψ). The advantage of our method is that it provides a very simple derivation of the duality map, which does not require performing any complicated all-order computations.
There is one important subtlety in making background fields dynamical. Local terms in the action that are non-linear in the background fields contribute to contact terms of the correlation functions generated by those fields. Once the background fields are made dynamical, such terms become ordinary kinetic or interaction terms that can affect the correlators of the new theory even at separated points. The upshot is that, in order to derive a new duality using this strategy, one must make sure that the duality in the original theory extends to certain contact terms. In the present context, we will see that a crucial role in our derivation is played by the global Chern-Simons term of the background vector multiplet corresponding to the flavor U(1) symmetry, which must be added for the validity of the N = 2 duality [9,30].
The paper is structured as follows. In section 2 we present the Chern-Simons vector models and explain how planar 2-point correlators in the N = 2 theory are related to those of the non-supersymmetric theories in perturbation theory. In section 3 we determine the N = 2 duality map of all the bosonic single-trace operators. In sections 4 and 5 we then prove the non-supersymmetric duality for planar correlators in two different ways, using perturbation theory in the N = 2 multi-trace couplings, and using a double-trace flow. In section 6 we derive the mapping of all renormalizable couplings under the Giveon-Kutasov duality. Section 7 contains a discussion of our results, and the appendices contain our conventions and some technical proofs.

7 The Hubbard-Stratonovich transformation also contains the deformation ∫ d³x (1/4g) D̂², which can be supersymmetrized to an N = 2 Yang-Mills term. This deformation is irrelevant and can be ignored in the IR CFT.
2 Preliminaries
In this section we will define the field theories discussed in this work, give a review of their single-trace spectrum, and provide some properties of 2-point functions that will be needed in later sections. More details on our conventions can be found in appendix A.
2.1 U(N) Chern-Simons vector models
Let us define the three main field theories to be discussed in this work. These are all Chern-Simons theories with U(N) gauge group at level k, coupled to matter in the fundamental representation of U(N). 8 The different theories are distinguished by their matter sector:

• The theory F k,N has a single Dirac fermion ψ(x), and is commonly referred to as the regular fermion theory. Its Euclidean flat-space action is given in (2.1)-(2.3). 9

• The theory B crit. k,N has a single complex Wilson-Fisher scalar ϕ(x), and is commonly referred to as the critical boson theory. One can flow to it by starting with the regular boson theory, deforming by a relevant double-trace interaction δS = ∫ d³x (λ̄4/2N)(φϕ)², and tuning the scalar mass to zero. The action of the deformed theory is given in (2.4). The theory B k,N has a regular complex scalar coupled to a gauge field with Chern-Simons interactions, and its action is given by (2.4) without the double-trace deformation.
• The supersymmetric N = 2 theory with a single chiral multiplet in the fundamental representation will be denoted by T k,N . In Wess-Zumino gauge, the vector superfield V = V a T a contains the gauge field A µ , the gaugino λ, and two real scalars σ and D. The chiral superfield Φ contains a complex scalar ϕ, a Dirac fermion ψ and an auxiliary field F. The flat-space Euclidean action is given in (2.7) and (2.8); here, D α and D̄ α are supersymmetric covariant derivatives (see appendix A). After integrating out the auxiliary fields σ, D, λ and F, the action of T k,N becomes (2.6).

8 For a Yang-Mills-Chern-Simons theory with gauge group G and bare level k 0 , the renormalized Chern-Simons level in the IR is given by |k| = |k 0 | + h∨, where h∨ is the dual Coxeter number of G. In this work we exclusively use k to denote this renormalized level. In particular, for the U(N) gauge group h∨ = N, which implies that |k| ≥ N.

9 The gauge field is A µ = A a µ T a , where the T a (a = 1, . . . , N²) are hermitian generators of the gauge symmetry algebra in the fundamental representation, normalized such that tr N (T a T b) = δ ab /2. The gauge covariant derivative acts on fundamentals as D µ ϕ = ∂ µ ϕ − iA µ ϕ, and on anti-fundamentals as D µ φ = ∂ µ φ + iφA µ . More details on our conventions are given in appendix A.

The actions (2.1), (2.4) and (2.6) formally define non-trivial CFTs in the infrared, and it should be understood that the notation F k,N , B crit. k,N and T k,N refers to these CFTs, respectively. In this paper we will only consider the planar limit obtained by taking k, N → ∞ while keeping the 't Hooft coupling λ = N/k fixed. In this limit the CFTs are well defined. For the critical boson theory B crit. k,N , taking the planar limit means that in practice we compute correlators at finite λ̄4, and then take the limit λ̄4 → ∞ compared to the external momenta, discarding any power-law divergences. 10

Three-dimensional bosonization duality is the conjectured equivalence B crit. k,N ↔ F 1/2−k, |k|−N , where k ∈ Z, while Giveon-Kutasov duality is the equivalence T k,N ↔ T −k, |k|−N+1/2 of the N = 2 theory, where now k ∈ Z + 1/2. Planar limit computations are not sensitive to the half-integer shifts in the duality map. We will therefore sometimes omit those shifts in our notations for simplicity.
2.2 Single-trace operators
To leading order in the large N expansion, all correlation functions factorize into products of correlators of single-trace operators. We will now review the spectrum of these operators in our theories, focusing on the conformal primaries. In the process, we will provide explicit expressions for all of these operators, and thus fix our normalization conventions for them.
respectively, where α i = 1, 2 are spinor indices. It is convenient to suppress the spinor indices of the currents by introducing commuting polarizations y α , and defining J s (x; y) ≡ y α 1 · · · y α 2s J α 1 ···α 2s (x). In this notation, the currents in the boson and fermion models can be written explicitly as in (2.11) and (2.12), 11 where ∂ ≡ iy α y β γ µ αβ ∂ µ . We will always leave the U(N) indices on the fields implicit, it being understood that they are contracted to form U(N) singlets. When λ ≠ 0 we can make J b s and J f s in (2.11) and (2.12) gauge invariant by simply replacing ordinary derivatives with covariant ones. The currents of spin s = 1 and s = 2 correspond to the U(1) current and the stress-tensor, respectively, and are therefore exactly conserved also in the interacting theories. As shown in [1,2], the conservation of the currents with s > 2 is only violated by multi-trace operators, implying that their anomalous dimensions vanish in the planar limit.
In addition, the fermion theory has a scalar operator O f of dimension 2 + O(1/N), and the N = 2 theory contains all of the operators O b , O f , J b s and J f s . 12 These operators are packaged into multiplets of the 3d N = 2 superconformal algebra, as we now describe. When λ = 0, the N = 2 theory has a single conserved higher-spin multiplet J α 1 ···α 2s for each integer spin s ≥ 1. 13 The J α 1 ···α 2s are real superfields that satisfy the conservation constraint D α J αα 2 ···α 2s = D̄ α J αα 2 ···α 2s = 0 on-shell. They can be written in components as in (2.15), where J, χ and J̄ are conserved currents of spin s, s + 1/2, and s + 1, respectively. The omitted terms in (2.15) are determined in terms of these currents by the conservation constraints.
More explicitly, up to an overall constant, the form of the higher-spin superfields (2.15) (in terms of the chiral superfield) is uniquely fixed, and is given in (2.16), where D ≡ y α D α and D̄ ≡ y α D̄ α . By expanding equation (2.16) in the superspace coordinates, the bosonic components of the J s superfields (2.15) can be expressed in terms of J b s and J f s . This results in the identifications (2.17). When λ ≠ 0 the J s are no longer conserved for all s ≥ 1 (i.e., D α J α··· and D̄ α J α··· are no longer zero). As in the non-supersymmetric theories, one can show that the conservation is only violated by multi-trace operators, implying that the J s still have canonical dimension in the planar limit [31]. Note that J αβ is an R-multiplet, whose components include the U(1) R current J αβ , the supercurrent χ αβγ and the stress-tensor J̄ αβγδ , all of which are conserved currents also at finite N. 14 The fact that D α J αβ = D̄ α J αβ = 0 is only violated by multi-trace terms implies that J αβ is, in fact, the exact R-current of the superconformal theory in the planar limit (see also [32]).
The scalar operators O b and O f are contained in a linear multiplet J 0 defined by the condition D² J 0 = D̄² J 0 = 0. In particular, J 0 = Φ̄ e −2V Φ, and after integrating out the auxiliary fields it can be written in components as in (2.18). Note that J̄ µ is a conserved flavor current, and therefore J 0 has dimension ∆ = 1 for all N and k.
2.3 Relations between 2-point correlators
Some planar correlators of single-trace operators in the N = 2 theory can be written in terms of correlators in the non-supersymmetric theories. In this section we explain these relations, which are used extensively in this work.

A typical planar contribution to such a correlator factorizes through the multi-trace vertices of the supersymmetric theory, up to contributions which are not shown explicitly. 15 Therefore, the momentum-space correlator in the supersymmetric theory can be written in terms of correlators of the regular bosonic and fermionic theories as in (2.20). The correlator can be computed explicitly using the known results for planar 2-point functions [5,12], but we will not require its explicit form. We see that correlators of the scalar operators and of the currents in the supersymmetric theory have similar contributions, that factorize through the multi-trace vertices of the supersymmetric theory. We claim that the relation (2.20) is not affected by renormalization, so it is an exact relation of the continuum theories in the planar limit. This follows from the fact that the theories involved are not renormalized in the planar limit: there are no logarithmic divergences, and therefore no need to introduce counter-terms for either the couplings or for the operators [2].

Next, consider the correlator ⟨O b O b ⟩ T . A typical planar contribution is shown in figure 2, and we can write this correlator as in (2.21). The value of the contact term ⟨O b O f ⟩ T can be shifted by introducing a conformally-invariant term in the action of the form ∫ d³x D̂σ̂, where D̂ and σ̂ source O b and O f respectively. Introducing this term would invalidate the equality (2.21), because it would shift the contact term. Therefore, for the rest of this section we set this term to zero, as well as any other finite counter-term that can affect correlators at coincident points.

The same idea can be used to compute correlators in the critical bosonic theory, where contributions factorize through the double-trace vertex (φϕ)². The critical bosonic theory has a scalar operator O b of dimension 2 + O(1/N). As explained above, we compute correlators of this operator by flowing to it from λ̄4 O b . For example, to compute the 2-point function we consider the corresponding correlator in the deformed theory. Taking the IR limit λ̄4⁻¹ |p| → 0 and discarding a linear divergence, we find the critical-theory 2-point function; using the known result of the bosonic 2-point function, it can be evaluated explicitly.

Next, let us consider 2-point functions of currents. For a fermionic current J f s in the supersymmetric theory, perturbation theory in the multi-trace couplings tells us that the 2-point function factorizes as in (2.25). Here, the currents have arbitrary polarizations, which we do not write explicitly to avoid clutter. We would now like to argue that the correlator ⟨J f s O f ⟩ that appears on the right-hand side of (2.25) vanishes. This may seem obvious due to conformal symmetry, but this argument only applies to the correlator at separated points. If ⟨J f s O f ⟩ contains a non-vanishing contact term, which is equivalent to a polynomial in the momentum, then this term would contribute to ⟨J f s J f s ⟩ T even at separated points. This is because the overall term on the right-hand side of (2.25) would not be a contact term in this case. In appendix B we prove that all planar correlators of the form ⟨JO⟩, with one current and one scalar operator insertion, vanish in our theories even at coincident points. We therefore have the relation ⟨J f s J f s ⟩ T = ⟨J f s J f s ⟩ F k,N . A similar argument for bosonic currents leads to the analogous equalities for planar 2-point functions in Chern-Simons vector models (a schematic summary is given below).
These relations are exact in the planar limit, and hold even at coincident points (i.e. including contact terms). Similar relations for 3-point functions will be derived in section 4.
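Schematically, the argument of this subsection yields (our shorthand; the multi-trace vertex factors and the normalizations of the suppressed equations are omitted):

$$
\langle J^f_s J^f_s\rangle_{T} \;=\; \langle J^f_s J^f_s\rangle_{F_{k,N}} \;+\; \langle J^f_s O^f\rangle_{F_{k,N}}\,(\text{multi-trace vertex})\,\langle O^f J^f_s\rangle_{F_{k,N}} + \cdots \;=\; \langle J^f_s J^f_s\rangle_{F_{k,N}},
$$

since ⟨J f s O f ⟩ vanishes even at coincident points, and similarly ⟨J b s J b s ⟩ T = ⟨J b s J b s ⟩ B k,N for the bosonic currents.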
3 Duality map of the N = 2 theory

Our goal in this section is to determine how the single-trace conformal primary operators of the N = 2 theory T k,N map under Giveon-Kutasov duality. As we saw in section 2, for each spin s the theory T k,N has two single-trace conformal primaries J b s and J f s , but only one single-trace superconformal multiplet J s . What we will determine is how J b s and J f s mix under the duality.

Let us first briefly summarize how the global symmetry charges of T k,N transform under Giveon-Kutasov duality [8,9]. The theory has two global U(1) symmetries, one of which is the flavor symmetry generated by J̄; the other is an R-symmetry, under which ϕ has charge r = 1/2 and the gaugino λ has charge 1. 16 As we discussed in section 2.2, in the planar limit this is the exact R-current of the SCFT T k,N .
Let T ke,Ne denote the 'electric' N = 2 theory. Its 'magnetic' dual is given by a U(N m ) km Chern-Simons gauge theory coupled to a chiral superfield Φ in the anti-fundamental representation, with the identifications k m = −k e and N m = |k e | − N e + 1/2. Moreover, Φ has charge −1 under the flavor U(1), and the R-charges are the same as in the electric theory (in particular, Φ has R-charge 1 − r = 1/2). By a suitable field redefinition, the magnetic theory can be written in terms of a chiral multiplet in the fundamental representation (with the same global symmetry charges as above); namely, it can be written as the theory T km,Nm .
3.1 Map of single-trace operators
Let us now deduce the duality map of single-trace operators in the theory T k,N . As we saw in section 2.2, the theory T k,N has one multiplet J s for each integer spin (s = 0, 1, 2, . . .). J s includes single-trace operators, plus possible multi-trace corrections that are not important for us. The dual theory T −k,|k|−N +1/2 has the same spectrum of single-trace multiplets. Because there is only one single-trace multiplet of each spin, under the duality T k,N → T −k,|k|−N +1/2 the J s multiplets must map to themselves. We will now determine the overall numerical factors c s that can appear in the transformation J s → c s J s . Note that |c s | depends on the overall normalization of the J s that was defined in section 2, but there can also be signs in the duality map. In fact we will show that, in our normalization, J s → (−) s+1 J s under Giveon-Kutasov duality, and in components J b s → (−) s J f s and J f s → (−) s J b s ; see (3.3). The derivation is slightly technical and can be safely skipped by the reader.

16 The topological U(1) symmetry generated by J top. µ = (ik/8π) ε µνρ tr(F νρ ) is equivalent to the flavor current J̄ µ by the equations of motion.
The multiplets J 0 and J 1 transform according to the duality maps of the flavor and R symmetries, respectively, which implies that c 0 = −1 and c 1 = 1. 17 In particular, the duality map of the components O b , O f , J b 1 and J f 1 of the superfields J 0 and J 1 (see (2.15), (2.16) and (2.18)) is determined to be that given in (3.4). In determining the remaining constants c s it is useful to consider the duality map in the basis of J b s and J f s . Let M s be the 2-by-2 duality transformation matrix defined by (3.5). To determine M s , first note that because J s with different s values do not mix, we have the 2-point function constraint (3.6). 18 Plugging into (3.6) the expression (2.16) of J s and J̄ s in terms of J b s and J f s , we obtain (3.7). Using also the fact that ⟨J b s J f s ⟩ vanishes in the planar limit, which is easy to prove diagrammatically, we conclude that the matrix of planar 2-point functions of J b s and J f s is proportional to the identity. Therefore, the transformation matrix M s must be proportional to an orthogonal matrix in order to preserve the matrix of 2-point functions.
The rest of the argument follows by induction, whose basis is given in (3.4). Assume we have already determined the map for all spins below s. Going to the J b s , J f s basis, we conclude that M s T has an eigenvector (1, 1) with eigenvalue (−) s . To summarize, we learned the constraints collected in (3.8). The equations (3.8) have two solutions for the duality map, summarized schematically below.
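In matrix form, the constraints described above read (a schematic summary in our notation; precise normalizations are those of the suppressed equations):

$$
\begin{pmatrix} J^b_s \\ J^f_s \end{pmatrix} \;\to\; M_s \begin{pmatrix} J^b_s \\ J^f_s \end{pmatrix}, \qquad M_s^T M_s \propto \mathbb{1}, \qquad M_s^T \begin{pmatrix} 1 \\ 1 \end{pmatrix} = (-1)^s \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
$$

Up to normalization, the two solutions are the diagonal map $M_s \propto (-1)^s\,\mathbb{1}$ and the off-diagonal map $M_s \propto (-1)^s \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$; the free-theory 3-point function argument below selects the off-diagonal one, i.e. $J^b_s \to (-1)^s J^f_s$.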
In a free scalar theory, ⟨J b s 1 J b s 2 J b s 3 ⟩ is non-zero if and only if s 1 + s 2 + s 3 is even (see e.g., [33]). Let us turn on a weak coupling, so that both the theory and its dual are interacting. If s is even then the left-hand side of (3.11) is still non-zero in the weakly-coupled theory, while the right-hand side is identically zero in the planar limit (cf. equation (4.9) below). If s is odd then we reach the same conclusion by considering equation (3.12). This concludes the derivation of the mapping (3.3).
4 Bosonization from perturbation theory
In this section we show that, for a large class of planar correlators, the supersymmetric duality and non-supersymmetric bosonization dualities are equivalent. For this purpose we use perturbation theory in the multi-trace couplings of the N = 2 theory, as explained in section 2.3. We will first prove this statement for all 3-point functions, and then extend the proof to the 4-point function of spin 1 currents, all at separated points.
4.1 3-point functions
The supersymmetric duality implies the relations (4.1)-(4.3).
We will now prove that they are equivalent to the non-supersymmetric bosonization relations (4.4). These hold in the planar limit at separated points, for any positive spins s 1 , s 2 , s 3 . 19 At the level of 3-point functions, the relations above imply a mapping of operators between the bosonic theory B crit. ke,Ne and the fermionic theory F km,Nm , given in (4.5)-(4.7). The minus signs in the duality map of the currents were not noticed previously; they are consistent with all the explicit computations of correlation functions that were done in the past, as those particular correlators were not sensitive to those signs. We begin with the 3-point function of fermionic currents, which can be written as in (4.8),

19 The correlator with all three spins equal to zero and its fermionic counterpart are pure contact terms, and will not be considered here.
where the remaining terms all include factors of ⟨J f s O f ⟩ F k,N . We show in appendix B that these 2-point functions vanish (also at coincident points), so we find an equality between the 3-point functions in the supersymmetric and fermionic theories. (As explained in section 2.3, factorization relations such as (4.8) hold when all the finite counter-terms that affect correlators at coincident points are set to zero.) Extending this argument to other 3-point functions, we find the relations (4.9). These relations hold for any positive spins s 1 , s 2 , s 3 . The duality of the supersymmetric theory then equates the corresponding correlators on the two sides, and the equality (4.4) follows.
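Although the explicit equations (4.5)-(4.7) are suppressed here, the current part of the operator map they encode reappears in section 5.3: under B crit. ke,Ne ↔ F km,Nm , schematically

$$
J^b_s \;\longleftrightarrow\; (-1)^s\, J^f_s ,
$$

with the scalar operators related up to a factor fixed by the flavor contact term (the k/4π factor discussed in section 5.2).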
Next, consider correlators with two bosonic currents of spins s 1 , s 2 and with one scalar insertion. In the supersymmetric theory, we can write these as in (4.10). In the second line of (4.10) we used the relation (4.11) between the regular and critical bosonic theories. The supersymmetric duality (4.2) implies that the correlator in (4.10) is equal to its image under the duality map. 20
4.2 4-point function of spin 1 currents
In this section we prove that the supersymmetric duality relation (4.13) for the 4-point function of spin 1 currents implies the corresponding non-supersymmetric bosonization relation (4.14). On the electric side, the correlator factorizes through the multi-trace vertices as in (4.15); this is shown diagrammatically in figure 3.
On the magnetic side, we have the analogous factorization (4.16). Using (4.5) and (4.11), we can write the 3-point function as in (4.17). This holds true at separated points, but in our derivation it will be important that it is also true at coincident points. The reason for this was explained in the discussion below equation (2.25): a scheme-dependent contact term in the 3-point function (i.e. a polynomial in the momenta) will affect the 4-point functions (4.15) and (4.16) even at separated points.
It is therefore important that all such terms map correctly under bosonization. We will prove that this is indeed the case below. Continuing with this assumption, equation (4.13) implies the relation (4.18). Now, consider the critical bosonic theory. The 4-point function can be written as in (4.19). It is easy to check that the right-hand sides of (4.18) and (4.19) are equal, and this proves the bosonization relation (4.14). 21 Note that, again, we did not need to use any explicit expressions for the 2-point functions, but only simple relations that follow from perturbation theory in the multi-trace couplings.
It is left to show that the 3-point function ⟨J f 1 J f 1 O f ⟩ maps correctly under the bosonization, including contact terms. We assume that universal (scheme-independent) contact terms agree under the duality, because such terms are physical observables. On the other hand, scheme-dependent contact terms (which correspond to polynomials in the momenta) can be shifted by local counter-terms that are composed of the background fields. Such contact terms might not agree under the duality unless we tune the corresponding counter-terms, but we had already set all such counter-terms to zero. In other words, if we find scheme-dependent contact terms whose value does not map correctly under the duality, then our argument does not go through. The only scheme-dependent contact term that we can write down in ⟨J f µ J f ν O f ⟩ is proportional to δ µν . This term is ruled out because it is not conserved.
4.3 Other correlators
It is plausible that the argument above can be generalized to other higher-point functions.
The argument for 4-point functions of currents with general spins goes through as-is, except

21 This follows from the relations above and from equation (2.23).
that one must now prove that the 3-point functions of the form ⟨JJO⟩ have no scheme-dependent contact terms (or that such contact terms, if they exist, map correctly under the duality). For other correlators such as ⟨JJJO⟩, or for 5-point functions and above, perturbation theory in the multi-trace couplings becomes more cumbersome. One complication is that, for most correlators, both the (φϕ)(ψψ) and the (φϕ)³ vertices of the supersymmetric theory appear in the factorization. Instead of following this route, in the next section we will present an argument that proves the bosonization of all planar correlators.
5 Bosonization from double-trace flow
In this section we will give a simpler derivation of the basic bosonization duality from the Giveon-Kutasov duality, which applies to all planar correlators in those theories. To make contact with the non-supersymmetric theories we add a (φϕ)² double-trace deformation to the N = 2 action and flow to a CFT in the IR. The induced duality of that non-supersymmetric CFT will be shown to imply the duality B crit. k,N ↔ F −k,|k|−N . The double-trace deformation (φϕ)² is equivalent to coupling the flavor U(1) current to a background vector superfield, and making the top component of the latter dynamical. We therefore start in section 5.1 by reviewing how to carefully couple both sides of Giveon-Kutasov duality to this background vector multiplet.
5.1 Coupling to background vector multiplet
Let V̂ be a background vector superfield for the flavor U(1) symmetry of the theory T k,N . The action of the electric theory T ke,Ne coupled to V̂ is given in (5.1) and (5.2), where S CS and S N were defined in (2.7) and (2.8). Below, we will always use a hat to denote a background field. In general, if we demand that our theory be gauge invariant in the background vector fields that source global symmetry currents, then we must add certain Chern-Simons terms in these background fields. This is due to the parity anomaly [34][35][36]. If we also insist on supersymmetry, then we need to add supersymmetric Chern-Simons terms in the background vector superfields. The same terms are necessary for the validity of the supersymmetric duality [9]. Indeed, these global Chern-Simons terms generate contact terms in the correlation functions of currents; these terms must be added to the dual theories appropriately, such that the contact terms match under the duality. In our case, we can account for the contact terms in the 2-point function of the flavor current multiplet by shifting the action of the magnetic theory by the global Chern-Simons terms (5.3).
Here, we are only interested in the effect of these terms on the duality, so for convenience we moved the total contribution to the magnetic theory. The subscript F F denotes the fact that these terms affect the Flavor-Flavor 2-point function. The global Chern-Simons terms are determined on both sides of the duality by the parity anomaly, up to an integer shift. The integer part can be determined (for example) by comparing the S³ partition function on both sides of the duality [30]. Notice that the global Chern-Simons terms include a term proportional to σ̂D̂, which shifts the value of the correlator ⟨O b O f ⟩ T . Therefore, another way to determine k F F is to demand that the correlator ⟨O b O f ⟩ T (a pure contact term) maps correctly under the duality. 22 In the T theory, the Chern-Simons term (5.3) only serves to ensure that contact terms in certain 2-point functions agree under the duality. However, our next move will be to define a new theory, T̃, by making the field D̂ dynamical. In this theory the term D̂σ̂ becomes dynamical and affects correlators at separated points. Therefore, it is important to correctly add the Chern-Simons term to the T theory before proceeding.
Taking the global Chern-Simons term (5.3) into account, the action of the magnetic theory is given in (5.5). Giveon-Kutasov duality then implies the identity (5.6) for the partition function Z k,N [V̂] of the theory T k,N . The identity (5.6) exhibits the equivalence T k,N ↔ T −k,|k|−N +1/2 at the level of correlators of the current multiplet, obtained by taking derivatives with respect to V̂.
5.2 General bosonization argument
The general derivation of 3d bosonization from Giveon-Kutasov duality proceeds as follows. Consider the supersymmetric theory T k,N coupled to a background vector multiplet V̂ for the flavor U(1) symmetry. The D̂ (top) component of V̂ acts as a source for O b = φϕ (see (2.8)). Let us introduce a term −∫ d³x (1/4g) D̂² in the action, and make D̂ dynamical. This is equivalent to adding a double-trace O b ² deformation, via the Hubbard-Stratonovich trick. To emphasize that D̂ is now dynamical we change our notation for it by removing its hat: D̂ → D. We also introduce a source B̂ 0 for the new dynamical field D. We perform this deformation on both sides of the duality by multiplying both sides of equation (5.6) by the corresponding factor,
and path-integrating over D. We then flow to the IR fixed points. The resulting non-supersymmetric CFTs in the IR will be denoted by T̃ k,N . In the T̃ k,N theories the term (1/4g) D² is irrelevant and can be dropped (at least at large N).
Notice that D now appears linearly in both the electric and magnetic actions. In the electric theory (5.1) the path integral over D leads to the constraint O b = k e B̂ 0 . On the other hand, in the magnetic theory (5.5) the constraint we obtain is O b = k m B̂ 0 − (k F F /2π) σ, due to the contact term (5.3); this difference will be crucial to the derivation of the correct duality map (the mechanism is sketched below). Plugging the constraints back into the actions of the electric and magnetic theories, we find (5.9) and (5.10), 23 where the action S̃ k,N is given in (5.11), and S CS , S b and S f were defined in (2.2), (2.5) and (2.3), respectively. The omitted terms in (5.9) and (5.10) contain additional couplings of sources to the operators (φψ) and J̄ µ (defined in (2.17)), as well as terms that depend only on the background fields. Those terms will not be important for us. The (ψϕ)(φψ) interaction is expected to be exactly marginal in the planar limit. In addition, it does not affect planar correlators of bosonic single-trace operators, and we can therefore ignore it for our purposes.
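The constraints quoted above arise as delta functions from the linear appearance of D; schematically (our illustrative normalization):

$$
\int \mathcal{D}D\;\exp\!\left(\int d^3x\; D\,\big(k_e \hat B_0 - O_b\big)\right) \;\propto\; \delta\big(O_b - k_e \hat B_0\big),
$$

while on the magnetic side the σD term coming from the global Chern-Simons contact term (5.3) shifts the constraint to O b = k m B̂ 0 − (k F F /2π) σ.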
Moving on, we define the partition function of the T̃ k,N theory in (5.12). At large N, from (5.4) we see that k F F → −k m /2, and the N = 2 duality (5.6) implies the identity (5.13). The relation (5.13) should be understood as an identity of correlators of D and O f at separated points, obtained by taking derivatives w.r.t. B̂ 0 and σ̂. In particular, we find that under T̃ k,N → T̃ −k,|k|−N , the operators O f and D get mapped to each other according to the prescription (5.14). This agrees with the mapping (4.7) that was derived using perturbation theory in the multi-trace couplings of the N = 2 theory (note that D is equal to O b /N).
In the planar limit the above duality can be directly related to the 3d bosonization duality between the boson theory B crit. k,N and the fermion theory F −k,|k|−N , whose actions were given in (2.4) and (2.1). Indeed, it is not hard to verify that at the level of planar correlators of the operators D and O f , the theory T̃ k,N factorizes into a decoupled product of the B crit. k,N and F k,N CFTs; in particular, planar correlators of D and O f factorize into products of correlators in the two decoupled theories (a schematic form is given at the end of this subsection). 24

Another way to reach this conclusion is to turn on a mass for the fermion. In the planar limit the scalar propagator does not receive corrections from the fermion, so the scalar remains massless and we flow to the critical bosonic theory in the IR. Under the duality (5.14), the deformation maps to a relevant deformation involving the bosonic D, which does not correct the fermion propagator. In the IR the scalar decouples, and we flow to the fermionic theory. We conclude that under B crit. k,N → F −k,|k|−N , the operators D and O f also map to each other according to (5.14). In fact, one can verify that the k/4π factor in (5.14) agrees with known results for 2-point and 3-point functions [5,12]. Note that correctly accounting for the N = 2 duality map of the contact term in the 2-point function of the flavor U(1) current was crucial in deriving this factor. We view the above arguments as a proof that (5.14) must hold in any n-point function of D and O f in the theories B crit. k,N and F k,N , given that Giveon-Kutasov duality of the theory T k,N is correct.
Including currents
The above arguments can be easily generalized to include correlators of the other single-trace operators $J^b_s$ and $J^f_s$, given in (2.11) and (2.12). We simply couple these operators to sources in the N = 2 action and follow the same derivation leading to (5.13). In this case the duality map in the $\tilde T_{k,N}$ theory is the same as the one of the N = 2 theory, which was given in (3.3). As before, from the point of view of planar correlators of $D$, $J^b_s$, $O_f$ and $J^f_s$, the $\tilde T_{k,N}$ CFT factorizes into a decoupled product of the $B^{\text{crit.}}_{k,N}$ and $F_{k,N}$ theories. Therefore under $B^{\text{crit.}}_{k,N} \to F_{-k,|k|-N}$, we must have that $J^b_s \to (-)^s J^f_s$, in agreement with (4.7).

There is an important loophole in the above argument that we must address. It is possible that for the N = 2 duality to be valid in the presence of sources for the currents, one must shift the action of the magnetic theory by a local functional of those sources and of $\hat D$. In the presence of such terms the constraint imposed by integrating over $\hat D$ would be modified, and our conclusions could be invalidated. Indeed, the flavor-flavor contact term (5.3), which includes a term proportional to $\hat D\hat\sigma$, was crucial in deriving the duality map (3.3). We will now show that it is not possible to write another local functional of the sources that would end up contributing to correlators in the bosonic and fermionic theories at separated points.
To see this, let us denote the sources of $J^b_s$ and $J^f_s$ by $\hat B^b_s$ and $\hat B^f_s$. We take these tensors to be symmetric and traceless. The local functionals we consider are of the form $S_{\text{c.t.}}(\hat D, \hat\sigma, \hat B^b_s, \hat B^f_s, \dots)$, where $(\dots)$ denotes the fundamental fields, and where all terms are at least quadratic in the sources. First note that we only have to consider functionals that are at most linear in $\hat\sigma$ and $\hat B^{b,f}_s$, but can otherwise have any positive power of $\hat D$. This is because non-linear terms in $\hat\sigma$ and $\hat B^{b,f}_s$ would only affect contact terms in the transformed theories (in which $\hat D$ is dynamical). We will show that there are no such terms that include a factor of $\hat B^{b,f}_s$; similar considerations rule out terms that involve only $\hat D$, or both $\hat D$ and $\hat\sigma$ (except for the term $\hat D\hat\sigma$). The most general local functional we can write down, which satisfies the above requirements, is of the schematic form $\int d^3x\,\hat D^n\,\hat B^{b,f}_{\mu_1\cdots\mu_s} O^{\mu_1\cdots\mu_s}$, where $n > 0$, and $O$ is an operator of dimension $\Delta$ and spin $s$. The operator $O$ may be any product of a local operator with a differential operator whose derivatives act on $\hat B^{b,f}$, but its particular form will not be important. One can now easily check that the twist of $O$ is $\Delta - s = 1 - 2n < 0$. In order to have negative twist, $O$ must include factors of $\delta_{\mu\nu}$ or $\epsilon_{\mu\nu\rho}$, but then the counter-term vanishes by assumption (the sources are assumed to be symmetric and traceless). This concludes the proof.
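For concreteness, the twist bound follows from a simple dimension count; taking $[\hat D] = 2$ and $[\hat B^{b,f}_s] = 2 - s$ (the sources couple to operators of dimension 1 and $s+1$, respectively — an assumption consistent with, but not spelled out in, the text above), a local counter-term in 3d must have total dimension 3:

\[
2n + (2 - s) + \Delta = 3
\;\;\Longrightarrow\;\;
\Delta = 1 - 2n + s
\;\;\Longrightarrow\;\;
\Delta - s = 1 - 2n < 0 \quad \text{for } n > 0 .
\]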
Theories with one boson and one fermion
In this section we will derive the duality map of various supersymmetry-breaking deformations of the N = 2 theory. In particular, we reproduce the duality map presented in [7] for the theory with both a scalar and a fermion, and also extend their results to other deformations.
6.1 N = 2 → N = 1

Let us start by breaking N = 2 supersymmetry only partially, such that we obtain an N = 1 duality. To do that we first rewrite the action (5.2) of the N = 2 theory $T_{k,N}$ in N = 1 language.[25] The N = 2 chiral superfield $\Phi(x,\theta,\bar\theta)$ can be written in terms of an N = 1 complex scalar superfield $\phi(x,\theta)$. The N = 2 vector multiplet $\mathcal V(x,\theta,\bar\theta)$ decomposes into an N = 1 vector multiplet $\Gamma_\alpha(x,\theta)$ plus a real scalar multiplet $B(x,\theta)$. Similarly, we will denote the N = 1 components of the background vector multiplet $\hat{\mathcal V}$ by $\hat\Gamma_\alpha$ and $\hat B$. The superfield $B$ is auxiliary and can be integrated out. After $B$ has been eliminated the action of $T_{k,N}$, defined in (5.2), can be written in terms of the remaining N = 1 variables as in (6.1). [25] More details on the N = 1 decomposition of N = 2 Chern-Simons matter actions can be found in [37].
where $\mathcal D_\alpha\phi \equiv D_\alpha\phi - i\Gamma_\alpha\phi$ and $\mathcal D_\alpha\bar\phi \equiv D_\alpha\bar\phi + i\bar\phi\,\Gamma_\alpha$. Moreover, in terms of N = 1 variables the abelian Chern-Simons term $S_{CS}(\hat{\mathcal V})$ that appears on the r.h.s. of the identity (5.6) is given by (6.2). Let us now multiply both sides of the identity (5.6) by $e^{-\delta S}$ with $\delta S \equiv \frac{k\mu}{2\pi(w-1)}\hat B + \frac{k}{8\pi i(w-1)}\hat B^2$, and then path-integrate both sides over $\hat B$. The choice of parameters in $\delta S$ is such that $\mu$ and $w$ coincide with the definitions of [7]. After $\hat B$ has been eliminated by using its equations of motion, we are left with an identity that exhibits the self-duality of an N = 1 U(N) Chern-Simons theory coupled to a fundamental scalar superfield $\phi$ with an arbitrary renormalizable superpotential. The action of this N = 1 theory is given in (6.4). The duality map of the parameters in (6.4) is found to be as in (6.5),[26] where we set the coefficient $\kappa_{FF}$ of the contact-term action in (5.6) to its large N value: $\kappa_{FF} \to -k_m/2$. This is precisely the duality map for $\mu$ and $w$ that was found in [7].
The same reasoning that carried us so far can be used to obtain the bosonization duality map for the most general renormalizable Chern-Simons vector model with one scalar and one fermion. We multiply the identity (5.6) by $e^{-\delta S}$, where $\delta S$ is now the most general renormalizable functional of the auxiliary fields $\hat\sigma$, $\hat\lambda$, $\hat{\bar\lambda}$ and $\hat D$ in the background vector multiplet. In particular, $\delta S$ is given by (6.6). The full action of the electric theory, after integrating out the auxiliary fields of the dynamical vector multiplet $\mathcal V$, can be written as in (6.7). Here, we set the background gauge field to zero, and defined $\chi \equiv \bar\phi\psi$ and $\bar\chi \equiv \bar\psi\phi$. The action of the magnetic theory is given in (6.8).
We now integrate over the background auxiliary fields on both sides of the identity (5.6), where the full actions on both sides are given by (6.7) and (6.8). To match our conventions with those of [7] we introduce the parameters $m_b$, $m_f$, $b_4$, $x_4$, $x_6$, $y_4$ and $y_4'$, and identify these with the parameters in $\delta S$ according to (6.9)-(6.15). After the auxiliary fields $\hat\sigma$, $\hat D$, $\hat\lambda$ and $\hat{\bar\lambda}$ have been integrated out in the electric theory, we are left with the most general U(N) Chern-Simons theory coupled to a fundamental scalar $\phi$ and fermion $\psi$, with the matter potential (6.16).[27] Repeating this in the magnetic theory, we find that the self-duality map is given in (6.17). The mapping that we found agrees with [7]. The transformation rules for the couplings $y_4$ and $y_4'$ are new. Note that the point $x_4 = x_6 = -y_4 = 1$ and $m_b = m_f = b_4 = 0$ is a fixed point of (6.17), which corresponds to the N = 2 Giveon-Kutasov duality.
Discussion
Let us summarize our results. We proved that all planar correlators of single-trace operators must map correctly under the 3d bosonization map given in (4.7) if the Giveon-Kutasov duality is correct. In the process we have uncovered signs in the duality transformation that were not noticed previously. Moreover, we gave a new derivation of the transformation (6.17) of the most general renormalizable matter potential $V(\phi,\psi)$ in the Chern-Simons vector model with both a scalar and a fermion; the transformation rule for some of the couplings in $V(\phi,\psi)$ was not known previously. The main advantage of our approach is that it is simple, and does not rely on making complicated computations.

We exhibited the relation between the N = 2 theory and the non-supersymmetric bosonic and fermionic models in two different ways. In section 4 we showed that planar correlators of the N = 2 theory can be expressed algebraically in terms of correlators of the non-supersymmetric theories. On the other hand, in section 5 we have seen that these theories are related by a double-trace flow, followed by a mass deformation to decouple either the boson or the fermion. At large N, these two approaches are related. For example, the critical O(N) model is related to the free O(N) model by a double-trace flow, and the planar correlators of the critical O(N) model are algebraically related to those of the free model. These relations can be seen by re-summing the perturbative series in the double-trace interaction, similarly to our approach in section 4. Alternatively, by re-writing the double-trace deformation using the Hubbard-Stratonovich trick, the correlators of the two theories are seen to be simply related by a Legendre transform [38-40]; this is similar in spirit to our approach in sections 5 and 6.
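The Legendre-transform statement alluded to here takes the familiar schematic form (conventions illustrative): if $W_{\text{free}}[J]$ is the generating functional of the free theory with source $J$ for $O = \phi^2$, then

\[
W_{\text{crit}}[\sigma] \;=\; W_{\text{free}}[J] - \int d^3x \, J\,\sigma
\;\Big|_{\frac{\delta W_{\text{free}}}{\delta J} = \sigma},
\]

so planar correlators of $O$ in the critical model are algebraically determined by those of the free model.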
The relations between the N = 2 and non-supersymmetric theories can be used to derive the duality of the latter from that of the former. In order to apply this strategy, we derived the N = 2 duality map of our supersymmetry-breaking double-trace deformation. This map was shown to be related to the known transformation of the flavor U(1) current multiplet, via the Hubbard-Stratonovich trick. A crucial ingredient in the derivation was that we had to extend the N = 2 duality such that it held also for the contact term in the 2-point function of the U(1) current multiplet.
The manipulations used in sections 5 and 6 involved deforming the actions of two dual theories by background fields, and then path-integrating over them. These manipulations are rather formal, and one may ask whether they are still valid after renormalization. Here we rely on the fact that our theories are essentially finite in the planar limit. Indeed, in this limit the R symmetry of the N = 2 theory is not renormalized, and its supersymmetry-breaking multi-trace deformations do not lead to logarithmic divergences. Our path-integral manipulations are therefore completely well defined in the planar limit.
There are several goals one might hope to achieve through a better understanding of the relation between the N = 2 and non-supersymmetric dualities, which we leave to future work. Most importantly, this understanding could lead to evidence for non-supersymmetric bosonization at finite N. The main obstacle to extending our arguments (or those of [7]) to finite N is that this would require taking renormalization effects into account. Moreover, recall that our argument relied on the decoupling of the Wilson-Fisher scalar and the fermion in the CFT $\tilde T_{k,N}$, which arises by flowing to the IR from the N = 2 theory $T_{k,N}$. At finite N this flow might require a fine-tuning of the classically marginal interactions $(\bar\psi\phi)(\bar\phi\psi)$ and $(\bar\psi\phi)^2 + \text{c.c.}$. In the planar limit this subtlety was avoided because these deformations are exactly marginal. It would be interesting to check whether they become relevant or irrelevant at finite N. Even if we could understand the $T_{k,N} \to \tilde T_{k,N}$ flow at finite N, beyond the planar limit it is no longer true that the scalar and fermion are decoupled in $\tilde T_{k,N}$. We would then need to show that one can flow from $\tilde T_{k,N}$ to the bosonic and fermionic models.
There are other future research directions that are interesting even if we restrict ourselves to the large N limit. Supersymmetry makes it possible to compute many interesting observables, such as partition functions on curved manifolds and correlation functions of Wilson loops. These quantities are currently not known in the non-supersymmetric theories even in the planar limit. It would be interesting if we could use the supersymmetric results to learn about these observables in the non-supersymmetric theories. In addition, there is a large class of dualities in N = 2 Chern-Simons matter theories. It is possible that the simple arguments given in this paper could be extended to find new examples of non-supersymmetric dualities.
It is plausible that our arguments could be extended to the critical fermion and regular boson theories. For example, one could imagine that there is a flow to the N = 2 theory from a non-supersymmetric critical fermion plus regular boson model in the UV. The UV theory is self-dual, and the fermion and scalar are decoupled at large N, similarly to the $\tilde T_{k,N}$ theory in this paper. It would be interesting to verify whether this scenario is correct and to flesh out its details.
There is an additional open question that is related to our study of n-point functions in Chern-Simons vector models. These theories are conjectured to be holographically dual to Vasiliev theories of higher-spin gravity in AdS$_4$ [41-43], which have an infinite tower of parity violating couplings. One of those bulk couplings was matched with the 't Hooft coupling on the CFT side [44], while the interpretation of the other couplings is unknown. The structure of Vasiliev's equations suggests that those additional parameters may only affect boundary 5-point functions and higher, which have never been computed. If these couplings do have a physical effect then it leads to a puzzle in the holographic duality, because there are no obvious marginal parameters on the CFT side that could correspond to those parameters. In particular, one would expect that the bosonic and fermionic models, which are dual to one another under bosonization, are holographically dual to bulk theories with generally different values of these parameters. The bosonization duality could then fail at the level of planar 5-point functions and higher. Since our results give evidence that all planar n-point functions agree under bosonization, they also give indirect evidence that those bulk couplings are not physical. It would be interesting to better understand this issue, for example by counting solutions to the conformal bootstrap, as was done in [45], but for theories with slightly broken higher-spin symmetry.
A Conventions
In this appendix we collect some details on our conventions regarding N = 2 supersymmetry in 3d. We work in 3d Euclidean space with flat metric $\delta_{\mu\nu} = \mathrm{diag}(1,1,1)$, where $\mu,\nu = 1,2,3$. The Dirac matrices are defined to be the usual Pauli matrices, $(\gamma^\mu)_\alpha{}^\beta = (\sigma^\mu)_\alpha{}^\beta$, where $\alpha,\beta = 1,2$. Spinor indices are raised and lowered from the left with the antisymmetric tensors $\varepsilon^{\alpha\beta}$ and $\varepsilon_{\alpha\beta}$, where $\varepsilon_{12} = -\varepsilon^{12} = -1$. When indices are suppressed, their contraction is defined using the North-West to South-East convention, $\psi\chi \equiv \psi^\alpha\chi_\alpha$. The spinors $\psi$ and $\bar\psi$ are independent in Euclidean space, whereas in Minkowski space they would be hermitian conjugates. In particular, the Grassmann coordinates on N = 2 superspace are given by two independent complex spinors $\theta^\alpha$ and $\bar\theta^\alpha$. The supersymmetric covariant derivatives $D_\alpha$ and $\bar D_\alpha$ are defined in the standard way and satisfy the usual superspace algebra. In order to construct supersymmetric actions we use the following conventions for superfields. Chiral superfields $\Phi(x,\theta,\bar\theta)$ are defined by the constraint $\bar D_\alpha\Phi = 0$ and admit the standard component expansion. Similarly, anti-chiral superfields $\bar\Phi$ satisfy $D_\alpha\bar\Phi = 0$ and are given by the conjugate expansion. A vector multiplet is described by a real superfield $\mathcal V$, $\mathcal V^\dagger = \mathcal V$, whose components in Wess-Zumino gauge comprise the gauge field, the gauginos $\lambda,\bar\lambda$, and the real auxiliary fields $\sigma$ and $D$. Integration over superspace is defined with the standard Grassmann measure.
B Proof that $\langle J\,O\rangle$ Correlators Vanish Exactly
In this section we prove that correlators of the form $\langle J\,O\rangle$, where $J$ is a current and $O$ is a scalar, vanish exactly in the planar limit. The proof holds when all the finite counter-terms that affect contact terms of 2-point functions are set to zero (see section 2.3).
Let $J^f_s$ be a current with spin $s > 0$, and let $O_f$ be the scalar operator in the fermionic theory $F_{k,N}$. The correlator $\langle J^f_s O_f\rangle$ vanishes at separated points because of conformal symmetry. We will show that the correlator $\langle J^f_s O_f\rangle$ vanishes exactly in the planar limit, even at coincident points. In other words, we will prove that this correlator does not contain contact terms. This is true both in the fermionic and in the N = 2 Chern-Simons vector models, and a similar result will be shown for correlators of the form $\langle J^b_s O_b\rangle$ in the regular bosonic and N = 2 theories. Using perturbation theory, it is then easy to check that $\langle J^f_s O_b\rangle$ and $\langle J^b_s O_f\rangle$ also vanish in the N = 2 theory.
B.1 Fermionic case
In this section we distinguish between correlators that vanish only up to contact terms, and correlators that vanish exactly. Regarding the operator $J^f_s$, we assume that it is symmetric, conserved and traceless inside any planar 2-point function of single-trace operators, but only up to contact terms.[28] Consider the momentum-space correlator $\langle J^f_s(p) O_f\rangle_F$ in the fermionic theory. The most general form it can take is $\langle J^f_{\mu_1\cdots\mu_s}(p)\,O_f(-p)\rangle_F = P_{\mu_1\cdots\mu_s}(p)$, where $P_{\mu_1\cdots\mu_s}(p)$ is a polynomial in the momentum $p_\mu$ of dimension $s$, corresponding to contact terms in the x-space correlator. The corresponding correlator in the supersymmetric N = 2 theory is proportional to this, with a coefficient $c$ that is a non-vanishing function of $N, k$. Here we are using the fact that the 2-point function in the supersymmetric theory can be written in terms of correlators of the non-supersymmetric theories. This can be seen by using a perturbative expansion in the $(\bar\phi\phi)(\bar\psi\psi)$ vertex of the supersymmetric theory, as discussed in section 2.3. Therefore, in order to show that the correlator vanishes in both theories it is enough to show it for the fermionic theory.
Let us now prove that the polynomial $P(p)$ vanishes. First, let us show that $P_{\mu_1\cdots\mu_s}(p)$ is symmetric, conserved and traceless. To see this, consider the 2-point function of the current in the supersymmetric theory, which can be written in the form (B.5).[29] Consistency of (B.5) with current conservation is only possible if $P_{\mu_1\cdots\mu_s}$ is conserved. A similar argument shows that $P_{\mu_1\cdots\mu_s}$ is symmetric and traceless. We conclude that $P_{\mu_1\cdots\mu_s}(p)$ is a conserved, symmetric, and traceless tensor of dimension $s$. It is easy to see that such an object must vanish. Indeed, define $P_s(p;y) \equiv y^{\mu_1}\cdots y^{\mu_s} P_{\mu_1\cdots\mu_s}(p)$, where the $y$ are commuting and null polarizations (i.e., $y\cdot y = 0$). Since $P_{\mu_1\cdots\mu_s}(p)$ is symmetric and traceless, it is uniquely determined from $P_s(p;y)$. Moreover, because $P_s$ has dimension $s$ and the $y$ are null, it can only take the form $P_s = c\,(y\cdot p)^s$, for some constant $c$. Finally, by imposing conservation, $p\cdot\partial_y P_s(p;y) = 0$, we conclude that $c = 0$.
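Explicitly, the last step is the one-line computation

\[
p\cdot\partial_y \, P_s(p;y) \;=\; c \, p^\mu \frac{\partial}{\partial y^\mu} (y\cdot p)^s
\;=\; c \, s \, p^2 \, (y\cdot p)^{s-1},
\]

which is nonzero for generic momentum unless $c = 0$.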
B.2 Bosonic case
In this section we prove that $\langle J^b_s O_b\rangle$ vanishes in both the regular bosonic (without a $(\bar\phi\phi)^2$ deformation) and supersymmetric N = 2 theories, in the planar limit. Here $J^b_s$ is a current with $s > 1$, and $O_b = \bar\phi\phi$ is the bosonic scalar operator. As in the fermionic case, the correlator of the bosonic theory is proportional to the one in the supersymmetric theory, and therefore it is enough to show that the bosonic correlator vanishes. The most general form of this correlator in momentum space is $\langle J^b_{\mu_1\cdots\mu_s}(p)\,O_b(-p)\rangle = Q_{\mu_1\cdots\mu_s}(p)$, where $Q$ is again a polynomial in $p$. $Q$ has dimension $s-1$, which implies that it must include a factor of $\epsilon_{\mu\nu\rho}$. If $Q$ is symmetric then no such term can be written down, and therefore $Q$ vanishes. It is left to show that $Q$ is symmetric, and this can again be shown by considering the 2-point function of the current in the supersymmetric theory. This concludes the proof.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. | 2023-01-20T15:17:39.327Z | 2015-11-01T00:00:00.000 | {
"year": 2015,
"sha1": "3cd385d7685558889780161d9d7c7af91be6af9b",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP11(2015)013.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "3cd385d7685558889780161d9d7c7af91be6af9b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
17446800 | pes2o/s2orc | v3-fos-license | Thrombospondin-2 Influences the Proportion of Cartilage and Bone During Fracture Healing
Thrombospondin-2 (TSP2) is a matricellular protein with increased expression during growth and regeneration. TSP2-null mice show accelerated dermal wound healing and enhanced bone formation. We hypothesized that bone regeneration would be enhanced in the absence of TSP2. Closed, semistabilized transverse fractures were created in the tibias of wildtype (WT) and TSP2-null mice. The fractures were examined 5, 10, and 20 days after fracture using μCT, histology, immunohistochemistry, quantitative RT-PCR, and torsional mechanical testing. Ten days after fracture, TSP2-null mice showed 30% more bone by μCT and 40% less cartilage by histology. Twenty days after fracture, TSP2-null mice showed reduced bone volume fraction and BMD. Mice were examined 5 days after fracture during the stage of neovascularization and mesenchymal cell influx to determine a cellular explanation for the phenotype. TSP2-null mice showed increased cell proliferation with no difference in apoptosis in the highly cellular fracture callus. Although mature bone and cartilage is minimal 5 days after fracture, TSP2-null mice had reduced expression of collagen IIa and Sox9 (chondrocyte differentiation markers) but increased expression of osteocalcin and osterix (osteoblast differentiation markers). Importantly, TSP2-null mice had a 2-fold increase in vessel density that corresponded with a reduction in vascular endothelial growth factor (VEGF) and Glut-1 (markers of hypoxia inducible factor [HIF]-regulated transcription). Finally, by expressing TSP2 using adenovirus starting 3 days after fracture, chondrogenesis was restored in TSP2-null mice. We hypothesize that TSP2 expressed by cells in the fracture mesenchyme regulates callus vascularization. The increase in vascularity increases tissue oxemia and decreases HIF; thus, undifferentiated cells in the callus develop into osteoblasts rather than chondrocytes. This leads to an alternative strategy for achieving fracture healing with reduced endochondral ossification and enhanced appositional bone formation. Controlling the ratio of cartilage to bone during fracture healing has important implications for expediting healing or promoting regeneration in nonunions.
INTRODUCTION
Fracture healing is characterized by defined functional and morphological stages. There is initial formation of a hematoma and associated inflammatory cell influx, followed by vascularization and mesenchymal cell expansion, chondrogenic and osteogenic differentiation, endochondral and intramembranous bone formation, and finally remodeling.(1) At the cellular level, after the initial inflammatory phase, processes that influence healing include mesenchymal cell migration, proliferation, differentiation of the cells to either osteoblasts or chondrocytes, and apoptosis of cells within the callus.(1-4) Angiogenesis and mechanical stability are likely crucial components regulating these cellular events and the resultant callus phenotype.(5,6) In the majority of clinical cases, fractures heal in an uncomplicated manner; however, in severe fractures, and in certain patient populations, such as diabetics and smokers, there is delayed healing and an increased incidence of nonunions. These cases are generally characterized by an increase in chondrogenesis and fibrous tissue development, with a resultant failure to undergo endochondral ossification. Absence of mechanical stability and reduced vascularity are believed to be important predictors of nonunion development and may result in dysregulation of mesenchymal fate.

A wide variety of secreted factors act locally within the healing fracture to regulate vascularity, mesenchymal cell function, and callus development. As examples, vascular endothelial growth factor (VEGF),(5,7) matrix metalloproteinase 9 (MMP9),(8) bone morphogenetic protein 3 (BMP-3),(9) and placental growth factor(10) have all been shown to regulate cell differentiation, callus size, and healing. In addition, extracellular matrix (ECM) proteins, including collagen(11) and osteopontin,(12) have been shown to affect fracture phenotype.
Thrombospondin-2 (TSP2) is another ECM protein that could play a role in regulating fracture healing. TSP2 is a secreted glycoprotein encoded by the Thbs2 gene and was the second member discovered in the family of five thrombospondin proteins.(13,14) TSP2 is a matricellular protein that modulates cell-matrix interactions and is highly expressed in developing and healing tissues.(15) Mice with a targeted disruption of the Thbs2 gene (TSP2-null) exhibit a complex phenotype. In the skeleton, TSP2-null mice possess greater cortical bone thickness as a result of an increase in endocortical bone formation.(16) The increase in bone formation is associated with an increase in mesenchymal progenitor (marrow stromal cell) number, as determined by colony forming unit-fibroblast (CFU-F) assays, and stromal cells lacking TSP2 show an increase in proliferation. TSP2-null mice also exhibit atypical bone formation in response to mechanical loading.(17) In addition to a pronounced bone phenotype, TSP2-null mice show altered soft tissue wound healing(18) and enhanced recovery from ischemic injury of skeletal muscle(19); both occur secondary to an increase in vascularity. TSP2 directly inhibits endothelial cell growth,(20,21) and exogenously delivered TSP2 can regulate angiogenesis in vivo, particularly in association with cancer.(22,23) Considering the bone phenotype and the response to cutaneous wounding and muscle ischemia in the absence of TSP2, we hypothesized that TSP2-null mice would exhibit accelerated fracture healing. To test this hypothesis, we used the TSP2-null mouse and its coisogenic wildtype mouse in an in vivo model of fracture healing. We show here that the fracture callus in the TSP2-null mouse exhibits greater vascularity and cell proliferation, enhanced intramembranous bone formation, and reduced endochondral ossification compared with WT mice. The absence of TSP2 expedites callus bone formation by altering the differentiation fate of mesenchymal cells, shifting differentiation toward osteoblasts instead of chondrocytes. When TSP2 is delivered to calluses after fracture using adenovirus, a WT chondrogenic phenotype is restored.
Mice
All procedures were approved by the Institutional Animal Care and Use Committee. The mice used had a targeted disruption of the Thbs2 gene, which encodes the thrombospondin-2 protein (TSP2-null). (24) Coisogenic WT 129/SvJ mice were used for comparison.
Surgical procedure
We created closed, transverse fractures in both tibias of 63- to 70-day-old mice using methods similar to those described previously by Hiltunen et al.(25) Briefly, mice were anesthetized for all surgical procedures using isoflurane (Aerrane; Baxter, Deerfield, IL, USA), and 0.05 mg/kg of butorphanol tartrate (Torbugesic-SA; Fort Dodge Animal Health, Fort Dodge, IA, USA) analgesic was administered subcutaneously shortly after anesthetic induction. Both legs were prepared for aseptic surgery. Mice were placed in dorsal recumbency on microwaveable heating pads for the duration of anesthesia to maintain normal body temperature. The stifle joint of the right leg was flexed, and a small incision was made just medial to the tibial tuberosity. A 26-gauge hypodermic needle was used to bore a hole in the cortex of the medial aspect of the tibial tuberosity, slightly distal to the stifle joint. A sterile, 0.009-in-diameter, stainless steel pin was inserted into the created hole and advanced down the length of the tibia in the intramedullary canal until resistance was felt, indicating full insertion. This served as an intramedullary pin that would provide stability at the fracture site. This procedure was repeated for the left leg. After pin insertion, the pins were cut to be flush with the cortex, and the skin defect was closed using tissue adhesive (Nexaband; Abbott Laboratories).
Fractures were created in both legs using a custom-made device that uses a sliding weight and guillotine mechanism. This device produces a consistent, controlled-displacement, high-energy impact sufficient to induce fractures in mouse tibias. Mice were placed in sternal recumbency, and each leg was individually placed in the guillotine and fractured. Whole body radiographs were generated using a microradiography system (Faxitron, Wheeling, IL, USA) to verify pin placement and fracture gap location. Fractures analyzed were typically midshaft, simple, transverse fractures, although occasionally fracture occurred in the distal one third of the tibia. Tape "splints" were placed on both tibias to provide initial rotational stability to the fracture region for the first 48 h.
Mice recovered after the procedure under heat lamps. Moistened food was placed on the cage bottom, and water was provided ad libitum. Mice were typically ambulatory within 1 h after surgery and were observed eating within a few hours. Mice were maintained in a cage with wireless tops to reduce climbing activity. No mortality was observed throughout the course of this study.
Tissue harvest and preparation
At harvest, all animals were anesthetized with isoflurane gas anesthetic and humanely killed by cervical dislocation. Right tibias were carefully dissected, the intramedullary pins were removed, and the tibias were placed in 4% paraformaldehyde for 24 h, decalcified in formic acid for 12 h, and transferred to 70% ethanol until further processing for histology or immunohistochemistry (IHC). Left tibias were similarly dissected, wrapped in saline-soaked gauze, and placed in storage at −20°C until µCT scanning and torsional mechanical testing could be performed.

µCT

Samples were scanned using an eXplore Locus SP microCT system (GE Healthcare Preclinical Imaging, London, Ontario, Canada) and reconstructed at an 18-µm isotropic voxel size using the Feldkamp cone beam algorithm. A custom software analysis procedure was specifically developed to quantify the callus properties on these images using Microview (v 2.1.2 Advanced Bone Application; GE Healthcare Preclinical Imaging), similar to that described by Den Boer et al.(26) First, the image was reoriented so that the anterior-posterior and longitudinal axes were aligned with the principal image axes. In the second step, three independent reviewers scrolled through the image planes and measured the maximum callus width using a line that bisected the middle of the marrow cavity, as well as the maximum callus length on the anterior side of the bone (Fig. 1A). The measurements for maximum callus width and maximum callus length were averaged across the three reviewers, and the average length was used to isolate the callus from the image of the entire bone (Fig. 1B). Next, the callus and cortical bone sections were manually segmented using a series of user-defined points with spline interpolation between these points (Fig. 1C). The points for the cortical bone boundary were chosen on slices of the image not more than 30 CT slices (0.540 mm) apart, and spline interpolation was used to define the points in between. These points were reviewed and modified, and a reinterpolation was performed in an iterative process. A similar process was used to define the callus boundary. Next, a single point within the cortical region of interest was used to initiate a region-growing algorithm that detected the cortical bone by finding all connected voxels over a simple global threshold. This region-growing algorithm was confined by the cortical region of interest to ensure that mature bone within the callus, particularly near the proximal and distal ends, was not included in the cortical bone measurements (Fig. 1D). The cortical bone voxels were removed from the image so that they did not bias any measurements (Fig. 1E). Last, the region of interest surrounding the callus was identified (Fig. 1F), a global threshold was applied, and the callus volume, bone volume fraction (BVF), BMD, BMC, tissue mineral content (TMC), and tissue mineral density (TMD) were calculated. Bone mineral measurements represent the mineral contained in the entire callus volume. Tissue mineral measurements represent the mineral contained within the volume defined as bone.

[Fig. 1G caption: Voxel Hounsfield distribution histograms of the callus. The curve for TSP2-null mice is higher after 10 days, indicating that more bone is present at this time point; after 20 days the curve for TSP2-null mice is lower, indicating that these mice have less bone in the callus. The shaded region indicates the range used to separate mineralized voxels from unmineralized voxels; the shaded region is used for TMC and TMD calculations, whereas the entire histogram is used for BMC and BMD calculations. The drop-off in the histogram near a grayscale value of 3500 HU is because of the removal of dense bone (predominantly residual cortical bone from the original cortices) from the image.]
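For illustration only, here is a minimal Python sketch of the global-threshold-plus-region-growing step described above. The original analysis was performed in MicroView; the array names, seed voxel, and threshold value below are hypothetical placeholders, not the authors' code:

    import numpy as np
    from scipy import ndimage

    def grow_cortical_bone(image_hu, seed_voxel, threshold_hu, cortical_roi):
        # Global threshold, restricted to the user-defined cortical ROI
        mask = (image_hu > threshold_hu) & cortical_roi
        # Label connected components; keep the one containing the seed
        # (assumes the seed voxel itself is above threshold)
        labels, _ = ndimage.label(mask)
        return labels == labels[tuple(seed_voxel)]

    def bone_volume_fraction(callus_roi, bone_mask):
        # BVF = mineralized voxels within the callus / total callus voxels
        return bone_mask[callus_roi].sum() / callus_roi.sum()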
Mechanical testing
Tibias were secured in brass pots using a low melting temperature Cerro Alloy (McMaster Carr, Chicago, IL, USA) and mounted into a custom torsion testing device. This torsion tester was equipped with a 50 in-oz reaction torque sensor (Model 2105-50; Eaton, Troy, MI, USA) and an RVDT (Model R30A; Lucas Control Systems, Hampton, VA, USA) for torque and angular displacement measurements, respectively. Raw torque data were conditioned with a strain gage amplifier (2100; Measurements Group, Raleigh, NC, USA), and angular displacement was conditioned with an LVDT amplifier (DTR-451; Lucas Control Systems) before collection. This device was interfaced with LabVIEW (v 7.0; National Instruments, Austin, TX, USA) for data collection and controlled using a custom program that interfaced with a data acquisition system (NI PCI-6251; National Instruments). The bones were tested at a constant displacement rate of 0.5°/s until failure while being kept moist at room temperature. Data were sampled at 1000 Hz and stored for analysis. Analysis was performed using a custom MATLAB (v 7.0.1; The Mathworks, Natick, MA, USA) script. In this script, the torque data were filtered with a third-order Savitzky-Golay FIR smoothing filter with a 0.5-s window before analysis to remove noise. The stiffness was calculated based on a linear regression on the torque-displacement data in a user-selected region, and the script automated calculations for torque at failure, angular displacement at failure, and energy to failure.
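A minimal Python sketch of the analysis pipeline just described (filtering, stiffness regression, failure metrics). The original script was written in MATLAB; the arrays angle_deg and torque, and the user-selected index pair i0, i1, are hypothetical inputs:

    import numpy as np
    from scipy.signal import savgol_filter

    def analyze_torsion(angle_deg, torque, fs=1000.0, i0=0, i1=None):
        # Third-order Savitzky-Golay filter with a 0.5-s window (odd sample count)
        window = int(0.5 * fs) + 1
        torque_f = savgol_filter(torque, window_length=window, polyorder=3)
        # Stiffness: slope of a linear fit over the user-selected linear region
        i1 = len(torque_f) if i1 is None else i1
        stiffness = np.polyfit(angle_deg[i0:i1], torque_f[i0:i1], 1)[0]
        # Failure point taken at peak torque; energy is the area under the curve
        i_fail = int(np.argmax(torque_f))
        return {
            "stiffness": stiffness,
            "torque_at_failure": torque_f[i_fail],
            "angle_at_failure": angle_deg[i_fail],
            "energy_to_failure": np.trapz(torque_f[:i_fail + 1],
                                          angle_deg[:i_fail + 1]),
        }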
Safranin-O histology
Safranin-O staining was performed on right mouse tibias that were paraffin embedded and serially sectioned (7 µm). Briefly, slides were deparaffinized, rehydrated, and exposed to 0.3% Fast Green FCF (Fisher, Pittsburgh, PA, USA). Slides were rinsed in 1% acetic acid (Fisher), immersed in 5.45% Safranin-O (Fisher), rinsed in dH2O, dehydrated, mounted using Permount (Biomeda, Foster City, CA, USA), and visualized using a microscope. These samples were used for measuring total callus area, chondrocyte area (i.e., Safranin-O-positive area), and area of woven bone. Hypertrophic chondrocyte areas were defined based on the characteristic appearance of those chondrocytes.
TUNEL assay
After tissue rehydration, slides were placed in 95°C citrate buffer for 20 min and, after removal, allowed to cool for 20 min. Next, slides were incubated for 15 min in 3% H2O2 and incubated for 2 h with the terminal transferase (TdT; Roche, Basel, Switzerland) and biotin-conjugated 16-dUTP (Roche) reaction mixture. Incubation with streptavidin-conjugated HRP and DAB chromogen and hematoxylin counterstain were carried out in the same manner as was performed for IHC. Mouse testicular tissue was used for controls. Positive control tissues were incubated with DNase. Negative controls used testicular tissue without the TdT or without the 16-dUTP. All control tissues were run in parallel with samples.
Adenoviral delivery
TSP2 and control β-galactosidase adenovirus were generated by the University of Michigan Vector Core. TSP2-null mouse tibias were fractured, and at day 3 after fracture, 10 µl containing 1 × 10^8 TSP2 adenovirus or LacZ control adenovirus particles was injected into the fracture site using Luer tip Hamilton syringes. Each mouse was injected with LacZ on one side and TSP2 on the contralateral side. The mice were given 10 days to heal after fracture and killed. Tissue was collected and processed as described in the IHC methods and stained according to the Safranin-O protocol detailed earlier.

Areas of type IIa collagen expression were measured at ×4 magnification using Bioquant Image Analysis software (Bioquant Image Analysis, Nashville, TN, USA). Using the manual measure function of this software, we identified appropriate areas and outlined them. Three tissue sections per slide were measured, and the average was calculated.
For the quantification of PCNA and TUNEL labeling, we used a method similar to that described by Li et al.(2) Briefly, the length of the fracture callus was measured using the Bioquant software at ×4 magnification. Within the proximal, middle, and distal thirds of the fracture callus, the total number of positive and negative cells was measured in three fields of view at ×63 magnification, for a total of nine fields per tissue section. These measurements were repeated on three tissue sections on the same slide, for a total of 27 measures per sample. The average proportion of positive cells over all fields measured represents the percentage of positive cells within the fracture callus.
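The per-field counting rule above reduces to a one-line computation; a small Python sketch with made-up counts (all numbers below are illustrative only):

    import numpy as np

    def percent_positive(pos_counts, neg_counts):
        # Per-field fraction of labeled cells, averaged over all fields measured
        pos = np.asarray(pos_counts, dtype=float)
        neg = np.asarray(neg_counts, dtype=float)
        return 100.0 * np.mean(pos / (pos + neg))

    # Example: 9 fields from one tissue section (hypothetical counts)
    print(percent_positive([12, 8, 15, 9, 11, 7, 14, 10, 13],
                           [30, 25, 28, 31, 27, 33, 26, 29, 24]))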
Total OCN, OSX, Sox9, and VEGFA areas were quantified, at ×200 magnification, using the SigmaScan Pro 5 software (Aspire Software International, Ashburn, VA, USA). Using the manual measure function of this software, positive areas within the total callus were identified and outlined. Three tissue sections per slide were measured, and the average was calculated.
Time course gene expression analysis for TSP2
WT (C57/B6) mice were placed in groups (n = 4 per time point) and given either 0 (no fracture), 5, 7, 10, 14, 18, or 21 days to heal and were subsequently killed. Fracture calluses were carefully dissected and immediately snap frozen in a liquid nitrogen bath. Frozen tissue samples were homogenized using a liquid nitrogen-cooled mortar and pestle apparatus, and mRNA was purified using TRIzol (Invitrogen, Carlsbad, CA, USA). Single-strand cDNA was synthesized from exactly 0.5 µg mRNA from each sample. Specific primers for β-actin (internal control) and TSP2 were used for real-time PCR analysis (Corbett Research, Carlsbad, CA, USA). Samples were denatured at 94°C for 20 s, annealed at 58°C for 30 s, and amplified at 72°C for 30 s for a total of 30 cycles. C(t) results were compared with β-actin expression, and fold-change (relative to day 0 nonfracture controls) was determined using previously described methodology.(22)

QPCR gene expression analysis for day 5 samples

RNA fold expression levels were calculated using the double ΔCT (ΔΔCT) method, and proper amplicon formation was confirmed by melt curve analysis.
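For reference, the fold-change computation implied by the double ΔCT (Livak) method, with β-actin as the internal control, is:

\[
\Delta C_T = C_T^{\text{target}} - C_T^{\beta\text{-actin}}, \qquad
\Delta\Delta C_T = \Delta C_T^{\text{sample}} - \Delta C_T^{\text{control}}, \qquad
\text{fold change} = 2^{-\Delta\Delta C_T}.
\]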
Statistical analysis
One-way ANOVA was used to assess statistical significance between WT and TSP2-null samples at each time point. Results for the temporal analysis of gene expression were analyzed using one-way ANOVA with Tukey post hoc analysis. Animal numbers for each experiment varied and are indicated in the figure legends.
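A minimal Python sketch of the genotype comparison at a single time point; the two value lists are hypothetical measurements, not data from the study:

    from scipy.stats import f_oneway

    # With two groups, one-way ANOVA is equivalent to an unpaired t-test
    wt_values = [1.2, 1.4, 1.1, 1.3]      # hypothetical WT measurements
    null_values = [1.8, 1.7, 1.9, 1.6]    # hypothetical TSP2-null measurements
    F, p = f_oneway(wt_values, null_values)
    print(F, p, p < 0.05)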
TSP2 is highly expressed in 5-day fracture callus
Previous research has shown that TSP2 is primarily expressed by mesenchymal cells and in tissues of mesenchymal origin but is not expressed by hematopoietic cells or endothelial cells.(20,27,28) TSP2 expression was upregulated in response to fracture (Fig. 2A). Levels of TSP2 increased 100-fold in day 5 fractures relative to unfractured day 0 controls and subsequently declined until day 18, when levels normalized. Immunolocalization showed that TSP2 was expressed broadly in the 5-day fractures. TSP2 expression was highest in the mesenchyme around the fracture site (Fig. 2B). Fluorescence detection was relatively low in day 10 and day 20 calluses, but a positive signal was present relative to the negative control (TSP2-null).
TSP2-null mice show alterations in callus morphology
To determine whether an absence of TSP2 would alter fracture healing, we used high-resolution µCT to precisely measure the callus volume, anterior callus length, maximum width, bone volume, mineral content, and mineral density of healing tibial fractures in TSP2-null mice at 10 and 20 days after fracture (dpf) (Table 1). Because of the low bone content of the callus, day 5 fractures could not be evaluated by µCT. Ten days after fracture, callus bone volume, BVF, BMC, BMD, and TMC in TSP2-null mice were significantly greater compared with the callus in WT mice. These data, considered in conjunction with a voxel Hounsfield distribution histogram (Fig. 1G), showed that the TSP2-null mice have more newly formed bone in the callus than WT. Callus shape is also different. Specifically, TSP2-null fractures show a greater callus width:length ratio because of a reduction in length. TMD, which reflects the density of the mineral in voxels identified as bone, was significantly less in the TSP2-null mice than in the WT mice (Table 1). At 20 dpf, the callus volume, width, and width:length ratio were similar between the genotypes. However, callus length was still slightly reduced in the TSP2-null mice. Surprisingly, BVF and BMD were significantly less in the TSP2-null mice after 20 days of healing (Table 1; Fig. 1G).
To evaluate the functional consequence of the alterations in bone and callus geometry in the absence of TSP2, tibias were evaluated using torsional testing. Again, because of the low level of bone in the calluses of day 5 fractures, mechanical testing was not performed for these specimens. Despite the increase in bone in the TSP2-null fracture calluses at 10 dpf, there were no significant differences in the torsional mechanical properties (Table 2). However, in evaluating calluses 20 dpf, the energy to failure of the TSP2-null calluses was significantly decreased.
TSP2-null mice show a reduction in callus cartilage
We next evaluated fractures histologically to gain more insight into the alterations in callus geometry and bone content. Qualitatively, there was an obvious reduction in the amount of cartilage in the TSP2-null mice at 10 dpf (Fig. 3A). When we measured the amount of safranin-O-positive cartilage using histomorphometry, the results indicated that fractures in the TSP2-null animals had 40% less cartilage (Fig. 3B). However, there was no difference in the percentage of cartilage that was composed of hypertrophic chondrocytes between the genotypes, suggesting that the progression of cartilage maturation is similar.
At day 20, both WT and TSP2-null specimens had very little cartilage. However, even at this time point, all of the WT samples showed some small amount of cartilage, whereas only one half of the TSP2-null specimens contained cartilage (results not shown).
TSP2-null fractures show differences in callus size and mesenchymal cell proliferation
Because the fractures of the WT and TSP2-null mice already showed substantial changes by day 10 and we were unable to evaluate day 5 fractures using µCT and mechanical testing, we performed a comprehensive histological and gene expression evaluation of fractures at day 5. The fracture calluses in both the WT and TSP2-null mice at 5 dpf were composed of undifferentiated mesenchymal cells without appreciable bone or cartilage, but TSP2-null fractures had a 20% greater area than WT (Figs. 4A and 4B). Previous work has shown that TSP2 regulates mesenchymal cell proliferation,(16) and additional studies have shown that thrombospondins can regulate apoptosis.(29) Recognizing the importance of these processes in the size of the developing callus, PCNA was used to evaluate mesenchymal cell proliferation, and TUNEL staining was used to evaluate mesenchymal cell apoptosis. Calluses from TSP2-null mice showed a 25% increase in PCNA-positive cells compared with those from WT mice (Figs. 4C and 4E), but levels of apoptosis were equivalent (Figs. 4D and 4E).
TSP2-null mice show decreased chondrogenic differentiation and increased osteoblast differentiation at day 5
There is no safranin-O-positive mature cartilage at 5 dpf (Fig. 4A), but mesenchymal cells are beginning to undergo chondrogenic differentiation. Using IHC, we measured the amount of collagen type IIa and Sox9 expression as indicators of early chondrocyte differentiation. Both type IIa collagen and Sox9 expression peak around 5 dpf and diminish with cartilage maturation and hypertrophy.(30,31) The TSP2-null mice had significantly less type IIa collagen-positive area (Figs. 5A and 5B) and reduced expression of Sox9 (Figs. 5C and 5D). To examine osteoblast differentiation of the callus mesenchymal cells, we evaluated osteocalcin and osterix expression. Expression of both osteocalcin (Figs. 5E and 5F) and osterix (Figs. 5G and 5H) was significantly increased in TSP2-null fractures. We also harvested total RNA from day 5 fractures and found that the expression of osteocalcin RNA was significantly increased in the TSP2-null mice, whereas type II collagen RNA was significantly decreased (Fig. 5I). These data show that TSP2-null mice have a reduction in chondrogenic differentiation and enhanced osteoblast differentiation.
TSP2-null fractures show enhanced vascularity and a reduction in markers of hypoxia-inducible factor activity
Thrombospondins are potent regulators of angiogenesis, and previous work has shown that TSP2-null mice show accelerated dermal wound healing(18) and enhanced recovery from ischemia in muscle,(19) caused in part by enhanced vascularity. Because the development of vascularity in the callus is an important part of the fracture healing response, vWF expression within the callus was used to determine vessel density. Importantly, the number of whole blood vessels within the callus that could be identified by their labeling with anti-vWF antibody and the presence of a distinct lumen was 2-fold greater in the TSP2-null mice compared with the WT mice at 5 dpf (Figs. 6A and 6B).
Because hypoxia-inducible factor (HIF) activity is a recognized positive regulator of chondrogenic differentiation,(32) we hypothesized that the increased vascularity in TSP2-null mice would result in reduced hypoxia and lead to a reduction in HIF activity. As a surrogate of HIF activity,(33) we evaluated VEGFA expression using IHC and quantitative RT-PCR and Glut-1 expression using quantitative RT-PCR. VEGF expression was reduced in tissue sections (Figs. 6C and 6D), and both Glut-1 and VEGFA were decreased in TSP2-null fractures as measured by qPCR (Fig. 6E).
Delivery of TSP2 adenovirus to TSP2-null fractures increases chondrogenesis in 10-day fractures
To examine the temporal requirement of TSP2 to influence fracture cell fate, TSP2 was delivered to fractures in TSP2-null mice 3 dpf during the time of mesenchymal cell influx and neovascularization, and fractures were evaluated using histology at day 10. Adenovirus delivered in this manner was effectively taken up by the cells of the mesenchymal callus (Fig. 7A). TSP2 overexpression resulted in enhanced callus cartilage in TSP2-null mice (Fig. 7B).
DISCUSSION
Previous work has shown that TSP2 acts to regulate the time course of dermal wound healing(18) and expedites recovery from muscle ischemia.(19) In this study, we found that TSP2 alters the temporal progression of bone regeneration. Ten days after fracture, TSP2-null mice have substantially less cartilage and an increase in the amount of bone. At day 5, when chondrogenic differentiation is just beginning but there is no mature cartilage, chondrocyte markers are reduced and osteoblast markers are increased in TSP2-null mice. Thus, whereas undifferentiated mesenchymal cells in the central portion of the callus are undergoing differentiation to chondrocytes in WT mice, in TSP2-null mice fewer cells become chondrocytes and more become osteoblasts. Importantly, the ratio of total cartilage that is hypertrophic (mature) cartilage in WT and TSP2-null mice is equivalent
at day 10, suggesting that chondrogenic maturation is normal in the callus of TSP2-null mice.
In the fracture gap, undifferentiated mesenchymal cells that contribute to healing are derived from periosteum, endosteum, surrounding fascia and muscle, and marrow. These mesenchymal cells differentiate to become either osteoblasts or chondrocytes or they undergo apoptosis. The osteoblasts participate in direct intramembranous bone formation, whereas the chondrocytes form cartilage that will subsequently undergo endochondral bone formation.
The molecular mechanisms dictating the fate decision of callus mesenchyme in regenerating bone remain unclear. Whereas a variety of possible factors may contribute to this fate decision, vascularity and oxygen tension in the fracture gap likely play a prominent role. Hypoxia and the resultant induction of Hif1a have been shown to be prochondrogenic.(32,34,35) Indeed, in our study, we concluded that the dominant mechanistic explanation for the shift in mesenchymal cell fate in WT and TSP2-null mice is that a substantial increase in vascular density in TSP2-null mice alters tissue oxygen tension. Enhanced oxemia promotes the differentiation of bipotent cells to become osteoblasts rather than chondrocytes.
Thrombospondins have been extensively studied as endogenous inhibitors of angiogenesis for 20 yr.(36-38) Mice lacking TSP1 and TSP2 show increases in angiogenesis,(24) and overexpression of TSP prevents tumorigenesis in a number of tissues.(39) Furthermore, in wound healing(18) and tissue ischemia,(19) an absence of TSP2 alters healing through an increase in vascularity. Although TSP1 and TSP2 show considerable homology, data suggest that the TSP1 and TSP2 effects on endothelial cells are different. Whereas TSP1 requires binding to CD36 and activation of downstream kinases to induce apoptosis,(40) TSP2 seems to regulate endothelial cell proliferation through a non-CD36-mediated mechanism that requires VLDLR.(20,21) A variety of earlier studies have shown a link between angiogenesis and fracture healing. Inhibition of angiogenesis has been shown to cause a decrease in callus mineralization and callus volume.(5,41) In contrast, treatment of fracture calluses with an adenoviral vector carrying a construct encoding VEGFA, leading to more vascularity, resulted in a reduced amount of cartilage at 2 wk after fracture(42), a phenotype similar to that seen with TSP2-null mice. Indeed, levels of the direct transcriptional targets of Hif (VEGFA and Glut-1)(33) were significantly reduced in TSP2-null mice.

[FIG. 5 caption: The fracture callus of TSP2-null mice 5 days after fracture has reduced chondrogenesis and increased osteoblast differentiation. Immunohistochemistry and histomorphometry were used to examine the differentiation of mesenchymal cells in the 5-day calluses. (A and B) Type IIa collagen and (C and D) Sox9 expression were evaluated to determine areas of neochondrogenesis, whereas (E and F) osteocalcin and (G and H) osterix expression were used to indicate areas of new bone formation (magnification bar = 100 µm). Positive staining is either brown (A) or red (C, E, and G). Blue staining in C, E, and G represents DAPI-stained nuclei. B, bone; C, callus tissue; F, fracture. Values are mean ± SE of WT (n = 13) and TSP2-null (n = 12) mice. *Significantly different from WT; p < 0.05. (I) RNA was extracted from day 5 calluses and gene expression was evaluated using quantitative real-time PCR. Values are mean ± SE of fold-change in TSP2-null (n = 6) compared with WT mice (n = 5); *ΔCT significantly different from WT (p < 0.05).]
It is likely that an increase in Hif1a in oxygen-depleted tissues impacts chondrogenic and osteogenic differentiation. Increased activation of HIF1a increases the differentiation of mesenchymal progenitors to chondrocytes, (32,43) and Hif1 directly regulates Sox9 activity. (44) Conversely, increased HIF decreases markers of osteoblast differentiation in vitro. (45) Thus, in the absence of TSP2 when there is higher vascularity, oxygen tension is increased, and Hif1a levels are reduced, favoring osteogenesis.
Although alterations in vascularization and oxemia offer a reasonable explanation for the TSP2-null fracture callus phenotype, an absence of TSP2 could be influencing fracture phenotype through at least two other mechanisms. Increased proliferation of mesenchymal cells in the callus could have a negative influence on chondrogenic differentiation because, to undergo chondrogenic differentiation, cells must first exit the cell cycle.(46-48) It is likely that TSP2-null cell proliferation does account for the 20% increase in callus size at day 5 and the alteration in the width:length ratio observed at day 10. Second, it is possible that the mesenchymal cells that are recruited into the fracture site of the TSP2-null mice are phenotypically different from the cells of the WT. For example, TSP2-null mice have an increase in marrow CFU-F(16); thus, marrow could make a greater contribution to the healing in TSP2-null than WT mice. These alternative explanations cannot be ruled out at this time and need to be further studied in models where proliferation of cells is manipulated, and by studying earlier time points after fracture at which cell origin can be better discerned.
Surprisingly, despite the significant alterations in callus bone and cartilage after 10 days of healing, we did not detect any changes in the biomechanical properties. Because the biomechanical properties of the healing construct can be attributed both to density and to the amount of tissue, it is plausible that the biomechanical advantage of a higher volume of bone in TSP2-null mice is offset by the decrease in TMD at 10 days. In contrast to these results, the energy to failure was significantly lower in TSP2-null mice after 20 days of healing, whereas the TMD was not different. These results are likely attributable to the decrease in the volume fraction of bone present in the calluses of TSP2-null mice at 20 days. Interestingly, this may suggest that a florid chondrogenic response provides for a mechanically advantageous callus.

[FIG. 6 caption (partial): (C and D) VEGF expression in calluses was evaluated using immunofluorescence (magnification bar = 100 µm). B, bone; C, callus tissue; F, fracture. Values are mean ± SE of WT (n = 13) and TSP2-null (n = 12) mice. *Significantly different from WT (p < 0.05). (E) RNA was extracted from day 5 calluses and gene expression was evaluated using quantitative real-time PCR. Values are mean ± SE of fold-change in TSP2-null (n = 6) compared with WT mice (n = 5). *ΔCT significantly different from WT (p < 0.05).]
The high expression of TSP2 early in fracture healing (Fig. 2) and the phenotype of TSP2-null fractures suggest that TSP2 plays an important role in the regulation of early fracture mesenchyme. Multiple factors must act in concert to influence the fate of early callus mesenchyme. By regulating the balance of proangiogenic factors and factors that regulate mesenchymal cell proliferation and differentiation, we may be able to better influence fracture healing clinically. Indeed, as proof-of-principle, by delivering TSP2 adenovirus, we were effectively able to generate a more chondrogenic fracture callus. | 2014-10-01T00:00:00.000Z | 2009-01-05T00:00:00.000 | {
"year": 2009,
"sha1": "c97c953b4a79806ecc92715850107405561a1d4b",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1359/jbmr.090101",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "c97c953b4a79806ecc92715850107405561a1d4b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
79617874 | pes2o/s2orc | v3-fos-license | Radiological Correlation Between the Anterior Ethmoidal Artery and The Supraorbital Ethmoid Cell in Relation to Skull Base
Background: The anterior ethmoidal artery (AEA) is an important landmark in functional endoscopic sinus surgery. Iatrogenic injury may result in retraction of the artery into the orbit, with intra-orbital bleeding and possible blindness. Computerized Tomography (CT) scans are the gold standard in diagnosing paranasal sinus diseases. These scans are used as road maps while operating on the paranasal sinuses. We undertook this study to determine the reliability of identification of the artery on the coronal CT scan and to determine whether a correlation exists between the pneumatization of the suprabullar recess and the vertical distance of the artery from the base skull. Methods: 80 randomly selected CT scans were studied. The AEA was identified on each side and the vertical distance between the artery and the base skull was measured. The CT scans were divided into two groups based on whether the supraorbital cell was present or absent. These groups were each further subdivided into 3 groups depending on the vertical distance between the anterior ethmoidal artery and the base skull. Result: The AEA was reliably identified in 98.75 % of the cases. When the supraorbital cell was absent, the mean distance between the artery and the base skull was 1.2 mm; while when the cell was present, the mean distance was 4.52 mm which is statistical highly significant (p value < 0.05). Conclusion: The orbital beak and superior oblique muscle are reliable landmarks to identify the anterior ethmoidal artery. There exists a strong correlation between the vertical distance of the artery from the base skull and the presence of the supraorbital ethmoid cell.
Introduction
The lack of knowledge of endonasal surgical techniques, and especially of the relevant anatomy, was the main factor behind the high rate of complications associated with this surgical procedure during the 1980s [1]. Thus, identifying sinonasal anatomic structures and knowing their boundaries are essential for both the efficacy and the safety of nasosinusal endoscopic surgeries, regardless of the technique used [2]. The anterior ethmoidal artery (AEA) is an important landmark in functional endoscopic sinus surgery (FESS) and in endoscopic orbital decompression [3]. Iatrogenic injury to this artery during surgery may cause serious complications, such as intense bleeding, CSF leak, artery retraction towards the intra-orbital region, and orbital hematoma formation. The development of a retro-orbital hemorrhage increases the pressure in this compartment; unless this is decompressed within approximately an hour, it can lead to blindness [4,5].
The anterior ethmoidal artery crosses three cavities: the orbit, the ethmoid labyrinth, and the anterior fossa of the skull. It enters the olfactory fossa through the lateral lamella of the cribriform plate along the anterior ethmoidal sulcus, which is the weakest point of the whole anterior skull base. At this point the bone is extremely thin and is considered a high-risk area in nasal endoscopic surgery. In its course through the ethmoid labyrinth, the position of the anterior ethmoidal artery relative to the ethmoidal roof is very variable; the artery thus becomes vulnerable to injury during surgical procedures [6,7]. Computerized tomography (CT) scans are accepted as the gold standard in diagnosing diseases involving the paranasal sinuses. These scans are used as road maps while operating on the paranasal sinuses. Hence, it is essential for the otolaryngologist to be able to read the CT scan independently [8]. Accurate localization of the AEA pre-operatively would help in avoiding damage to the artery. We undertook this study to determine the reliability of identification of the artery on the coronal CT scan and to determine whether a correlation exists between the pneumatization of the suprabullar recess and the vertical distance of the artery from the skull base.
Materials and Methods
This prospective, descriptive, cross-sectional study was conducted in the Department of Otorhinolaryngology and Head and Neck Surgery and the Department of Radiodiagnosis and Imaging, B.P. Koirala Institute of Health Sciences, Dharan, Nepal, from August 2014 to July 2015. A total of eighty patients (160 AEAs) clinically diagnosed with chronic rhinosinusitis, who underwent CT scanning of the nose and paranasal sinuses, were enrolled in this study. Patients aged below 12 years and those with a history of surgery or trauma of the paranasal sinuses or the skull base, congenital anomalies of the face, paranasal sinus malignancies, or osteofibrous lesions were excluded from the study. A detailed medical history was taken, thorough clinical examinations were performed, and proformas were filled out. Ethical approval was obtained from the Institutional Ethical Review Board (IERB) of B.P. Koirala Institute of Health Sciences. Written informed consent was taken from participants prior to the study.
CT scans were performed using a 16-slice multi-detector CT scanner (Siemens). The slice thickness of the scans was 3 mm. The coronal planes were acquired with patients in ventral decubitus, using sections perpendicular to the hard palate, from the anterior border of the frontal sinus to the anterior border of the clivus. The AEA was identified on coronal CT scans (bone windows) on each side, and its distance from the skull base was measured individually. The bony canal of the AEA was identified running across the ethmoidal cavity. The CT scans were divided into two groups: those with a supraorbital cell (SO cell) and those in which the SO cell was absent. Each group was further subdivided into three subgroups based on the distance of the AEA from the skull base as follows: Group I, < 2.5 mm; Group II, 2.5-5 mm; and Group III, > 5 mm.
Data from the filled proformas were entered in Microsoft Excel 2007 (Microsoft, Redmond, WA, USA) and analyzed using SPSS (Statistical Package for the Social Sciences) version 16 for Windows. A descriptive analysis was made of the frequency distribution of qualitative variables. The chi-square test or Student's t-test was applied as appropriate for comparing the prevalence of categorical variables. P values of 0.05 or below were considered statistically significant.
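The chi-square comparison reported in the Results can be reproduced from the published counts. The following is a minimal sketch, assuming Python with SciPy (our tooling choice, not the authors'); the 2×2 counts are those reported below for Table 3.

# Sketch of the statistical comparisons described above, using SciPy.
# The 2x2 counts come from the Results section (Table 3): sides with the
# AEA < 2.5 mm vs. >= 2.5 mm from the skull base, split by whether a
# supraorbital (SO) cell is absent or present.
from scipy.stats import chi2_contingency, ttest_ind

# rows: SO cell absent, SO cell present; cols: AEA < 2.5 mm, AEA >= 2.5 mm
table = [[78, 10],
         [8, 60]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

# The group means (4.91 mm vs. 1.3 mm) were compared with Student's t-test;
# per-side distance measurements (not published) would be needed, e.g.:
# t, p = ttest_ind(distances_so_present, distances_so_absent)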
Result
The CT scans of the nose and PNS of the eighty patients were studied, and the following observations were made. Ages ranged from 15 to 65 years; chronic rhinosinusitis was most common in the age group of 30 to 40 years, with 30 (37.5%) of the 80 patients in this group. Forty-six (58%) of the patients were female and 34 (42%) were male, giving a female-to-male ratio of 1.3:1. The commonest mode of presentation was nasal obstruction (77.5%), followed by nasal discharge (42.5%), headache (31.25%), anosmia (26.25%), nasal mass (22.5%), and halitosis (18.75%).
A total of 160 anterior ethmoidal arteries were examined; 156 (97.5%) could be identified and 4 (2.5%) could not [Table 1]. Most AEAs were found below the skull base level (105; 67.3%), with the remainder at the skull base level (51; 32.7%) [Table 2].
When the SO cell was absent [Fig. 1], the anterior ethmoidal artery was seen close to the skull base (< 2.5 mm) in 88.63% (78 out of 88) of the sides, while in the remaining 11.37% of the sides it was seen at a distance of more than 2.5 mm (Groups II and III) from the skull base. However, when the SO cell was present [Fig. 2], the anterior ethmoidal artery was seen at a distance of more than 2.5 mm (Groups II and III) from the skull base in 88% (60 out of 68) of the sides, whereas it was close to the skull base in only 12% of the sides (Group I) [Table 3]. When these two groups were analyzed for statistical significance (using the chi-square test), the p value was 0.0001, which is statistically highly significant.
When the SO cell was present, the mean distance of the anterior ethmoidal artery from the skull base was 4.91 mm, whereas when the SO cell was absent, the mean distance was 1.3 mm [Table 4]. When the mean distances of the AEA from the skull base for each group were compared (using the t-test), we found that the p value was 0.0001, which is statistically highly significant.
Discussion
Correctly identifying the position of the anterior ethmoidal artery prior to surgery helps minimize the complications that can occur in endoscopic sinus surgery, such as perioperative bleeding, retro-orbital hemorrhage, and a CSF leak. As Ohnishi et al. point out, the area of the anterior ethmoidal artery is prone to surgical complications [9]. A thorough review of the literature revealed a few studies on the position of the AEA in the ethmoid cavity [2-4,6,7]. These studies indicate that the vertical distance of the AEA from the skull base offers a simple and reliable means of identifying the AEA on the CT scan. In addition, we present our data on the variability of the vertical distance of the AEA from the skull base. We have also found a correlation between the position of the AEA and the presence or absence of the supraorbital ethmoid cell.
We were able to identify the anterior ethmoidal artery in 97.5% of our patients. This figure is comparable to the international literature. Joshi AA et al. studied 50 patients and were able to identify the anterior ethmoidal artery in 97% of them [8]; similarly, McDonald SE et al. found that the anterior ethmoidal foramen was visualized bilaterally in 95 per cent of cases [3].
We used coronal CT scans in bone windows to identify the AEA. The AEA was identified at the level of the orbital beak formed at the junction of the superior and medial orbital walls. Another prominent landmark at this site was the presence of the superior oblique muscle in the orbit.
Also, the orbital beak is a constant bony landmark when compared to the vertical attachment of the middle turbinate which can be variable in position. However, the most important advantage of this method is that these landmarks are preserved even in extensive pathologies of the paranasal sinuses [8] .
In our present study, only 32.7% of the anterior ethmoidal arteries lay in the skull base, while the remaining 67.3% had a mesentery by which they were suspended below the skull base. The study by Joshi AA et al. showed that only 20% of the arteries lay in the skull base, while the remaining 80% were suspended below it by a mesentery [8]. These figures indicate that the AEA is at risk during surgery in the majority of cases if it is not assessed carefully on the CT scan.
The presence of a supraorbital cell influences the relationship between the anterior ethmoidal artery and the skull base, as does the pneumatization of the ethmoid sinuses; the supraorbital cell is found in a variable percentage of patients. There exists a strong correlation between the vertical distance of the AEA from the skull base and the presence of the supraorbital ethmoid cell. In our study, when the SO cell was absent, the anterior ethmoidal artery was close to the skull base (< 2.5 mm) in 88.63% (78 out of 88) of the sides, while in the remaining 11.37% of the sides it was at a distance of more than 2.5 mm (Groups II and III). However, when the SO cell was present, the artery was at a distance of more than 2.5 mm (Groups II and III) in 88% (60 out of 68) of the sides, and close to the skull base in only 12% of the sides (Group I). When these two groups were analyzed for statistical significance (using the chi-square test), the p value was 0.0001, which is statistically highly significant. These findings are comparable with the study by Joshi AA et al., who found that when the SO cell was absent, the artery was close to the skull base (< 2.5 mm) in 75.9% (41 out of 54) of the sides, while in the remaining 24.1% it was at a distance of more than 2.5 mm (Groups II and III); when the SO cell was present, the artery was at a distance of more than 2.5 mm (Groups II and III) in 86% (37 out of 43) of the sides, and close to the skull base in only 14% of the sides (Group I) [8]. This study showed that when the SO cell was present, the mean distance of the anterior ethmoidal artery from the skull base was 4.91 mm, whereas when the SO cell was absent it was 1.3 mm. When the mean distances of the AEA from the skull base for each group were compared (using the t-test), the p value was 0.0001, which is statistically highly significant. This means that in cases where the supraorbital ethmoid cell is present, the AEA crosses the ethmoid cavity at a much lower level than when the supraorbital ethmoid cell is absent.
Hence it must be kept in mind that the anterior ethmoidal artery is more susceptible to injury in cases when the supraorbital ethmoid cell is present. The position of the anterior ethmoidal artery may show variations between the two sides in a single patient. The possibility of such a variation must be known to the endoscopic sinus surgeon. Identification of the anterior ethmoidal artery preoperatively on the CT scan will help to minimize chances of damage to the artery during surgery.
Conclusion
The orbital beak and superior oblique muscle are reliable landmarks for identifying the anterior ethmoidal artery. There exists a strong correlation between the vertical distance of the artery from the skull base and the presence of the supraorbital ethmoid cell. | 2019-03-17T13:12:31.997Z | 2018-03-04T00:00:00.000 | {
"year": 2018,
"sha1": "f7251674cc164bdc44ca3f256f483679e22640d0",
"oa_license": "CCBY",
"oa_url": "https://www.pacificejournals.com/journal/index.php/aams/article/download/aams1892/pdf_12",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f2d84cace30aa6c2d33eb0657f823dd0676f7565",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15273564 | pes2o/s2orc | v3-fos-license | Evaluation of Physical and Mechanical Properties of Porous Poly (Ethylene Glycol)-co-(L-Lactic Acid) Hydrogels during Degradation
Porous hydrogels of poly(ethylene glycol) (PEG) have been shown to facilitate vascularized tissue formation. However, PEG hydrogels exhibit limited degradation under physiological conditions which hinders their ultimate applicability for tissue engineering therapies. Introduction of poly(L-lactic acid) (PLLA) chains into the PEG backbone results in copolymers that exhibit degradation via hydrolysis that can be controlled, in part, by the copolymer conditions. In this study, porous, PEG-PLLA hydrogels were generated by solvent casting/particulate leaching and photopolymerization. The influence of polymer conditions on hydrogel architecture, degradation and mechanical properties was investigated. Autofluorescence exhibited by the hydrogels allowed for three-dimensional, non-destructive monitoring of hydrogel structure under fully swelled conditions. The initial pore size depended on particulate size but not polymer concentration, while degradation time was dependent on polymer concentration. Compressive modulus was a function of polymer concentration and decreased as the hydrogels degraded. Interestingly, pore size did not vary during degradation contrary to what has been observed in other polymer systems. These results provide a technique for generating porous, degradable PEG-PLLA hydrogels and insight into how the degradation, structure, and mechanical properties depend on synthesis conditions.
Introduction
Hydrogels have been investigated extensively for tissue engineering applications primarily due to mechanical properties of similar magnitude to many soft tissues [1], [2]. Poly(ethylene glycol) (PEG)-based hydrogels have received significant attention due to their biocompatibility, the relatively straightforward options for incorporation of peptide adhesion sequences [3] and growth factors [4], and the ability to control mechanical properties based on polymerization conditions. While significant amounts of research have shown how modifications in the chemical composition of PEG hydrogels can modulate biological response, there has been little research into the role of the physical architecture.
Porous structure of biomaterials has been shown to play a role in regulating cell response and tissue integration. However, unlike ECM where cells can often migrate through pores between the solid structure, the cross-links in PEG hydrogels must be cleaved to enable cells to migrate and tissue to invade [1,8]. Introduction of pores into PEG hydrogels could be used to improve biological response and lead to improved outcomes in biomedical applications. The introduction of pores not only provides more space for cell migration and tissue invasion but also increases the surface area to volume ratio which can enhance cell seeding [5] and enable more efficient mass transport. [6] A number of strategies have been employed to generate porous materials, including gas foaming [7], polymer-polymer immiscibility [8], and particulate leaching [9]. These techniques are most commonly applied to hydrophobic polymer scaffolds, but recent studies have demonstrated that hydrogels with an interconnected porous structure can be produced using particulate leaching techniques [10,11]. PEG hydrogels generated with this technique support the formation of vascularized tissue in vivo in a pore size dependent manner [1]. While these materials have shown promise for tissue engineering, PEG hydrogels do not exhibit significant degradation in vivo.
PEG hydrogels can be made degradable under physiologic conditions through the inclusion of hydrolysable monomer units [12] or peptide sequences that are degraded by cell enzymes [12,13]. While enzymatically degradable PEG hydrogels are popular as ECM mimics, they restrict cell migration and tissue invasion to enzymatic processes and can result in significant intrasubject variability. Materials degraded by hydrolysis allow for more controlled and less variable degradation kinetics. Poly(L-lactic acid) (PLLA) is a hydrophobic, biodegradable polymer [14,15] that can be introduced into PEG systems to allow for controlled degradation via hydrolysis [16]. Hydrogels formed by polymerization of poly(ethylene glycol)-co-(L-lactic acid) diacrylate (PEG-PLLA-DA) degrade into products that are easily processed by the body [17]. Porous PEG-PLLA-DA hydrogels could maintain many of the advantages of porous PEG systems while exhibiting controlled degradation properties.
While various biodegradable porous scaffolds have been studied extensively [18,19], these studies have largely focused on hydrophobic foams. There has been little evaluation of the structure of porous hydrogels and the influence of the degradation process on their properties. In addition, the majority of imaging techniques require destruction or modification of the samples from their native state in order to image them. Autofluorescence exhibited by PEG-PLLA-DA hydrogels [20] allows the unique opportunity to monitor the 3D architecture of a porous hydrogel during the degradation process under fully swelled conditions.
Our goal is to optimize the design of porous hydrogels that coordinate the process of vascularized tissue invasion with polymer degradation. In order to achieve this goal, we must first gain an understanding of the properties of porous hydrogels and how they change during degradation. Here, we applied a particulate leaching technique to generate porous PEG-PLLA-DA hydrogels and examined the influence of polymer concentration and particulate size on the mechanical properties, pore structure, and degradation rate of the resultant hydrogels. To our knowledge, this is the first study able to evaluate the structure of porous hydrogels during degradation without sample processing or labeling with exogenous agents. This information could be used to help optimize hydrogel design for applications in tissue engineering.
Synthesis of PEG-PLLA-DA
The method to synthesize PEG-PLLA-DA was performed as described by Chiu et al. [21]. Briefly, all glassware was dried in a vacuum oven at 120 °C for 24 hours and cooled under vacuum. Ten grams of PEG (MW = 3400) and 2.12 g of 3,6-dimethyl-1,4-dioxane-2,5-dione were placed into a 50 mL centrifuge tube and lyophilized. A round-bottom flask was evacuated and filled with argon. The lyophilized PEG and 3,6-dimethyl-1,4-dioxane-2,5-dione were placed in the flask, and then 80 μL of stannous octoate was added as an initiator. The flask was submerged in a constant-temperature oil bath at 140 °C for 4 h. The product was then dissolved in 20 mL of anhydrous DCM and filtered using a GF/F filter. The resulting polymer was precipitated in 1.5 L of ice-cold diethyl ether three times and lyophilized. Based on ¹H NMR analysis, these conditions result in approximately 10 lactide units per PEG macromer.
The lyophilized PEG-PLLA was acrylated as described previously [21]. Briefly, a three-neck round-bottom flask was evacuated and filled with argon. Ten grams of PEG-PLLA was placed in the flask. Sixty mL of anhydrous DCM was injected into the flask, and 0.67 mL of triethylamine was added and stirred for 5 minutes. Acryloyl chloride (0.76 mL) was added dropwise. The flask was allowed to react for 24 hours at room temperature in the dark. The product was washed with 9.52 mL of 2 M K2SO4 and allowed to separate overnight. The organic phase was collected and precipitated in 2 L of ice-cold diethyl ether. The extent of reaction, structure, and purity of the products were determined by ¹H NMR (Avance, 300 MHz; Bruker, Billerica, MA). Products were dissolved in CDCl3 for ¹H NMR, with 0.05% v/v tetramethylsilane (TMS) used as an internal standard. Acrylation efficiency was 93 ± 2%.
Porous PEG-PLLA-DA Hydrogel Generation
The method for generating porous PEG-PLLA-DA hydrogels involved a salt-leaching procedure with the polymer dissolved in an organic solvent. Lyophilized PEG-PLLA-DA polymer was dissolved in 1 mL of DCM, and 2-hydroxy-2-methyl-propiophenone was added as a photoinitiator (5% (w/v)). 250 mg of sieved salt and 250 μL of precursor were placed in a 1.5 mL centrifuge tube. The tube was vortexed for 45 seconds and placed upside down, allowing the salt to settle into the cap for 20 seconds. The concentration of PEG-PLLA-DA was varied from 12.5 to 50% (w/v), and the salt crystals used were selected by sieving in the following ranges: 150-100, 100-50, and 50-25 μm.
A microscope slide was used to cover the solution, carefully avoiding bubble formation. The solution was polymerized by irradiation under UV for 10 minutes. The sample was rotated 180° and polymerized for an additional 10 minutes. The microscope slide was removed and the DCM evaporated in a fume hood overnight. Resulting gels were placed in a 50 mL sterile centrifuge tube with 20 mL DI water with 4 mg/mL of gentamicin sulfate, and then immediately exposed to a vacuum (0.035 mbar) for 15 minutes to remove air trapped in the porous gels and to replace DCM with water. Water was changed 2 times a day until the salt was completely leached out.
Swelling Tests
The porous PEG-PLLA-DA hydrogels were placed in individual 15 mL tubes with 5 mL of PBS (2% sodium azide and 4 mg/mL of gentamicin) and incubated at 37 °C. PBS was changed every day until the hydrogels completely degraded. Porous PEG-PLLA-DA hydrogels were weighed at various time points.
Structural Analysis
The structure of the porous PEG-PLLA-DA hydrogels could be imaged by confocal microscopy due to autofluorescence exhibited by the hydrogels [20]. A PASCAL laser scanning microscopy system from Carl Zeiss MicroImaging, Inc. (Thornwood, NY) was used for confocal imaging. The hydrogel was imaged using a 488 nm laser with a 505 nm long-pass filter. Images had x and y resolution of 3.5 μm/pixel and z resolution of 1.8 μm/pixel. The samples were scanned 180 μm deep from the surface at 10 μm intervals, collecting 18 slices in total. Each stack was imported into AxioVision 4.5 (Carl Zeiss, Göttingen, Germany) to allow quantification of pore size. Pore size was defined as the longest axis of a given pore and was selected with the built-in caliper tool in AxioVision. Ten pores were selected in each 10 μm thick slice, and the pore sizes were pooled with the values obtained from the other slices. The average of the pooled values is the pore size value for that sample at that time point. This process was repeated for all samples at each time point.
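The caliper measurements above were performed manually in AxioVision. As an illustration only (assuming scikit-image and a thresholdable autofluorescence signal; the function and variable names are our own), an automated version of the same per-slice pore-size quantification could look like this:

# Illustrative sketch: estimating pore sizes from a confocal z-stack.
# The study used manual caliper measurements in AxioVision; this automated
# alternative is shown only to make the quantification step concrete.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def pore_sizes(z_stack, px_um=3.5):
    """Return major-axis lengths (um) of pores in each slice of a stack.

    z_stack: array of shape (n_slices, H, W); bright = polymer, dark = pore.
    """
    sizes = []
    for img in z_stack:
        t = threshold_otsu(img)
        pores = img < t                      # pores are non-fluorescent
        for region in regionprops(label(pores)):
            if region.area > 25:             # ignore speckle noise
                sizes.append(region.major_axis_length * px_um)
    return np.array(sizes)

# Mean pore size for one sample/time point, mirroring the pooling above:
# print(pore_sizes(stack).mean())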
Compression Testing
Compression testing was conducted at a constant strain rate of 0.5 mm/min using an RSA3 (TA Instruments) [9], [22]. Samples were formed to match the plate size (15 mm) and then compressed. The strain and normal force were recorded and used to calculate the compressive modulus for each sample. The initial diameter and area of each gel were measured and recorded before testing. Compressive moduli of the gels were found by plotting a stress-strain graph with the strain going up to 0.1, or 10%. Strain zero corresponds to the first acceptable stress value. None of the gels fractured during compression.
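Since the modulus is read off as the slope of the stress-strain curve up to 10% strain, the calculation can be made concrete with a short sketch (assuming NumPy; the variable names, units, and synthetic data are illustrative assumptions, not the instrument's output format):

# Sketch of the modulus calculation described above: the compressive
# modulus is taken as the slope of the stress-strain curve over 0-10% strain.
import numpy as np

def compressive_modulus(force_N, displacement_mm, diameter_mm, height_mm):
    area_mm2 = np.pi * (diameter_mm / 2.0) ** 2
    stress = force_N / area_mm2               # MPa (N/mm^2)
    strain = displacement_mm / height_mm
    mask = strain <= 0.10                     # linear fit up to 10% strain
    slope, _ = np.polyfit(strain[mask], stress[mask], 1)
    return slope                              # modulus in MPa

# Example with synthetic data for a 15 mm diameter, 5 mm tall gel:
strain = np.linspace(0, 0.1, 50)
force = 0.05 * strain * np.pi * 7.5**2        # corresponds to ~0.05 MPa
print(compressive_modulus(force, strain * 5.0, 15.0, 5.0))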
RGD Conjugation
The method for conjugation of peptides to acryl-PEG was performed as described previously [21]. A solution of 50 mM NaHCO3 (pH 8.3) was prepared as a buffer. Ten milligrams of YRGDS (American Peptide, Sunnyvale, CA) was dissolved in 5 mL of 50 mM NaHCO3. Acryl-PEG-SVA (3400 Da; Laysan, Arab, AL) was dissolved in 7 mL of 50 mM NaHCO3 and then added dropwise into the stirred YRGDS solution in the dark. The molar ratio of YRGDS to acryl-PEG-SVA was 1:1.5. The solution was stirred for 2 h at 4 °C in the absence of light. The final product was dialyzed (2000 Da molecular weight cut-off) in 2 L of DI water for 24 h (with replacement after 12 h). The resulting product was lyophilized and stored at −80 °C until use.
Cell Culture
The cell culture and cell seeding methods have been described previously [10]. Briefly, NIH 3T3 fibroblasts (Cambrex, Walkersville, MD) were maintained in complete media (Dulbecco's modified Eagle's medium, 10% fetal bovine serum, and 1% penicillin-streptomycin). The cells were passaged when flasks reached 90% confluency. Gels were placed into 48-well plates and incubated in complete media for 1 h. After removing the media, gels were air-dried in a culture hood for 1 h. Five thousand PKH26-stained 3T3 fibroblasts in 0.5 mL were added directly to the gel surface. Samples were incubated at 37 °C, 5% CO2 overnight and imaged at varying time points. Gels were imaged using confocal microscopy (488 nm laser with a 505 nm long-pass filter).
Statistics
Data are presented as means ± standard deviation. Significant differences between groups of data were determined by analysis of variance with the Holm-Sidak post-test. In all cases, p < 0.05 was considered statistically significant.
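A minimal sketch of this analysis pipeline (assuming SciPy and statsmodels; the group arrays are placeholder values, not the study's data) is:

# Sketch of the statistics described above: one-way ANOVA across groups,
# followed by Holm-Sidak correction of pairwise t-tests.
import numpy as np
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind
from statsmodels.stats.multitest import multipletests

groups = {"12.5%": np.array([0.021, 0.025, 0.023]),
          "25%":   np.array([0.048, 0.052, 0.050]),
          "50%":   np.array([0.110, 0.118, 0.114])}

F, p = f_oneway(*groups.values())
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))
raw_p = [ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm-sidak")
for (a, b), q, r in zip(pairs, adj_p, reject):
    print(a, "vs", b, f"adjusted p = {q:.4f}", "significant" if r else "n.s.")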
The Influence of Polymer Concentration on Polymer Properties
The autofluorescence of PEG-PLLA-DA allows imaging of hydrogel structure nondestructively using confocal microscopy [20]. This is highly advantageous relative to traditional techniques for characterizing material structure, such as scanning electron microscopy (SEM), because it avoids processing of samples by drying or fixation that may alter the architecture. We first investigated the effect of polymer concentration on hydrogel properties. Samples were generated with the same range of particulate size (150-100 μm) at varying polymer concentrations of 12.5, 25, and 50% PEG-PLLA-DA. Figure 1 displays confocal images of 12.5% (w/v) porous PEG-PLLA-DA hydrogels at days 1, 3, and 7. The structure of porous hydrogels at different time points can also be seen for 25% (Fig. 2) and 50% hydrogels (Fig. 3).
The confocal images show a porous structure in all hydrogels throughout the degradation process. At the first time point, the hydrogels exhibited pores with structure and size consistent with the salt crystals used as the pore-forming agent. Interestingly, the intensity of the autofluorescence decreased as the hydrogels degraded. For example, at early time points (Fig. 2 A&D), confocal images were bright and hydrogel structures could be easily discerned. By day 14, the intensity of the images had decreased and borders appeared blurry (Fig. 2 C&F). The 50% group (Fig. 3B) had higher autofluorescence than the 25% group (Fig. 2C), likely due to its higher polymer content. However, even with the lower fluorescence signal at the later time points, quantification and analysis of scaffold architecture were still possible from the confocal images (Fig. 4). The size of the pores remained constant throughout the majority of the degradation time for all conditions. Prior to complete degradation of the hydrogels, there was a slight increase in mean pore size.
The wet weight of the hydrogels was also quantified, which provides information on the degradation rate (Fig. 5). The initial wet weight depended on the percentage of polymer used and increased as the materials degraded. The time to complete degradation varied with polymer concentration, with the 12.5, 25, and 50% gels degrading in 7, 16, and 26 days, respectively.
The Influence of Polymer Concentration on Mechanical Properties
The compressive moduli of 12.5, 25, and 50% porous PEG-PLLA-DA hydrogels, generated with salt sizes ranging from 150-100 μm, were quantified. Figure 6 shows typical curves generated for the hydrogels, illustrating the rapid decrease in stiffness as the hydrogels degrade. At day 1 there were significant differences in mechanical properties between 12.5, 25, and 50% porous PEG-PLLA-DA hydrogels at the same pore size (150-100 μm) (Fig. 7). The compressive modulus was higher for hydrogels with greater polymer content. The compressive moduli decreased rapidly as the hydrogels degraded (Fig. 7).
The Influence of Particulate Size on Pore Structure and Degradation
We also investigated the effect of pore size on the mechanical and degradation properties of the hydrogels. Twenty-five percent PEG-PLLA-DA hydrogels were generated with salt crystal sizes ranging from 50-25 μm, 100-50 μm, and 150-100 μm. The structure of the hydrogels imaged under swelled conditions can be seen in Figure 8. In all cases, an interconnected porous structure is apparent, with pore size increasing with the size of the salt crystals used. The size and shape of the pores within the hydrogels were consistent with the salt crystals used as the pore-forming agent.
Pore sizes agreed well with salt crystal sizes at day 1 (Fig. 9), and the mean pore size remained constant for most of the degradation process, with a slight increase prior to complete hydrogel degradation (Fig. 9). The wet weight initially increased as the hydrogels degraded and then decreased towards the end of the process. The degradation rate of the hydrogels did not appear to depend on pore size, as all conditions degraded in 15-17 days (Fig. 10).
The Influence of Particulate Size on Mechanical Properties
The stiffness of the gels decreased throughout degradation (Fig. 11). The compressive moduli were significantly different between the different pore sizes at time points up to one week (p < 0.001). Gels with smaller pores were stiffer than gels with larger pores, and the stiffness of the gels decreased rapidly throughout degradation.
The Influence of Copolymer Concentration on Cell Adhesion
A cell adhesion sequence (YRGDS) was incorporated in the hydrogels to determine whether the porous gels could support the adhesion of cells. Fibroblasts spread and lined the edges of pores in all polymer conditions (Fig. 12). Cells on 50% porous hydrogels made with crystal size 150-100 μm were imaged over time (Fig. 13). At day 1, cells appeared to line the edge of the pores (Fig. 13A). The cell organization changed as the gels degraded and, by day 22, cells had formed multicellular aggregates within the gels prior to complete degradation (Fig. 13C).
Discussion
The ability to modulate and control tissue response to implanted biomaterials is essential to the fields of tissue engineering and regenerative medicine. We have previously investigated porous PEG hydrogels and found that these hydrogels support vascularized tissue formation [1]. Under the conditions investigated, hydrogels with pores ranging from 150-100 μm supported the most rapid vascularization in vitro and in vivo. In these studies, there were no signs of degradation exhibited by either porous or nonporous PEG hydrogels. However, the success of these materials in clinical applications requires that they degrade in a controlled fashion as new vascularized tissue develops.
Hydrogels generated from PEG-PLLA-DA copolymers have been investigated in many applications in regenerative medicine. Materials based on PEG-PLLA copolymers have been applied as biological coatings, tissue engineering scaffolds, and drug delivery systems, but there has been little investigation into the design and optimization of PEG-PLLA-DA hydrogels with porous structure [23]. The polymer conditions used here are similar to those that have been described in other studies [24]. However, the generation of pores offers a number of advantages over nonporous structures. This includes enhanced nutrient transport and higher surface area to volume ratio. While these hydrogels do not allow invasion via protease-mediated degradation [25], the reliance on hydrolytic degradation allows the potential to decouple tissue invasion from hydrogel degradation. In this study, we generated porous PEG-PLLA-DA hydrogels by solvent casting with DCM, particulate leaching, and photopolymerization. This particulate leaching technique has been commonly used for hydrophobic polymer foams, but we show that it also serves as a simple method for generating pores in PEG-PLLA-DA hydrogels.
Studies have been performed examining the structure of hydrophobic polymer foams during degradation, but research has not been performed on porous hydrogel systems. The equal availability of water throughout the polymer volume results in differences in structural changes as the materials degrade relative to hydrophobic materials. Autofluorescence exhibited by PEG-PLLA-DA allowed the unique ability to characterize 3D polymer structures when they are fully swelled, which are the conditions used in bioreactors and cell culture [10], [20]. The origin of this autofluorescence is still not clear, but it appears to result from a synergistic effect of both the lactate units and the diacrylate groups in the PEG-PLLA-DA backbone. However, the fluorescence not only allows imaging of the polymer structure with confocal microscopy but can also be exploited to monitor degradation, as the intensity is proportional to the number of PEG-PLLA chains present [20].
The hydrogels exhibited an interconnected porous structure, with initial pore size correlating well with the size of the particulates selected. The pore size and structure remained consistent throughout degradation and did not depend on polymer concentration. In studies with hydrophobic polymer foams, pores have been shown to decrease in size and number while the scaffolds degraded [26]. In addition, the overall architecture of the pores in PLGA scaffolds changes, losing the crystal shape that results initially from the particulate leaching technique. The hydrogels used in this study, however, maintained their size and structure as they degraded. While the dissolution of porous polymer foams exhibits a bulk degradation mechanism, the change in pore structure occurs because the chains on the surface of a pore have greater access to water than those within the structure. This results in more rapid degradation at the surface of the pore and a change in size throughout degradation. Consistent with experimental and computational models of nonporous PEG-PLLA hydrogels [27,28], the porous hydrogels exhibit a bulk mechanism of degradation, which means that the overall structure, including pore size, is maintained up to the point of a nearly instantaneous dissolution of the final volume. The rapid decrease observed in the wet weight of the 25% hydrogel is an indication of this rapid dissolution. However, unlike the polymer foams, hydrogels rapidly absorb water, resulting in all chains having equal access to water whether they are on the surface of a pore or part of the bulk structure. Swelling ratio studies support the concept that pore size does not influence access to water. Hydrogels swell as they degrade [29], and the wet mass was independent of pore size at all time points. The equal access of the polymer chains to water allows maintenance of pore structure as the hydrogels degrade until they reach the point of complete dissolution.
[Displaced Figure 5 legend: Wet weight of porous PEG-PLLA-DA hydrogels versus time for hydrogels generated with salt size ranging from 150-100 μm at various polymer concentrations (*indicates statistical difference between all groups at that time point, p < 0.001). The significant reduction in weight seen on day 15 for the 25% gels reflects the fact that these gels were highly degraded and greatly reduced in size at that time point. doi:10.1371/journal.pone.0060728.g005]
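To make this bulk-degradation picture concrete, a toy pseudo-first-order model (our illustration, not a model fitted by the authors; all constants are invented) shows how stiffness can decay steadily while pore geometry persists until an abrupt dissolution once the network falls below a percolation threshold:

# Illustrative toy model of bulk degradation (not from the paper): every
# ester bond sees water equally, so crosslink density decays uniformly with
# pseudo-first-order kinetics, modulus tracks crosslink density, and the
# network dissolves abruptly once density falls below a percolation
# threshold -- while pore geometry stays unchanged until that point.
import numpy as np

k = 0.15            # 1/day, invented hydrolysis rate constant
E0 = 50.0           # kPa, invented initial compressive modulus
threshold = 0.05    # surviving crosslink fraction at reverse gelation

for t in np.arange(0, 26, 5):
    frac = np.exp(-k * t)               # surviving crosslink fraction
    if frac > threshold:
        print(f"day {t:2d}: E ~ {E0 * frac:5.1f} kPa, pores intact")
    else:
        print(f"day {t:2d}: network dissolved")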
The compressive moduli of the hydrogels depended on both pore size and polymer content. As expected, the modulus increased with polymer content, which agrees with literature showing increasing crosslink density with polymer content [30]. In addition, the stiffness decreased with increasing pore size. This result agrees with previous studies which investigated the influence of pore size on porous poly(propylene fumarate) scaffolds [9]. Increasing polymer concentration could increase the mechanical properties of the hydrogels while keeping pore size constant. The mechanical properties also diminished rapidly during incubation, suggesting a bulk mechanism of degradation, which is consistent with the swelling and pore size observations. We have previously shown that pore size plays an important role in biological response to porous PEG-based hydrogels [1]. However, mechanical properties and degradation time also contribute to biological response [31], [32]. While our results suggest varying polymer concentration may allow for the design of hydrogels with mechanical properties independent of pore size, it does not appear that the degradation time can be easily decoupled from mechanical properties using this approach.
Porous hydrogels supported cell attachment under all conditions following the incorporation of cell adhesion sequences. As the hydrogels degraded, the structure of the cells in the gels changed. The cells eventually assembled into aggregates before the complete degradation of the hydrogels. The change in cell morphology is somewhat surprising considering that the pore size and structure do not change as the gels degrade. However, the reduction in hydrogel stiffness and a possible local change in ligand density with degradation could influence cell behavior. It is well established that cell migration is influenced by hydrogel stiffness [33], and cells exhibit a round shape and less stress-fiber formation in polyacrylamide gels with lower stiffness [34]. Cell behavior is also dependent on ligand density [35], which may change locally as degradation reduces the dangling ligands present in the gels [36]. These results suggest how changes in hydrogel properties, despite constant pore structure, could change biological response to the materials. Future studies will examine cell behavior and tissue formation within porous PEG-PLLA-DA hydrogels and the role of stiffness and ligand density in the response.
Conclusion
A technique was developed for the application of salt leaching methods to generate porous PEG-PLLA-DA hydrogels. This study demonstrates the influence of polymer concentration and pore size on the mechanical, structural, and degradation properties of these porous hydrogels. The architecture of porous PEG-PLLA-DA hydrogels was monitored without drying or destruction by using the polymer's intrinsic fluorescence. Interestingly, this allowed for the determination that pore size and structure remained constant during degradation. Results from this study provide a better understanding of the mechanical properties and architecture of porous PEG-PLLA-DA hydrogels during degradation, which helps in the design of biodegradable, porous scaffolds for tissue engineering. | 2018-04-03T02:48:13.425Z | 2013-04-09T00:00:00.000 | {
"year": 2013,
"sha1": "72606a493e4f6a7480ad1eeec690c103be7bce9f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0060728&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "72606a493e4f6a7480ad1eeec690c103be7bce9f",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
2244144 | pes2o/s2orc | v3-fos-license | Three Cases of Descemet's Membrane Detachment after Cataract Surgery
Descemet's membrane detachment (DMD) is an uncommon condition with a wide range of etiologies. More than likely, the most common cause is a localized detachment occurring after cataract surgery. We report three cases of Descemet's membrane detachment that occurred after uncomplicated phacoemulsification cataract surgeries. The first patient was managed without surgical intervention, the second patient was treated using an intracameral air injection, and the last patient was treated with an intracameral perfluoropropane (C3F8) gas injection. All three patients recovered their vision following the reattachment of Descemet's membrane. The three patients were treated according to the extent of the detachment.
INTRODUCTION
Descemet's membrane detachment (DMD) is an uncommon but serious complication of intraocular surgery. It was first reported in 1928 by Samuels, and since then it has been reported most often after cataract surgery. DMD has also been known to occur after various other ophthalmic procedures, including cyclodialysis, iridectomy, trabeculectomy, holmium laser sclerostomy, penetrating keratoplasty, full-thickness lamellar keratoplasty, pars plana vitrectomy, and viscocanalostomy [1]. The natural history of DMD has long been an area of controversy, and the appropriate timing for intervention remains unclear. Most DMDs remain small and localized to the wound, but some cases present with large, extensive detachments that result in severe corneal edema and a marked reduction in visual acuity. Traditional treatment regimens have included observation, intracameral injections of air or viscoelastic, transcorneal suturing, and even corneal transplantation. During the past few years, intracameral injections with sulfur hexafluoride (SF6) or perfluoropropane (C3F8) gas have gained increasing acceptance as an efficient and effective treatment option for DMD [2].
We report three cases of DMD that occurred after uncomplicated cataract surgery and describe their management and visual outcome.
CASE REPORT
Case 1
A 73-year-old man underwent uncomplicated phacoemulsification cataract surgery of his right eye under topical anesthesia with a temporal corneal incision. He had no past ocular disease or trauma history. He had a medical history of diabetes and mild bronchial asthma. The patient was also taking oral medication for diabetes, but he was not taking any medication for bronchial asthma. His preoperative corrected vision was 20/40, and his intraocular pressure was 17 mmHg. On postoperative day one, his corrected visual acuity was 20/200, and his intraocular pressure was 16 mmHg. The patient's cornea was edematous with Descemet's folds. The anterior chamber was deep with cells 4+. He was instructed to use ofloxacin eye drops and 0.12% prednisolone eye drops every two hours for one week, and then four times a day thereafter. The cornea was still edematous when he visited our clinic one week later. Three weeks after surgery, his visual acuity was 20/100 and his intraocular pressure was 19 mmHg. The patient complained of a foreign body sensation and visual disturbances. The corneal edema had improved, but the slit lamp exam revealed DMD at the superonasal area (Fig. 1). There was no direct trauma to the superonasal cornea during the surgery. Because the size of the DMD was small and the location was peripheral, we decided to observe the patient for a follow-up period without surgical intervention. Postoperative use of ofloxacin eye drops and 0.12% prednisolone eye drops was maintained four times a day. During the follow-up period, an intracameral air injection was not needed because the size of the DMD decreased and the patient's vision improved. Two months after the surgery, the DMD had completely reattached and the patient's corrected visual acuity had improved to 20/30. Ofloxacin eye drops and 0.12% prednisolone drops were maintained for two more weeks. Three months later, the cornea was clear, and his corrected vision was 20/20.
Case 2
A 61-year-old woman underwent uncomplicated phacoemulsification cataract surgery of her left eye under topical anesthesia with a temporal corneal incision. She had undergone a successful cataract surgery of her right eye three months earlier in our clinic using the same method. The patient had no past medical history and no history of any ocular trauma. Her preoperative corrected vision was 20/50 and her intraocular pressure was 12 mmHg. One day after surgery, her uncorrected visual acuity was 20/30 and her intraocular pressure was 14 mmHg. Ofloxacin eye drops and 0.12% prednisolone eye drops were prescribed for use postoperatively every two hours, as in the first patient. Two days after surgery, the patient's uncorrected visual acuity was 20/20 and the intraocular pressure was 12 mmHg, but the cornea was slightly edematous and a subendothelial opacity was observed in the center of the cornea. A tear in the Descemet's membrane was observed in the temporal area. The anterior chamber was deep with cells 1 to 2+. Five days after surgery, her visual acuity decreased to 20/25 and the temporal cornea was edematous with a large DMD (Fig. 2A). The patient complained of blurred vision and mild irritation of her left eye. The postoperative medications of ofloxacin eye drops and 0.12% prednisolone eye drops were maintained every two hours. On the next day, the patient underwent an intracameral air injection in the operating room (Fig. 2B). The patient was instructed to maintain a supine position. One day after the air injection, her uncorrected visual acuity was 20/40 and her intraocular pressure was 19 mmHg. The corneal edema had decreased significantly. The Descemet's folds remained, but the endothelium was successfully reattached. One week after the air injection, her uncorrected visual acuity was 20/25. Ofloxacin and 0.12% prednisolone eye drops were reduced to four times a day. She was closely followed up for three months, and by that time the cornea had cleared completely. Her corrected vision increased to 20/20. She maintained stable vision for 10 months during follow-up.
Case 3
A 74-year-old woman was referred to our clinic for DMD of the left eye. She had undergone uncomplicated phacoemulsification cataract surgery with a temporal corneal incision at a local eye clinic one week before referral to our clinic. She had received cataract surgery in her right eye four years earlier at another clinic. She had no remarkable medical history or ocular trauma history. She had received an intracameral air injection at the local clinic four days after the surgery on her left eye due to the DMD. On examination in our clinic, her uncorrected visual acuity was counting fingers at 30 cm in her left eye, and her intraocular pressure was 17 mmHg. Slit lamp examination showed extensive DMD with only a small portion of attached endothelium in the center of the cornea (Fig. 3A). The next day, an intracameral gas injection of 14% C3F8 was performed on the left eye in the operating room. The interface fluid between the Descemet's membrane and the stroma was removed using an ab externo stab incision of the corneal stroma in the mid-periphery (Fig. 3B). The patient was instructed to maintain a supine position. One day after the gas injection,
her uncorrected visual acuity was counting fingers at 30 cm, and her intraocular pressure was 10 mmHg. Only a small area of detachment remained in the inferior cornea. The patient was hospitalized for close follow-up. Two days after the gas injection, her uncorrected visual acuity was 20/400, her intraocular pressure was 15 mmHg, and a small area of detachment was still visible. The postoperative medications of ofloxacin eye drops and 0.12% prednisolone eye drops were maintained every two hours. Three days after the gas injection, the detachment had resolved completely, but the intraocular pressure was elevated to 52 mmHg. The gas in the anterior chamber had increased and some gas could also be observed in the posterior chamber, resulting in an iris bombe.
To reduce the intraocular pressure after mannitolization, we prescribed Timolol (Timoptic XE) eye drops once daily, atropine 1% eye drops three times daily, and oral acetazolamide 500 mg divided into four doses. Four days following the gas injection, her intraocular pressure was 7 mmHg, and the gas bubble was still visible in the anterior chamber. The Descemet's membrane remained well attached. Eight days after the gas injection, her corrected vision was 20/30 and her intraocular pressure was 6 mmHg. Two weeks after the gas injection, the attached Descemet's membrane appeared stable and the size of the gas bubble had decreased to 25% of the vertical chamber height.
One month following the procedure, the patient's corrected visual acuity remained stable at 20/30, and the intraocular pressure was 13 mmHg. The gas bubble in the anterior chamber had decreased significantly, to only a small bubble in the superior portion.
DISCUSSION
In this article, we report three cases of DMD that were managed with different treatment methods. In the first case, the DMD was not located adjacent to the incision wound, while in the second patient a typical large DMD following cataract surgery could be observed. The unusual superonasal location of the DMD in the first case leads us to suspect endothelial trauma during cataract surgery, although we do not recall touching the endothelium with surgical instruments in that area. Phacoemulsification energy may also have caused the unpredicted damage. In the second patient, any of the reported mechanisms of DMD listed below could have occurred, although we were not aware of any inadvertent event during surgery. The third patient had radial deep stromal opacities in the contralateral eye, which were believed to be scars from a previous DMD. Since this patient seemed to have a predisposition to Descemet's membrane separation, and because the air injection performed in the other clinic was not successful, we carried out an intracameral C3F8 gas injection and a paracentesis of fluid. Both appeared to aid in the reattachment of the DMD.
DMDs are usually small and localized to the corneal wound, with minimal or no effect on corneal clarity and vision [4]. Mackool and Holtz classified DMDs into a planar type, with the Descemet's membrane separated less than 1 mm from the corneal stroma, and a non-planar type, with a separation of greater than 1 mm. These two types can be further divided by whether the detachments are limited to the peripheral cornea or involve both the peripheral and central cornea. They reported that planar detachments are more likely to resolve spontaneously, and non-planar detachments should be repaired early [5,6]. The first case in our report showed a planar-type DMD, and the second and third cases were non-planar-type DMDs. Assia et al. divided DMDs into detachments with or without scrolling; detachments without scrolling are more likely to resolve spontaneously [6,7]. Engaging the Descemet's membrane during intraocular lens implantation or with the irrigation/aspiration device (when mistaken for an anterior capsular remnant) can also lead to extensive DMD. Some have reported that inadvertent injection of viscoelastic material, by inserting the cannula between Descemet's membrane and the corneal stroma, may be the most common cause of Descemet's membrane detachment with current surgical techniques [8]. We understood that DMD could also be associated with the characteristics of the viscoelastic material used. We used Viscoat in Cases 1 and 2. Cohesive viscoelastic materials such as Healon can be completely removed from the anterior chamber by aspiration. It takes a greater effort to remove dispersive substances such as Viscoat from the anterior chamber, and complete removal is difficult to achieve. The irrigation/aspiration device may then be closer to the corneal endothelium, causing the Descemet's membrane to separate from the stroma [9-11].
Although this association is hardly conclusive due to the small number of cases in our experience, we believe that this is an interesting possibility that merits further investigation.
Bilateral cases of DMD have been reported during or after otherwise uneventful surgery [1]. In our third case, the patient had endothelial scarring, presumed to be from an old DMD, in the right eye, which had undergone cataract surgery four years earlier. From this, we are led to suspect that some eyes may have a predisposition to DMD. The characteristics of patients who are at increased risk of DMD may be another topic that needs further study. In summary, we reported three cases of DMD that received different treatments according to the severity of their conditions, and found favorable results. The time for intervention and the type of surgery used remain controversial. In addition, despite several recent reports in favor of early intracameral gas injection, it would be advisable to make intervention and treatment decisions on a case-by-case basis.
Fig. 2. (A) A large Descemet's membrane detachment after cataract surgery. (B) Anterior chamber air injection was performed to help reattach Descemet's membrane.
In Sik Kim,1 Jung Chul Shin,1 Chan Yeong Im,2 and Eung Kweon Kim1. 1The Institute of Vision Research and Department of Ophthalmology, Yonsei University College of Medicine, Seoul, Korea; 2Department of Ophthalmology, College of Medicine, Konkuk University, Seoul, Korea. | 2016-05-04T20:20:58.661Z | 2005-10-31T00:00:00.000 | {
"year": 2005,
"sha1": "94cdebacf3fab06b688dade7e7fdaf2b162816a0",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3349/ymj.2005.46.5.719",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "1f6e352bae7b2f8623561941445a2362cd7387b6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237592257 | pes2o/s2orc | v3-fos-license | Cyclic connectivity index of fuzzy incidence graphs with applications in the highway system of different cities to minimize road accidents and in a network of different computers
A parameter is a numerical factor whose values help us to identify a system. Connectivity parameters are essential in the analysis of connectivity of various kinds of networks. In graphs, the strength of a cycle is always one, but in a fuzzy incidence graph (FIG), the strengths of cycles may vary even for a given pair of vertices. Cyclic reachability is an attribute that decides the overall connectedness of any network. In a graph, the cycle connectivity (CC) from vertex a to vertex b and from vertex b to vertex a is always one. In a fuzzy graph (FG), the CC from vertex a to vertex b and from vertex b to vertex a is always the same. But if someone is interested in finding the CC from vertex a to an edge ab, then graphs and FGs cannot answer this question. Therefore, in this research article, we propose the idea of CC for FIGs, because in a FIG we can find the CC from vertex a to vertex b and also from vertex a to an edge ab. We also propose the idea of the CC of fuzzy incidence cycles (FICs) and complete fuzzy incidence graphs (CFIGs). The fuzzy incidence cyclic cut-vertex, fuzzy incidence cyclic bridge, and fuzzy incidence cyclic cut pair are established. A condition for a CFIG to have a fuzzy incidence cyclic cut-vertex is examined. The cyclic connectivity index and average cyclic connectivity index of a FIG are also investigated. Three different types of vertices, namely the cyclic connectivity increasing vertex, the cyclically neutral vertex, and the cyclic connectivity decreasing vertex, are also defined. Real-life applications of the CC of a FIG in a highway system of different cities to minimize road accidents and in a network of different computers are also provided.
Introduction
Graphs are convenient tools to explain associations between different types of entities under examination. Vertices or nodes denote entities, and edges or arcs explain the connections between the vertices. A mathematical structure to describe unpredictability and equivocacy in daily life is the fuzzy set. The strong product for interval-valued FGs was provided by Rashmanlou and Jun [27]. Sunitha and Vijayakumar [28] defined the complement of a FG. Mordeson and Nair [29] introduced and examined the concepts of chords, twigs, 1-chains with boundary zero, cycle vectors, coboundary, and cocycles for FGs. They also showed that although the sets of cycle vectors, fuzzy cycle vectors, cocycles, and fuzzy cocycles do not necessarily form vector spaces over the field Z2 of integers modulo 2, they nearly do. Later on, different mathematicians participated in the development of graphs and FGs; their achievements can be seen in [30][31][32][33][34][35].
There is a flaw in FGs because they do not give any clue about the impact of a vertex on an edge. This shortcoming of FGs became the fundamental motivation for establishing the scheme of FIGs. The proposal of FIGs was first initiated by Dinesh [36]. For example, in a highway system, if vertices represent various cities and edges serve as highways, introducing the degree of connection between city L and the highway LM joining cities L and M permits a deeper analysis of the highway system. This connection could be the ramp system joining L and LM. We indicate this relationship by the ordered pair (L, LM). Malik et al. [37] applied FIGs to different types of applications. Mathew and Mordeson [38] proposed the idea of cut pairs and fuzzy incidence trees in FIGs; they also discussed some vital properties of FIGs. Three different types of nodes, including the fuzzy incidence connectivity enhancing node, the fuzzy incidence connectivity reducing node, and the fuzzy incidence connectivity neutral node in FIGs, were introduced by Fang et al. [39]. Like node and edge connectivity in graphs, Mathew et al. [40] discussed these concepts for FIGs. Mordeson and Mathew [41] developed fuzzy end nodes and fuzzy incidence cut vertices in FIGs. Nazeer et al. [42] presented the idea of intuitionistic fuzzy incidence graphs (IFIGs) as a generalization of FIGs, along with certain of their properties; they introduced a variety of operations in IFIGs and provided a fascinating application of the product of IFIGs. The ideas of order, size, domination, strong fuzzy incidence domination, and weak fuzzy incidence domination in FIGs were proposed by Nazeer et al. [43]. Nazeer and Rashid [44] presented the idea of picture FIGs; they introduced picture fuzzy cut-vertices, picture fuzzy bridges, picture fuzzy incidence cut pairs, and picture fuzzy incidence cut-vertices. More extensive and comprehensive work on FIGs can be seen in [45][46][47][48].
Connectivity parameters are connectivity measures of any system. In graphs, the connectivity between any two vertices is 1, while in FGs it takes a value from the closed interval [0, 1]. There are several motives for proposing the concept of CC in FIGs. Firstly, in FGs we can only compute the CC from vertex l to vertex m and from vertex m to vertex l; if someone is interested in examining the CC from vertex l to an edge lm, then FGs are not enough to answer this question. We therefore propose the concept of CC in FIGs, because the incidence pairs present in FIGs permit us to find the CC from vertex l to an edge lm. Secondly, in FIGs, the CC from vertex l to an edge lm and from vertex m to the edge lm may or may not be the same. Thirdly, we cannot apply graphs and FGs to the applications of the highway systems of different cities and the networks of different computers provided in Section 5, owing to the non-availability of the influence of a vertex on an edge. Fourthly, Mathew and Sunitha [16] initiated the notion of CC in FGs, and Binu et al. [49] later initiated the ideas of the cyclic connectivity index (CCI) and average cyclic connectivity index (ACCI) of FGs; we extend their work to FIGs. This paper establishes the CC, CCI, and ACCI of FIGs.
The remainder of this article is organized as follows. Section 2 consists of introductory results essential to comprehend the remaining portion of the article. The CC, fuzzy incidence cyclic cut-vertex (FICCV), fuzzy incidence cyclic bridge (FICB), and fuzzy incidence cyclic cut pair (FICCP) of a FIG are explained in Section 3. The formula to determine the CCI, the way to compute the ACCI of a FIG, and three different types of vertices, namely the cyclic connectivity increasing vertex (CCIV), cyclically neutral vertex (CNV), and cyclic connectivity decreasing vertex (CCDV), are described in Section 4. The real-life applications of the CC of a FIG, in a highway system of different cities to reduce road accidents and in a computer network to find the best computers sharing the maximum amount of data among all other computers, are discussed in Section 5. A comparative analysis of our study with the existing study is provided in Section 6. Section 7 carries some conclusions and future directions.
Preliminaries
This section carries some elementary and rudimentary definitions and results on FIGs that will be useful for understanding the contents of the article. Throughout, ∧ indicates the minimum operator and ∨ denotes the maximum operator.

Definition 1. [41] A fuzzy subset (FSS) of a set is a function of the set into the closed interval [0, 1].

Definition 3. [41] Let G = (V, E, I) be an IG. A sequence of distinct vertices P_1 : k_0, (k_0, k_0k_1), k_0k_1, (k_1, k_0k_1), k_1, ..., k_{n−1}, (k_{n−1}, k_{n−1}k_n), k_{n−1}k_n, (k_n, k_{n−1}k_n), k_n is called an incidence path, and the vertices k_0 and k_n are said to be connected. The incidence strength (I_s) of P_1 is defined as η(k_0, k_0k_1) ∧ η(k_1, k_0k_1) ∧ ... ∧ η(k_n, k_{n−1}k_n) and is expressed by I_s(P_1). A sequence P_2 : k_0, (k_0, k_0k_1), ..., k_{n−1}, (k_{n−1}, k_{n−1}k_n), k_{n−1}k_n, (k_n, k_{n−1}k_n), k_n, (k_n, k_nk_{n+1}), k_nk_{n+1} is an incidence path between k_0 and the edge k_nk_{n+1}. The I_s of P_2 is defined analogously, as the minimum of η over all incidence pairs of P_2.
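To make Definition 3 concrete, the short Python sketch below evaluates the incidence strength of a path from its incidence-pair memberships. The dictionary encoding and the numerical values are our own illustrative assumptions, not data taken from the article.

```python
# Sketch: incidence strength of an incidence path (Definition 3).
# eta maps incidence pairs (vertex, edge) to membership values in [0, 1].

def incidence_strength(path_pairs, eta):
    """Minimum eta over the incidence pairs of the path."""
    return min(eta[pair] for pair in path_pairs)

# Hypothetical memberships for the path k0, (k0, k0k1), k0k1, (k1, k0k1), k1.
eta = {("k0", "k0k1"): 0.6, ("k1", "k0k1"): 0.4}
print(incidence_strength([("k0", "k0k1"), ("k1", "k0k1")], eta))  # -> 0.4
```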
Cycle connectivity of fuzzy incidence graphs
In this section, we present the novel connectivity idea named cycle connectivity (CC) of FIGs. In the example FIG G̃, abca, abcda, and adca are all FICs. There are three FICs passing through a and c, comprising abca, adca, and abcda, whose I_s values are obtained as the minimum of the incidence-pair memberships around each cycle (e.g., η(a, ab) = 0.6 enters the computation). Next, we propose a result related to FICs in the form of a proposition, with which the O of any FIC can be calculated directly; this avoids very long calculations and saves time and energy.
Proof. It follows from Proposition 1 that each incidence pair (I_p) is a strong incidence pair in a FIC. Therefore, the O of a FIC G̃ is the I_s of G̃. Next, we introduce a result in the form of a theorem with which the O of any CFIG can be computed without complicated calculations: one simply applies the theorem to obtain the required result.
Proof. Suppose the conditions of the theorem hold. Any three vertices of G̃ are adjacent, because G̃ is a CFIG, so any three vertices lie on a 3-vertex FIC. Since G̃ is a CFIG, by Proposition 2 all I_p are strong incidence pairs in G̃. Therefore, to calculate the smallest I_s of a FIC in G̃, it is enough to calculate the smallest I_s of every 3-vertex FIC in G̃. Since G̃ is a CFIG, to examine a 4-vertex FIC C = abcda in G̃ (the case is the same for an n-vertex FIC), there are two 3-vertex FICs contained in C, namely C_1 = abca and C_2 = acda. Consider η(a, ac) = j; then I_s(C_1) = I_s(C_2) = I_s(C) = j. Suppose η(a, ac) > j; since I_s(C) = j, either C_1 or C_2 will have I_s equal to j. Now, I_s(C) = ∧{I_s(C_1), I_s(C_2)}. Thus the I_s of a 4-vertex FIC is the same as the I_s of some 3-vertex FIC in G̃. Among all 3-vertex FICs, the one formed by the three vertices with the largest vertex strengths will have the greatest strength. Therefore, this FIC determines the O of G̃.
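The brute-force Python sketch below (ours, not part of the article) computes the quantities used above: the I_s of a cycle as the minimum incidence-pair membership around it, the pairwise CC as the maximum I_s over all cycles through the pair, and O(G̃) as the minimum of the pairwise values. It assumes a small FIG encoded as a dictionary η over (vertex, edge) pairs with edges stored as frozensets; the enumeration is exponential and intended only for toy examples.

```python
from itertools import combinations, permutations

def cycle_strength(cycle, eta):
    """Minimum incidence-pair membership around the cycle v0 v1 ... vk v0."""
    s = 1.0
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        edge = frozenset((a, b))
        s = min(s, eta.get((a, edge), 0.0), eta.get((b, edge), 0.0))
    return s

def pair_cc(u, v, vertices, eta):
    """Maximum cycle strength over all cycles through u and v (brute force)."""
    best = 0.0
    rest = [w for w in vertices if w != u]
    for k in range(2, len(rest) + 1):              # a cycle needs >= 3 vertices
        for perm in permutations(rest, k):
            if v in perm:
                best = max(best, cycle_strength([u, *perm], eta))
    return best

def graph_cc(vertices, eta):
    """O(G): minimum of the pairwise cycle connectivities."""
    return min(pair_cc(u, v, vertices, eta) for u, v in combinations(vertices, 2))
```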
Definition 15. A vertex l in a FIG G̃ is said to be a FICCV if
O(G̃ − l) < O(G̃).
Definition 16. An edge (l, m) in a FIG G̃ is said to be a FICB if
O(G̃ − (l, m)) < O(G̃).
To show that k_{n−3} < k_{n−2}, assume that k_{n−3} = k_{n−2}. Then C_1 = h_n h_{n−1} h_{n−2} and C_2 = h_n h_{n−1} h_{n−3} have equal I_s, and hence the deletion of h_{n−2}, h_{n−1}, or h_{n−3} will not lessen O(G̃). This contradiction shows that k_{n−3} < k_{n−2}.
Conversely, assume that k_{n−3} < k_{n−2}. We now show that G̃ has a FICCV. Since k_{n−2} ≤ k_{n−1} ≤ ... ≤ k_n and k_{n−3} < k_{n−2}, all FICs of G̃ have I_s less than the I_s of h_{n−2} h_{n−1} h_{n−3}. Hence the removal of h_{n−1}, h_{n−2}, or h_{n−3} will cause a reduction of O(G̃); therefore, G̃ has a FICCV. Proof. By the statement of the theorem, G̃ is a CFIG with |σ*| = n ≥ 3; therefore, Proposition 2 yields that G̃ is without any δ-IPr. This means all I_p in G̃ are strong incidence pairs and, as stated in Proposition 1, every FIC is a strong FIC in G̃. Suppose l and m are any two vertices of G̃; we first calculate the I_s of all FICs containing the vertices l and m, and then compute O, which is the maximum value of the I_s over all FICs containing the pair l and m. Similarly, we compute O for all pairs among the n vertices and take the minimum of all these O values for the CFIG G̃. Also, the total number of edges of a CFIG is always equal to n(n − 1)/2.
Hence, from Eqs (1) and (2), the stated equality can be concluded. Here, we present the foundational concept of the ACCI of a FIG. In enormous networks, a sturdy flow among different vertices is mandatory to sustain reliability and dependability. To guarantee the firmness of the exchange of data in the complete network, or a portion of it, measuring the average value of the cyclic data exchange is vital. Therefore, we discuss the ACCI of a FIG, denoted ACCI(G̃). As an example, consider the FICs C_1 : a, b, d, a, C_2 : b, c, d, b, and C_3 : a, b, c, d, a, with their respective I_s values.
Real-life applications of cycle connectivity
In daily life, O has various uses. Here, we propose two critical real-life applications of the O of FIGs. In the first application, we take a highway system of different cities and apply the idea of the O of a FIG to find the roads that are the leading cause of the most accidents. In the second application, we take a network of different computers sharing data and apply the idea of O to find which computer or computers are transferring the maximum amount of data to the other computers.
Application of cycle connectivity in highway system
Due to the huge traffic on roads, the percentage of accidents is increasing day by day. To minimize these accidents, the government should take serious steps to lessen the percentage of road accidents. Here, we present a graphical model of a highway system as a FIG. The computed O(G̃) = 0.3 indicates that the roads joining cities c_1c_2, c_1c_8, and c_2c_8 are the main roads responsible for the highest percentage of road accidents. The government should therefore focus on these roads by installing more speed breakers and speed bumps and deploying more traffic wardens there; in this way, the percentage of road accidents can be minimized.
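As a usage illustration, the sketch below applies graph_cc from the earlier code to a three-city fragment of the highway model; the membership values are invented for illustration and are not the values behind the article's figures.

```python
V = ["c1", "c2", "c8"]
e = lambda a, b: frozenset((a, b))
eta = {
    ("c1", e("c1", "c2")): 0.3, ("c2", e("c1", "c2")): 0.4,
    ("c2", e("c2", "c8")): 0.3, ("c8", e("c2", "c8")): 0.5,
    ("c1", e("c1", "c8")): 0.6, ("c8", e("c1", "c8")): 0.3,
}
print(graph_cc(V, eta))  # -> 0.3, mirroring the O(G) = 0.3 reading above
```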
We have used FIGs in our application because FIGs are more instrumental and effective than graphs. We cannot use graphs to explain the above phenomenon, since graphs do not show the impact of a vertex on an edge. Moreover, in graphs the O between each pair of vertices is always equal to 1, so we are unable to find which roads cause the most accidents, whereas in FIGs the O between different pairs of vertices can differ. Therefore, FIGs are more helpful and useful than graphs.
Application of cycle connectivity in a computer network
In a network of different computers, the computers share data with each other. We want to find which computer or computers perform best among all others and share the maximum data with all other computers in the network. This can be done by computing O between each pair of computers in the network: the pair of computers with the maximum O are the required computers transferring the maximum data to all other computers. Here, we present a graphical model of a FIG to explain this phenomenon. As an example, assume a network FIG comprising eight vertices. The vertices show the eight distinct computers in the network, the MSV of the vertices indicates the data stored in each of these computers, the MSV of the edges demonstrates the total amount of data that can be transferred from one computer to another, and the MSV of the I_p represents the amount of data one computer is transferring to another. For example, the I_p (a, ab) indicates the transfer of data from computer a to computer b, and the I_p (b, ab) shows the transfer of data from computer b to computer a. In this network, the pairwise O values range from 0.01 up to O(G̃)_{g,h} = 0.2, the maximum O between any pair, attained by computers g and h. Therefore, computers g and h are the best-performing computers among all others, sharing the maximum data with all other computers in the network.
Comparative analysis
Here, we compare our model with the existing model. In Fig 9, a FIG indicates a highway system of different cities. Since, in the case of a graph, the O between each pair of vertices is equal to 1, we are unable to find the roads that are the main reason for the most accidents. Hence our model is better than the previous one.
Similarly, in Fig 10, a FIG represents a network of different computers sharing data. We want to find which computer or computers perform best among all others and share the maximum data with all other computers in the network. This can be done by computing O between each pair of computers: the pair of computers with the maximum O are the required computers transferring the maximum data to all other computers. The vertices show the eight distinct computers in the network, the MSV of the vertices indicates the data stored in each computer, the MSV of the edges demonstrates the total amount of data that can be transferred from one computer to another, and the MSV of the I_p represents the amount of data one computer transfers to another. In the case of a graph, the O between each pair of vertices is 1; therefore, the previous model is not helpful for finding which computer or computers transfer the maximum amount of data. Thus, our model is more effective and beneficial than the previous one.
Conclusion
In this article, we advanced the theory of FIGs. The notion of connectivity is inseparable from the theory of FIGs, and a variety of parameters govern the connectivity of a network. Here, the authors put forward a new connectivity idea, named O, together with its applications: a highway system of different cities to minimize road accidents, and a computer network to find the best computers among all other computers. A comparative analysis of our study with the existing study is also provided. More related ideas will be contemplated in upcoming papers. | 2021-09-23T05:12:11.608Z | 2021-09-21T00:00:00.000 | {
"year": 2021,
"sha1": "fac216a10cfd384628ae85df1e1703316c7373dc",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0257642&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fac216a10cfd384628ae85df1e1703316c7373dc",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
207870190 | pes2o/s2orc | v3-fos-license | Focus on What's Informative and Ignore What's not: Communication Strategies in a Referential Game
Research in multi-agent cooperation has shown that artificial agents are able to learn to play a simple referential game while developing a shared lexicon. This lexicon is not easy to analyze, as it does not show many properties of a natural language. In a simple referential game with two neural network-based agents, we analyze the object-symbol mapping, trying to understand what kind of strategy was used to develop the emergent language. We see that, when the environment is uniformly distributed, the agents rely on a random subset of features to describe the objects. When we modify the objects, making one feature non-uniformly distributed, the agents realize it is less informative and start to ignore it, and, surprisingly, they make better use of the remaining features. This interesting result suggests that more natural, less uniformly distributed environments might aid in spurring the emergence of better-behaved languages.
Introduction
Recent work on language emergence in a multi-agent setup showed that the interaction between agents performing a cooperative task while exchanging discrete symbols can lead to the development of a successful communication system [4,2,15,3]. This language protocol is fundamental to solving the task, since a lack of communication results in a performance decrease [13].
One of the simplest configurations is the setup where two artificial agents play a referential game [12,6,1]. An agent receives a target object, and it must communicate about it to another agent, which is then tasked to recognize the target in an array of similar items. The set of all distinct messages exchanged by the agents is their communication protocol. The emergent protocol is strictly dependent on the distribution of items used in the game. In fact, when the agents interact while playing the game, the language gets grounded in the environment represented by the objects. Ideally, as is the case for natural language, we would like to see an emergent protocol that reflects the structure of the environment. For instance, when using messages of 3 symbols, if the target object is a red, full square and its generated description is qry, q could refer to the fact that the object is red, r could be related to its texture being full, and y could communicate that the object to be described is a square, and this would suffice to discriminate it among a set of similar items. Similar results have indeed been found in experiments on artificial language evolution with human subjects [11,17].
However, analyzing the agents' language is not trivial, as it does not necessarily possess properties of natural language such as consistency or compositionality. Some works have proposed a qualitative analysis of the object-message mapping performed by the agents [4]; others have analyzed the purity of learned symbols with regard to clustered items [12].
In this work, our goal is to understand whether the agents develop a communication strategy that focuses on describing salient features of the objects. We study the symbol-object mapping in a simple referential game and how this association changes when the objects in the environment have different statistical properties, trying to assess how much information is shared between the objects and the emergent protocol. Borrowing from information theory, mutual information (MI) has been used to assess form-to-meaning mapping in natural [16] as well as emergent languages; MI quantifies how many bits of information we can obtain about one distribution by observing a different one. Using the mutual information between the object and message distributions, we show that when the environment is uniformly distributed, the agents communicate about an arbitrary subset of the features of the objects. When instead an environment is used in which some features are less informative than others, the agents learn to ignore those uninformative features, while adapting to better use a subset of the remaining ones.
Experimental Setup
Two agents, a sender and a receiver, cooperate in a referential game where the sender has to describe a discrete-valued target vector to the receiver, which must distinguish it from a distractor (random baseline performance is consequently 50%).
Data In order to test form-to-meaning mapping variations with objects having different distributions, we generate two sets of 5-dimensional discrete vectors. In one, the feature values are uniformly distributed; in the other, one feature has a highly skewed value distribution. In the first configuration (uniform environment), each of the 5 features can take a value between 1 and 4, each with probability 0.25. In the second configuration (skewed environment), the first feature takes value 1 with probability 0.75, value 2 with probability 0.15, and values 3 and 4 with probability 0.05 each. This leads to configurations with 4^5 = 1024 distinct vectors each. We randomly sampled target-distractor pairs to generate training, validation, and testing partitions (128K, 16K, and 4K pairs, respectively). The test data was used to assess the generalization capabilities of the agents on unseen pairs.
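A minimal NumPy sketch of the two sampling schemes described above (the authors' exact generation code is not shown in the paper, so this is an assumed reconstruction):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_vectors(n, skewed=False):
    """Draw n 5-dimensional vectors with values in {1, ..., 4}."""
    probs = np.full((5, 4), 0.25)
    if skewed:
        probs[0] = [0.75, 0.15, 0.05, 0.05]   # first feature highly skewed
    return np.stack([rng.choice(4, size=n, p=p) + 1 for p in probs], axis=1)

uniform_targets = sample_vectors(128_000)
skewed_targets = sample_vectors(128_000, skewed=True)
```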
Game and Network Architectures
The sender and receiver agents are parametrized with two Vanilla RNNs. The sender linearly transforms the target vector and feeds it into its RNN. Through the Gumbel-Softmax trick [14,7], it then generates a message that is passed to the receiver. The receiver processes the message through its own RNN generating an internal representation of the target description. It linearly processes the pair of vectors it is fed (where target/distractor order is randomized), and it computes a similarity score through a dot product between each element in the pair and the message representation. The highest score is then used to recognize the target. Both sender and receiver RNNs are single-layer networks with 50 hidden units and embed the messages into vectors of dimensionality of 10. For the reparametrization through the Gumbel-softmax, temperature τ is kept fixed at 1.
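A minimal PyTorch sketch of this sender/receiver pair is given below. Hidden size 50, embedding size 10, vocabulary 1100, and the Gumbel-Softmax relaxation with τ = 1 follow the text; everything else (module names, start-token handling, the feedback of sampled symbols) is our own assumption rather than the authors' implementation, which in the paper is built with the EGG toolkit.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HID, EMB = 1100, 50, 10

class Sender(nn.Module):
    def __init__(self, n_features=5):
        super().__init__()
        self.inp = nn.Linear(n_features, HID)     # encodes the target vector as h0
        self.sym_emb = nn.Linear(VOCAB, EMB)      # embeds the previous symbol
        self.rnn = nn.RNN(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, target, msg_len=1):
        h = self.inp(target.float()).unsqueeze(0)
        x = torch.zeros(target.size(0), 1, EMB)   # start-token input
        symbols = []
        for _ in range(msg_len):
            o, h = self.rnn(x, h)
            sym = F.gumbel_softmax(self.out(o.squeeze(1)), tau=1.0, hard=True)
            symbols.append(sym)
            x = self.sym_emb(sym).unsqueeze(1)    # feed sampled symbol back in
        return torch.stack(symbols, dim=1)        # (batch, msg_len, VOCAB) one-hot

class Receiver(nn.Module):
    def __init__(self, n_features=5):
        super().__init__()
        self.emb = nn.Linear(VOCAB, EMB)          # message embedding
        self.rnn = nn.RNN(EMB, HID, batch_first=True)
        self.obj = nn.Linear(n_features, HID)     # candidate encoder

    def forward(self, message, candidates):
        _, h = self.rnn(self.emb(message))
        enc = self.obj(candidates.float())        # (batch, 2, HID)
        return torch.einsum("bch,bh->bc", enc, h.squeeze(0))  # similarity scores

# Training signal: cross-entropy on the receiver's scores against the target index.
sender, receiver = Sender(), Receiver()
target = torch.randint(1, 5, (32, 5))
distractor = torch.randint(1, 5, (32, 5))
candidates = torch.stack([target, distractor], dim=1)   # order would be shuffled
scores = receiver(sender(target), candidates)
loss = F.cross_entropy(scores, torch.zeros(32, dtype=torch.long))
```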
Training and hyperparameter search We train the agents to optimize the cross entropy loss using backpropagation and the Adam optimizer [10]. Vocabulary was fixed to 1100, which in our setup ensures that the agent could in principle develop a one-to-one mapping between input vectors and symbols. The agents were trained for 50 epochs (after which, they had always converged). We conducted an hyperparameter search varying batch size (32, 64, 128, 256, 512) and sender and receiver learning rates (from 0.01, until 0.0001). We cross-validated the top 20 performing models with 5 different initialization seeds, using the uniform data-set. The best performing model uses batches of size 64, with both sender and receiver trained with a learning rate of 0.001 and an accuracy of 99.04%. For both the uniform and skewed data-sets experiments, we will report results averaged 1,000 runs (10 data-set initialization seeds times 100 network initialization seeeds). A small number of runs did not complete. All experiments were performed using the EGG toolkit [8].
Mutual information as a measure of form-to-meaning mapping We want to measure to what extent the sender is referring to each feature of the target vector. We measure the consistency of the feature-symbol mapping as the mutual information between each feature distribution and the message protocol distribution as follows [9,5]:

MI(v_i, m) = H(v_i) − H(v_i | m),   (1)

where H(v_i) is the entropy of feature i in the target vectors and H(v_i | m) is the entropy of the conditional distribution of the i-th feature given the messages. MI is a positive metric, and in our setup it is upper bounded by the first term of the difference in eq. 1, H(v_i). This corresponds to a value of 2 in the uniform data configuration, where each feature in the target vector can equiprobably take a value between 1 and 4. In the skewed setup, MI is bounded by 2 for the uniform features, and by a value of 1.15 for the non-uniform feature. A high feature-protocol MI implies the agents are consistently using symbols to denote the values of the feature. A low value can be interpreted as the agents ignoring the feature when describing the vector.
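A plug-in estimator of eq. 1 can be sketched as follows (an assumed implementation for illustration, not the authors' code):

```python
from collections import Counter
import math

def entropy(xs):
    """Plug-in Shannon entropy (bits) of a sample."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(feature_values, messages):
    """MI(v_i, m) = H(v_i) - H(v_i | m), estimated from paired samples."""
    by_msg = {}
    for v, m in zip(feature_values, messages):
        by_msg.setdefault(m, []).append(v)
    h_cond = sum(len(vs) / len(messages) * entropy(vs) for vs in by_msg.values())
    return entropy(feature_values) - h_cond

print(mutual_information([1, 1, 2, 1], ["qry", "qry", "abc", "qrz"]))  # ~0.81
```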
Results
Uniform Environment Averaged accuracy and MI results are reported in Table 1. While showing good performance, the agents do not reach perfect accuracy, suggesting that they are not making full use of the feature space. Indeed, although MI has similar average values across features, we consistently observed that, in each run, three of them had a higher MI compared to the remaining two, in line with our hypothesis that the agents rely on an arbitrary subset of features to describe the target vector. In this way, they pay a small accuracy cost in exchange for lower protocol complexity.
In the uniform setup, the choice of the feature subset is arbitrary. We next looked at whether we can influence it by making a feature less informative than the others.

Table 1: Top: results of the experiments performed with the uniform data distribution (U) and with the skewed one (S). Unique target vectors and unique msgs represent the distinct input vector count seen by the sender and the unique symbols count produced, respectively. H(msgs) is the entropy of the emergent protocol; the higher the value, the more the message distribution approaches a uniform one. H(vectors) is the entropy of the target vectors. Bottom: mutual information and standard deviation between each target vector feature and the message distribution.
Skewed Environment Results are also presented in Table 1. We see that accuracy is comparable in the uniform and non-uniform conditions. There are fewer unique target vectors in the skewed configuration than in the uniform one. We also found a lower entropy in this configuration, as having a non-uniform feature makes sampling the same object more likely. The number of unique messages produced, as well as their entropy, is only very slightly lower in the skewed configuration (8% fewer messages for 30% fewer target vectors), suggesting that messages are used effectively. As in the uniform environment, in the skewed configuration we also see close average MI values across all the uniform features, although, again, in specific runs 3 arbitrary features would have particularly high MIs. We confirm that making one feature in the environment non-uniform leads the agents to ignore it (MI value of 0.07). More surprisingly, we see higher average MI values for the remaining features compared to the uniform simulation (features 2, 3, and 4 show a 30% MI increase, feature 5 a 26% increase). This suggests that making one feature less informative has led the agents to make better use of the other features, compared to when they have to choose from a larger set of equally informative ones.
Conclusion and future work
In order to understand the agents' communicative strategy and how it relates to salient features of the items in the environment, we presented quantitative evidence of the relation between objects in the environment and the language protocol developed. Using the referential game setting and tools from information theory, we showed that the agents rely on an arbitrary subset of the input features to describe target objects. We also discovered that if we assign a more skewed distribution to a feature of the environment, the agents learn to ignore it, as it is less informative for discriminating a target among similar items. Moreover, with this non-uniform distribution, the agents adapt to make better use of the remaining features. As future work, we plan first of all to understand how consistent this effect is, what causes it, and under which natural distributions it is more likely to emerge. Moreover, we want to extend the analysis to games using variable-length messages, studying how form-to-meaning mapping might relate to language properties like compositionality. Another direction will be to analyze the mapping in richer environments having a greater number of objects and dimensions. We believe that a better understanding of the object-symbol mapping, as well as of the related strategies used to develop the emergent protocol, will be useful to design better environments to test neural artificial agents and enforce the emergence of a more human-like language protocol. | 2019-11-05T15:55:19.000Z | 2019-11-05T00:00:00.000 | {
"year": 2019,
"sha1": "c2af041e30e09c97b3026d30920dc995c8aa8169",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c2af041e30e09c97b3026d30920dc995c8aa8169",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
53502091 | pes2o/s2orc | v3-fos-license | Chronic Sleep Disruption Advances the Temporal Progression of Tauopathy in P301S Mutant Mice
Brainstem locus ceruleus neurons (LCn) are among the first neurons across the lifespan to evidence tau pathology, and LCn are implicated in tau propagation throughout the cortices. Yet, events influencing LCn tau are poorly understood. Activated persistently across wakefulness, LCn experience significant metabolic stress in response to chronic short sleep (CSS). Here we explored whether CSS influences LCn tau and the biochemical, neuroanatomical, and/or behavioral progression of tauopathy in male and female P301S mice. CSS in early adult life advanced the temporal progression of neurobehavioral impairments and resulted in a lasting increase in soluble tau oligomers. Intriguingly, CSS resulted in an early increase in AT8 and MC1 tau pathology in the LC. Over time tau pathology, including tangles, was evident in forebrain tau-vulnerable regions. Sustained microglial and astrocytic activation was observed as well. Remarkably, CSS resulted in significant loss of neurons in the two regions examined: the basolateral amygdala and LC. A second, distinct form of chronic sleep disruption, fragmentation of sleep, during early adult life also increased tau deposition and imparted early neurobehavioral impairment. Collectively, the findings demonstrate that early life sleep disruption has important lasting effects on the temporal progression in P301S mice, influencing tau pathology and hastening neurodegeneration, neuroinflammation, and neurobehavioral impairments. SIGNIFICANCE STATEMENT Chronic short sleep (CSS) is pervasive in modern society. Here, we found that early life CSS influences behavioral, biochemical, and neuroanatomic aspects of the temporal progression of tauopathy in a mouse model of the P301S tau mutation. Specifically, CSS hastened the onset of motor impairment and resulted in a greater loss of neurons in both the locus ceruleus and basolateral/lateral amygdala. Importantly, despite a protracted recovery opportunity after CSS, mice evidenced a sustained increase in pathogenic tau oligomers, and increased pathogenic tau in the locus ceruleus and limbic system nuclei. These findings unveil early life sleep habits as an important determinant in the progression of tauopathy.
Introduction
Brainstem locus ceruleus neurons (LCn) are sole providers of noradrenaline to the cortices (Foote et al., 1983), and these neu-rons help coordinate increased neuronal activity with regional blood flow and glial responses that are critical to optimal cognitive performance and brain health (Aston-Jones et al., 1994;Usher et al., 1999;Strange and Dolan, 2004;Sara, 2009;Bekar et al., 2012). LCn, however, evidence heightened susceptibility in many neurodegenerative processes (Braak et al., 2011). Degeneration of LCn is evident in mild cognitive impairment and early Alzheimer's disease (AD), and the extent of LCn loss predicts cognitive decline (Kelly et al., 2017). LCn seem particularly vulnerable in several tauopathies, including AD. Indeed, somatodendritic tau tangles may be observed in LCn decades before deposition of cortical tangles or clinical AD (Braak et al., 2011). Interestingly, injection of pathogenic tau preformed fibrils into the hippocampus (HC) and other forebrain regions leads to a pronounced concentration of tau aggregates in LCn, relative to other brainstem ascending neuronal groups (Iba et al., 2013), and injection of preformed fibrils into the LC results in widespread cortical tau pathology (Iba et al., 2015). In addition to LC vulnerability in tauopathies, injury to the LC can accelerate the temporal progression of pathology in some murine models of AD and Down's syndrome (Lockrow et al., 2011), whereas lesioning LCn in the P301S murine model of tauopathy hastens cognitive decline and gliosis (Chalermpalanupap et al., 2018). Collectively, these findings suggest important feedforward influences between LCn injury and the progression of tauopathy.
Chronic sleep disruption can impart LCn injury and, thus, could influence the progression of tauopathies. Both chronic short sleep (CSS) and chronic fragmentation of sleep (CFS) in young adult mice induce lasting metabolic stress to and degeneration of LCn . Intriguingly, neuronal activation increases brain tau, tau pathology, and even propagation of tau via exosomes (Yamada et al., 2014;Wang et al., 2017). LCn demonstrate sustained heightened activity across wakefulness (Aston-Jones and Bloom, 1981), which could, therefore, influence LC tau. Importantly, while increased neuronal activity results in a rapid increase in tau (within hours), tau clearance from the interstitial space is protracted (Yamada et al., 2014). We therefore hypothesized that repeated daily exposures to CSS would result in an accumulation of LC tau and potentially induce important pathogenic post-translational modifications of tau, which in a murine model of tauopathy would increase tau pathology and hasten the temporal progression of tauopathy.
While LCn show early involvement in tauopathies, cortical tau likely contributes significantly to both forebrain neurodegeneration and neurobehavioral impairments. We therefore also examined the effects of CSS on tau in tauopathy-vulnerable rostral brain regions and determined whether CFS, as a distinct second form of sleep disruption, influences tau pathology and the temporal progression of behavioral impairment in the murine tauopathy model.
Materials and Methods
Animals. Studies were performed at the University of Pennsylvania in accordance with the National Institutes of Health Office of Laboratory Animal Welfare Policy and the Institutional Animal Care and Use Committee at the University of Pennsylvania. Male and female mice expressing the human P301S tau mutation (PS19 strain) under regulation of the mouse prion promoter on a mixed C57BL/6NJ (B6) background (B6N.Cg-TgPrnP-MAPT P301S PS19Vle/J) were studied, along with WT littermates from hemizygous males and WT females bred in our colony. For each long-term recovery experiment, numbers of males and females were equal across sleep conditions. A separate group of no-recovery and age-matched rested controls was added, comprised of all males. Tissue from microtubule-associated protein tau knockout B6.129X1-Mapt tm1Hnd/J (Tau−/−) mice served to substantiate the specificities of tau antibodies. Mice were housed in a light/dark environment with lights on from 7:00 A.M. to 7:00 P.M. and fed ad libitum standard rodent chow and water throughout experimentation.
Chronic sleep disruption. Two distinct forms of sleep disruption were examined: CSS and CFS. CSS was achieved using a continuously monitored enriched environment in which novel climbing toys were periodically exchanged whenever a mouse became behaviorally quiescent (Vankov et al., 1995;Léger et al., 2009;Gompf et al., 2010). A temporal overview of the protocol and general experimental design is presented in Figure 1A. On days 1-3 of each of 4 consecutive weeks, mice were placed along with littermates in the novel environment with standard bedding, chow, and water bottles for the first 8 h of the lights-on period. Rested controls (Rest) were placed in a similar environment for the first hour of the lights-on period, a time when mice are generally awake. Ambient lighting and temperature were held similar to the home cage environment. Plasma corticosterone levels are not elevated in this chronic paradigm . A second approach to chronic sleep disruption, CFS, was implemented to help distinguish tau and behavioral responses to disturbed sleep from responses secondary to an enriched environment and/or increased locomotor activity, present only in the CSS paradigm. Home mouse cages were placed atop a rotor table (rotation speed 1 Hz; 5 s every minute; 24 h/d for 4 weeks). This paradigm increases the arousal frequency from 30/h to 60/h without significantly influencing total wake or sleep time in 24 h .
Neurobehavioral testing. Motor and memory impairments, hyperactivity, and disinhibition have been identified early in P301S mice (Yoshiyama et al., 2007; Dumont et al., 2011; Takeuchi et al., 2011; Przybyla et al., 2016). The hindlimb retraction assay and ledge walk assessment tests were performed as previously described (Guyenet et al., 2010) on CSS and Rest mice 1 and 3 months after CSS (ages 5 and 7 months). Mice were habituated to handlers for 3 d before testing. Tests were scored on an integer scale 0-3, with 0 as normal performance and 3 as continuous clasping of hind limbs (limb retraction assay) or inability to move forward along the ledge (Guyenet et al., 2010). Each of these tests was repeated 4 times and an average score was used for the time point.
A second group of Rest and CSS P301S mice was assessed for open field activity and novel and spatial object recognition memory at 7 months (4 months after CSS or Rest conditions). Locomotor activity (distance traveled/min in open field) and percentage of time in the center 10 cm × 10 cm were quantified in an open field measuring 50 × 50 cm², illuminated from above at 25 lux. Mice were placed individually into the arena and monitored for 10 min by a video camera (Sony CCD IRIS). Transitory data were analyzed with tracking using the image processing system EthoVision 3.1 (Noldus Information Technology). The novel and spatial object memory protocols were adapted from detailed published protocols (Sinha et al., 1999; Youmans et al., 2011). Mice were habituated to a 60 cm × 50 cm × 30 cm chamber without objects for 5 min on three consecutive mornings, with lighting as above. The spatial object test was run one week and the novel object test the following week. On the spatial object test day, mice received two 5 min training sessions, 30 min apart, followed by the test 3 h later in which one of two objects was moved. Between training and testing sessions, mice were left unperturbed in home cages. The only difference in the novel object test was that, during the test phase, one object was randomly replaced with a novel object. Scorers reviewing videos were blinded to the conditions and genotypes of mice. The percentage of time spent attending the original object relative to time spent attending the moved or novel object was determined for each mouse for both trial and test conditions.
Immunoblotting. Mice designated for protein lysate assays were decapitated, and brains were frozen on dry ice, sectioned coronally 0.3- to 0.5-mm thick on a cooling block; then, using a #11 gauge scalpel and dissecting scope, LC and EC tissue was immediately excised and homogenized on ice in TBS lysis buffer with protease inhibitor mixture (P8340, Sigma), phosphatase inhibitor (Halt, 1862495, Thermo Fisher Scientific), and 1% Triton to improve capture of all tau (Sahara et al., 2002). LC-enriched samples were taken to include much of the LC nucleus and dendritic field but likely included some of the lateral dorsal tegmentum and mesencephalic trigeminal neurons, whereas EC-enriched samples may have included some of the external capsule. Protein extracts (20 μg/sample, BCA measured) were run on SDS-PAGE and transferred to nitrocellulose membranes. Loading buffer (TBS 927-50000, Odyssey) was used to enhance the phosphorylated target protein signal. β-Mercaptoethanol and DTT were removed from loading buffer to capture oxidized tau oligomers (Kim et al., 2015). Gels were imaged and analyzed with the Odyssey CLx Imager with Odyssey Application software, version 3.0.16 (Li-Cor). Mean integrated densities for 50-80 kDa and 90-160 kDa were normalized to α-tubulin, as total tau was altered by CSS.
Unbiased optical fractionator stereology for neuronal count estimates, as previously detailed (West and Gundersen, 1990), was performed for LCn and BLA/LA nuclei. LCn were immunolabeled with anti-TH (LS-C124752). LCn and amygdala sections were mounted, dried, and counterstained with Giemsa (to identify neurons and nuclei), and sections were confirmed to span the entire rostral-caudal nucleus. A 100× oil objective was used to count neuronal nuclei in focus within the probe boundaries (DM4B, Leica Microsystems). For LCn, all neurons (TH+ and TH−) with soma > 10 μm diameter within the confines of the bilateral LC nuclei were counted using a probe size of 50 μm × 50 μm and a counting frame of 100 μm × 100 μm (StereoInvestigator version 11.09, MicroBrightField Biosystems). This strategy, using a 1:2 series of sections, provided >200 counts, allowing a Gundersen coefficient of error (for m = 1) < 0.09 in all subjects. Anatomical boundaries of the amygdala subnuclei (lateral and basal amygdala) were delineated as previously described, using fiber tract landmarks and cytologic features of neurons within each group (Chareyron et al., 2011). A 1:3 series was used.

Figure 1. Temporal overview of study design and effects of CSS on neurobehavioral performance in P301S mutant mice. A, Schematic of the CSS paradigm, where total sleep deprivation (SD, red bars) occurred at the onset of the first three lights-on (light blue) periods of the week (L1, L2, and L3) for 8 consecutive hours. Mice were returned to home cages for the last 4 h of the L1, L2, and L3 periods and the ensuing lights-off (dark blue bars, D1, D2, and D3) periods. Mice were left undisturbed in home cages for days 4-7 each week. The pattern was repeated weekly for 4 consecutive weeks. B, Following CSS and Rest control conditions, mice recovered 3 months before behavioral tests at ages 5 and 7 months and then an additional 2-3 months before undergoing the specified protein assays at ages 9-10 months. C, D, Individual and group (mean ± SE) scores for the ledge walk test, where higher scores, to a maximum of 3, indicate greater impairment (n = 11: 7 male, 4 female/group). Individual data points: blue represents male; pink represents female. E, F, Individual (same color scheme/gender) and group scores for hindlimb retraction, where scores to a maximum of 3 indicate impairment (n = 11: 7 male, 4 female/group). G, Locomotor distance (mm) per minute in novel environment (mean ± SE) for 10 min in Rest (black) and CSS (blue) mice; n = 11 (8 male, 3 female/group). H, Individual data points (blue represents male; pink represents female) for ratio of time spent in center of open field relative to edges (also n = 11: 8 male, 3 female/group). Error bars indicate mean ± SE. I, Spatial object memory response expressed as percentage preference to moved object; n = 11 (8 male, 3 female/group). Individual data points, coded as above, with paired responses for before move (trial, circle) and after move (test, square). J, Novel object memory test with paired individual data points, coded as above, for preference to original object (trial, circle) and novel object replacement (test, square). C-F, I, J, Repeated-measures ANOVA. G, Two-way ANOVA. H, t test. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
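For reference, the optical fractionator estimate described above takes the standard West-Gundersen form: the raw count is scaled by the reciprocals of the section, area, and thickness sampling fractions. A small Python sketch, with the thickness fraction invented for illustration:

```python
def fractionator_estimate(counted, ssf, asf, tsf):
    """N_hat = sum(Q-) * (1/ssf) * (1/asf) * (1/tsf)."""
    return counted * (1 / ssf) * (1 / asf) * (1 / tsf)

# e.g., a 1:2 section series (ssf = 0.5), a 50 x 50 um frame on a
# 100 x 100 um grid (asf = 0.25), and an assumed tsf of 0.8:
print(fractionator_estimate(counted=210, ssf=0.5, asf=0.25, tsf=0.8))  # -> 2100.0
```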
To identify argyrophilic pathology, including neurofibrillary tangles (NFTs) and neuropil threads, Gallyas silver impregnation was performed on mid-LC and BLA/LA (−5.40 to −5.80 and −1.34 to −2.30 bregma, respectively) 60 μm mounted sections (2 or 3/region/mouse) as previously detailed (Kuninaka et al., 2015). Timing of developer exposure was optimized to provide absence of signal in WT mice and some strong neuronal labeling in 10-month-old male P301S mice in the amygdala and/or piriform cortices. Sections were imaged using the DM5500B microscope, before and after nuclear fast red counterstain (N8002, Sigma-Aldrich), and the percentage area of tangles was measured in the above regions, as above for AT8 and MC1 analyses, using ImageJ.
Additional sections were analyzed for MC1 and GFAP responses in mice without prolonged recovery after CSS. Sections were processed as above, with the exception that secondary antibodies were conjugated with AlexaFluor probes: 488, 555, or 594 (Invitrogen) for visualization using confocal microscopy (SP5/AOBS, Leica Microsystems). Anti-TH labeling was used to highlight the LC region, and DAPI nuclear labeling was used to delineate hippocampal CA1. Confocal laser intensities, nanometer range, detector gain, exposure time, amplifier offset, and depth of the focal plane within sections per antigen target were standardized across compared sections (Panossian et al., 2011). Percentage area coverage in 2 sections of the LC nucleus (−5.40 to −5.80 bregma) and the CA1 (−1.22 to −2.54 bregma) regions was assessed for MC1 and GFAP, using 8-bit grayscale inverted montaged images across a 17 μm z-axis and standardized thresholds, with average percentage areas obtained per section/mouse and analyzed across groups.
Statistical analysis. When a single variable was compared across two groups, the Student's t test (unpaired) was implemented with Bonferroni correction for multiple comparisons; and when three groups were compared, one-way ANOVA with Bonferroni's multiple comparisons test was used. Within-animal memory testing was performed using repeated-measures two-way ANOVA with the Holm-Sidak multiple-comparison test. Repeated-measures ANOVA was also used to assess within-animal changes in neurobehavioral performance over time, using Tukey's multiple-comparison analysis for overall significant interaction(s). For comparisons of >2 groups across genotype and sleep conditions, two-way ANOVA was used with Tukey's multiple-comparison post hoc analyses. The cutoff for statistical significance for all analyses was a multiple-comparisons-corrected p < 0.05.
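As an illustration of the stated procedures (not the authors' analysis scripts), the core comparisons could be run in Python as follows; the data arrays are placeholders:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
rest, css = rng.normal(size=9), rng.normal(loc=0.5, size=9)   # placeholder samples

# Two groups: unpaired t test with Bonferroni correction across k comparisons.
t, p = stats.ttest_ind(rest, css)
k = 4
print("Bonferroni-adjusted p:", min(p * k, 1.0))

# More than two groups: one-way ANOVA, then Tukey's multiple-comparison test.
groups = {"Rest": rest, "CSS": css, "CFS": rng.normal(loc=0.2, size=9)}
f, p = stats.f_oneway(*groups.values())
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```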
Hastened neurobehavioral deterioration in P301S mice following CSS
We first determined whether early-life CSS would influence the progression of known neurobehavioral impairments in P301S mice by assessing motor behavior and spatial memory (Yoshiyama et al., 2007; Xu et al., 2014). Motor performance was examined at ages 5 and 7 months (1 and 3 months after CSS or Rest conditions), using a ledge walk test and the hindlimb retraction test, both analyzed with n = 11 (7 male, 4 female)/group. The former assesses agility of movement, and the latter is one of the earliest motor deficits observed in P301S mice (Yoshiyama et al., 2007). Overall, there were both age and sleep condition effects on ledge walking ability. Individual and group data are presented in Figure 1C, D. In Rest mice, ledge scores were unchanged from 5 to 7 months of age (t = 1.5, not significant), whereas in CSS mice, ledge scores deteriorated from 5 to 7 months (t = 4.5, p < 0.001). Although there was no effect of sleep condition on ledge test performance at age 5 months (t = 1.6, not significant), CSS mice at age 7 months showed poorer performance relative to age-matched Rest mice (t = 8.4, p < 0.0001). Similarly, there were both age- and sleep-dependent effects on limb retraction, as shown in Figure 1E, F. In Rest mice, hindlimb retraction was unchanged from ages 5 to 7 months (t = 0.6, not significant) yet worsened in the CSS group (t = 4.9, p < 0.001); and as observed with the ledge walk, CSS mice at 7 months evidenced poorer performance than Rest mice at 7 months (t = 3.9, p < 0.001). A second group of mice was used for locomotor, open field, and memory testing with n = 11 (8 male, 3 female/group). CSS-exposed mice showed increased locomotor activity for the first 2 min of the assay (t = 2.9 and t = 2.9, p < 0.05; Fig. 1G). CSS mice also showed increased relative time in the center:edges in the open field assay (t = 3.4, p < 0.01; Fig. 1H). With spatial memory testing, also using n = 11 (8 male, 3 female/group), neither Rest nor CSS mice showed increased place preference for the recently moved object, relative to the unmoved object (t = 0.4 and t = 0.2, not significant, respectively; Fig. 1I). In contrast, only Rest mice showed place preference for a novel object (Rest, t = 6.7, p < 0.0001; CSS, t = 2.3, not significant; Fig. 1J), and CSS mice showed less preference for the novel object relative to Rest mice (t = 5.1, p < 0.0001). By 8 months, 2 of 11 CSS mice evidenced hunched spines, poor ambulation, and weight loss, whereas no mice within the Rest group evidenced severe motor deficits before 9 months of age. Overall, CSS accelerates deterioration in neurobehavioral performance in P301S mice, whereas short-term spatial memory is already impaired early in the course of disease.
CSS results in a sustained increase in soluble tau, including oligomers
The LC is one of the first sites with hyperphosphorylated tau, and the EC may be an early site of tau seeding (Braak et al., 2011; Kaufman et al., 2018). Soluble tau oligomers are implicated in both behavioral impairments and neurodegeneration (Santacruz et al., 2005). Nonreducing conditions have unveiled the presence of disulfide oligomeric tau, otherwise obscured in standard reducing gels (Fá et al., 2016). In preliminary studies, we compared reducing and nonreducing conditions for LC MC1 tau and found a robust shift to larger bands, 90-160 kDa, under nonreducing conditions, bands that were undetectable under reducing conditions (Fig. 2A). The presence of oligomeric tau in nonreducing conditions was confirmed using an oligomer-specific tau antibody (clone TOMA-1), which also showed a prominent band near 160 kDa, without a signal in the reduced buffer (Fig. 2A). Thereafter, sustained effects of CSS on soluble tau in the LC and EC using nonreducing lysates were examined for monomeric (50-80 kDa) and oligomeric tau (90-160 kDa) in Rest and CSS mice, with n = 9 (5 male, 4 female)/group for LC and n = 14 (9 male, 5 female)/group for EC, where protein was more abundant. For all tau antibodies assessed, lysates from Tau−/− mice showed negligible immunoreactivity between 50 and 160 kDa. Representative Rest, CSS, and Tau−/− images are provided in Figure 2B, C. P202 tau (50-80 kDa) was higher in mice exposed to CSS, relative to age-matched Rest mice, in the LC (Fig. 2D; t = 2.3, p < 0.05) and in the EC (Fig. 2H; t = 3.2, p < 0.05). In contrast, AT180 tau (50-80 kDa) was unchanged in the LC (Fig. 2E; t = 0.6, not significant) yet increased in the EC (Fig. 2I; t = 2.6, p < 0.05). CSS increased monomeric LC MC1 tau (Fig. 2F; t = 3.8, p < 0.01) without affecting monomeric EC MC1 tau (Fig. 2J; t = 0.5, not significant). CSS did not influence monomeric Tau5 in either the LC (Fig. 2G; t = 0.2, not significant) or the EC (Fig. 2K; t = 0.4, not significant). We next examined the CSS response of tau oligomers, specifically the 90-160 kDa bands. Oligomer bands were not detected for P202 in the LC (Fig. 2B, L). P202 oligomeric band density was evident in the EC and increased in response to CSS (Fig. 2P; t = 4.2, p < 0.001). CSS increased AT180 oligomers in the LC (Fig. 2M; t = 2.5, p < 0.05) and in the EC (Fig. 2Q; t = 2.8, p < 0.05). Similarly, CSS increased MC1 oligomers in both the LC (Fig. 2N; t = 2.3, p < 0.05) and the EC (Fig. 2R; t = 4.7, p < 0.001). Tau5 oligomers were also increased in both the LC (Fig. 2O; t = 2.8, p < 0.05) and the EC (Fig. 2S; t = 2.4, p < 0.05). Collectively, these findings demonstrate that CSS induces sustained increases in phosphorylated and MC1 tau, including soluble tau oligomers, in two regions with established heightened vulnerability in tauopathies, the LC and EC, and overall effect sizes on AT180, MC1, and Tau5 soluble oligomers are comparable for the LC and EC.

Figure 2. A, Reducing and nonreducing (NRed) gel comparison for LC MC1 tau and the oligomer-specific TOMA-1. B, Representative NRed gels from LC in Rest, CSS, and Tau−/− mice for P202, AT180, MC1, and total tau (Tau5). C, Representative NRed gels from EC in Rest, CSS, and Tau−/− mice for the same antibodies. D-G, Individual normalized immunodensities at 50-80 kDa (monomeric) for LC lysates to P202, AT180, MC1, and Tau5. Individual data points: blue represents male; pink represents female; n = 9 (5 male, 4 female) for the LC samples. Error bars indicate mean ± SE. H-K, Individual normalized immunodensities at 50-80 kDa for EC for the same antibodies and conditions, where n = 14 (9 male, 5 female) for the EC samples. L-O, Individual normalized immunodensities at 90-160 kDa for LC lysates to the same tau antibodies/conditions from the same gels analyzed for monomeric LC. P-S, Individual normalized immunodensities at 90-160 kDa for EC lysates and antibodies, analyzed on gels used for monomeric analysis. Data were analyzed with unpaired t tests. *p < 0.05, **p < 0.01, ***p < 0.001.
Sustained increase in tau pathology following CSS
To determine whether CSS influences neuronal tau, we examined AT8 (pSer202 and pThr205 tau) and MC1 tau immunohistochemistry within the LC and several forebrain regions particularly sensitive to tau pathology: the EC, basolateral amygdala, and HC (Braak et al., 2011). To relate to the soluble tau effects of CSS, mice were examined 6 months after the completion of CSS to identify lasting effects, in n = 13 (8 male and 5 female)/group. P301S mice exposed to CSS had markedly greater AT8 within LCn than Rest P301S mice (Fig. 3A; t = 7.0, p < 0.0001). A similar response was observed for MC1 immunoreactivity in the LC in response to CSS (Fig. 3B; t = 4.9, p < 0.0001). Representative images in male mice are shown in Figure 3C. A similar response pattern was observed with increased AT8 in the EC, relative to Rest mice (t = 4.1, p < 0.001), and MC1 was increased in CSS-exposed P301S mice (t = 4.8, p < 0.0001). Thus, consistent with the soluble tau findings, CSS results in sustained hyperphosphorylated tau and MC1 pathology in both the LC and EC.

Figure 3. Lasting increases in AT8 and MC1 tau pathology within the LC and EC in response to CSS. A, Individual percentage area with dense AT8 immunoreactivity within the LC nucleus in Rest and CSS-exposed mice (n = 13: 8 male, 5 female/group). Individual data points: blue represents male; pink represents female. Black lines indicate mean ± SE. B, Individual data points for percentage area with dense MC1-tau labeling within the LC nucleus for Rest and CSS-exposed mice (n = 13: 8 male, 5 female/group), labeled as in A. C, Representative images of AT8 (top) and MC1 (bottom) immunoreactivity labeled with alkaline phosphatase (AP; navy blue) in 60 μm coronal sections from the LC of male mice exposed to Rest (left) and CSS conditions (right). D, E, Individual percentage area of AT8 (D) and MC1 (E) labeling in the EC in Rest and CSS-exposed mice (n = 8 male, 5 female/group). Data points are color-coded for sex as above. Error bars indicate mean ± SE. F, Images in male mice for EC AT8 (top) and MC1 (bottom) panels in Rest (left) and CSS (right) mice. Data were analyzed with unpaired t tests. ***p < 0.001, ****p < 0.0001. Scale bar, 200 μm.
Findings were next extended to the BLA/LA and HC in the same groups of mice. CSS resulted in an increase in AT8 percent area within the amygdala (Fig. 4A; t = 3.4, p < 0.01) and in the HC (Fig. 4D; t = 3.1, p < 0.01). Examples of the tau immunohistochemistry are shown in Figure 4C, F. CSS also resulted in a sustained increase in MC1 in the amygdala (Fig. 4B; t = 4.5, p < 0.001) and the HC (Fig. 4E; t = 7.1, p < 0.0001). To gain insight into whether CSS influences the formation of NFTs, we next examined Gallyas silver staining within the LC and BLA/LA (amygdala) regions (n = 6: 3 male, 3 female/group). Despite the robust increase in AT8 and MC1 within LCn somata, only rare LCn were labeled with silver, and there was no effect of CSS on LCn Gallyas impregnation NFT-like inclusions (t = 0.0, not significant; Fig. 5A, B, E). In contrast, CSS significantly increased NFT-like inclusions in the amygdala (t = 4.3, p < 0.01; Fig. 5C-E). Thus, CSS effects on AT8/MC1 tau immunoreactivity do not predict silver impregnation responses to CSS. In light of the above observed effects of CSS on both soluble tau oligomers and tau pathology in the LC in P301S mice, we examined whether the P301S mutation increases susceptibility to CSS LCn degeneration by examining stereological counts in age-matched WT and P301S mutants exposed to CSS or Rest control conditions. Neurons within the confines of the LC nucleus were counted as TH+ or TH− and analyzed collectively with n = 8 (5 male, 3 female/group). Consistent with previous reports, CSS induced loss of LCn in WT mice (Tukey q = 8.0, p < 0.0001; Fig. 6A). Representative images are presented in Figure 6B. In Rest P301S mice, LCn counts were not significantly different from Rest WT (q = 2.9, not significant). A large reduction was observed, however, for P301S mice exposed to CSS (q = 11.0, p < 0.0001), so that LCn counts in P301S mice exposed to CSS were approximately 30% lower than counts in CSS-exposed WT mice (q = 4.6, p < 0.05), demonstrating that CSS can further LCn degeneration in P301S mutant tau mice. Unable to obtain reliable (reproducible) counts of the densely packed EC neurons in layers II/III in 60 μm tissue sections, we examined the BLA/LA, where neurons were easy to isolate for counting in 60 μm sections and where boundaries for the BLA/LA are readily defined by surrounding white matter tracts and distinct neuronal morphologies (Chareyron et al., 2011). Sample sizes were n = 7 (4 male, 3 female/group). There was no genotype effect under Rest conditions on BLA/LA neurons (q = 3.2, not significant; Fig. 6C). Representative images of the BLA/LA are shown in Figure 6D. CSS resulted in a significant reduction in BLA/LA neurons in WT mice (q = 7.2, p < 0.001) and in P301S mice (q = 9.9, p < 0.0001), so that the BLA/LA neuron estimates in P301S mice following CSS were significantly lower than in WT mice after CSS (q = 5.9, p < 0.01). Overall, neuron loss in both the LC and BLA/LA occurs in response to CSS, with both LCn counts and BLA/LA counts lower in P301S mice exposed to CSS than in WT mice exposed to CSS.
CFS also hastens neurobehavioral decline and tau pathologic changes
Because the CSS paradigm increases spontaneous exploratory behavior in a novel environment, which could confound effects of sleep disruption, we also explored the effects of CFS on MC1 tau immunohistochemistry and motor performance in n = 11 mice (8 male, 3 female/group). Overall, CFS increased LC and EC MC1, as illustrated in Figure 7A-C. Relative to Rest mice, MC1 percent area increased in the LC (t = 3.0, p < 0.01; Fig. 7B) and also in the EC (n = 11: 8 male, 3 female/group; Fig. 7C; t = 3.5, p < 0.05). We then mapped densely labeled MC1 somata across the brain and found that the density of MC1-labeled neurons in CFS-exposed mice was greater overall within areas also affected in Rest mice, as summarized in the brain maps of MC1 neurons (Fig. 7D). With the ledge walk test, there was no progression observed in the Rest group (t = 2.4, not significant), whereas CFS-exposed mice deteriorated in performance from 5 to 7 months (t = 3.8, p < 0.01). There were also differences in ledge walk scores between the Rest and CFS mice at 5 months (t = 3.3, p < 0.01) and at 7 months (t = 4.2, p < 0.001). Similarly, there were CFS effects on hindlimb retraction scores at 5 months and 7 months (t = 2.7, p < 0.05 and t = 3.6, p < 0.01, respectively), with no progression in Rest mice (t = 2.2, not significant), yet a progression in CFS-exposed mice (t = 3.8, p < 0.01).
Sleep disruption activates glia within regions of tau pathology in P301S mice
Astrocytes and/or microglia are implicated in synapse loss, tau propagation, and neurodegeneration in tauopathies (Asai et al., 2015; Hong et al., 2016; Liddelow et al., 2017). As indices of microglial and astrocyte activation, we examined the percentage area coverage for astrocyte-specific (GFAP) and microglial-specific (Iba-1) and CD68 immunoreactivity within the HC, as a representative tau-susceptible region. Having identified increased tau pathology in the HC in both CSS and CFS mice, we examined both forms of sleep disruption here, matching sexes across sleep conditions with n = 11 (8 male, 3 female/group). GFAP percentage coverage of CA1 was increased in CFS relative to Rest (q = 5.3, p < 0.001) and in CSS relative to Rest (q = 9.6, p < 0.0001). The GFAP signal was higher in CSS than in CFS (q = 4.3, p < 0.01, as summarized in Fig. 8B). Overall results were similar for CD68, where CD68 increased in CFS relative to Rest (q = 4.3, p < 0.01) and was further increased in CSS relative to Rest (q = 10.5, p < 0.0001), so that CD68 in CSS was higher than in CFS (q = 6.2, p < 0.0001). There was no significant increase in Iba-1 for CFS relative to Rest (q = 2.6, not significant), although there was a large increase in Iba-1 percent area in CSS mice, relative to Rest (q = 10.0, p < 0.0001) and relative to CFS (q = 7.4, p < 0.0001). Collectively, the data show that both astrocyte and microglial reactivity are evident in a sustained fashion following early-life sleep disruption.
CSS increases MC1 tau in the LC acutely, whereas effects of CSS on cortical tau are evident only over time
A second set of mice was randomized to CSS or Rest conditions for 4 weeks and then examined for tau and glial responses immediately following CSS within the LC and HC (n = 5 males/group). Representative images of MC1 in the two regions are presented in Figure 9A-D. CSS increased MC1 coverage within the LC (q = 9.8, p < 0.0001) without increasing MC1 immunoreactive area in the HC (q = 1.8, not significant), as summarized in Figure 9E. Within the same sections, examined as a triple label, CSS resulted in an immediate increase in GFAP coverage within the LC (q = 9.6, p < 0.0001) and increased GFAP coverage in the HC (q = 2.5, p < 0.05; Fig. 9J). Representative images of the GFAP response in the same sections imaged for the MC1 response (Fig. 9A-D) are presented in Figure 9F-I. In summary, P301S mice evidence an early MC1 tau pathology response in the LC, whereas CSS effects in the HC develop only after time, and astrocyte reactivity early on appears more pronounced in the LC than in the HC.

[Displaced figure legend, apparently Fig. 4 (tau pathology 6 months after CSS; n = 13: 8 male, 5 female): A, B, amygdala percent area with dense AT8 (A) and MC1 (B) labeling; C, representative AT8/MC1 IHC at bregma −1.70 in male Rest and CSS mice (LA, BLA, amygdala cortex (AmC), piriform cortex (Pir)); D, E, HC percent area with dense AT8 (D) and MC1 (E) labeling; F, representative AT8 (bregma −1.70) and MC1 (bregma −2.06) images (radiatum layer (Rad), lacunosum moleculare (Mol)). **p < 0.01, ***p < 0.001, ****p < 0.0001; scale bars: C, F, 500 μm.]
Discussion
Sleep loss increases brain amyloid-β (Aβ) levels and Aβ amyloid plaque in transgenic mouse models of AD (Kang et al., 2009; Xie et al., 2013). These sleep loss effects on Aβ and amyloid are believed to be exclusively extracellular and limited to the forebrain. In AD, however, tau is implicated in amyloid-induced neural injury (Roberson et al., 2007; Ittner et al., 2010). Yet effects of sleep loss on tau and the progression of tauopathy have been largely unexplored. The present studies examined the effects of chronic sleep disruption on tau protein biochemistry, neuroanatomy, and behavior in a murine model overexpressing human P301S mutant tau. We found that early life CSS advanced the temporal progression of tauopathy, manifesting as a worsening of neurobehavioral impairment and sustained increases in soluble tau oligomers, AT8 and MC1 tau pathology within the LC, HC, EC, and other regions susceptible to tau accumulation, and greater NFT in the amygdala. Moreover, CSS furthered neurodegeneration of LC and amygdala neurons and activated glia in tau-affected regions, with all of these effects evident months after CSS. A second form of sleep disruption, CFS, also advanced neurobehavioral impairment and increased tau pathology. Collectively, the findings identify chronic early life sleep disruption as an important modifier of P301S tauopathy and demonstrate that chronic sleep disruption also has important effects on intraneuronal tau protein processing, in addition to the previously described sleep loss effects on extracellular Aβ and amyloid.
How would early life sleep disruption induce sustained advancement of tauopathy? We propose several possible mechanisms that may act independently or synergistically to advance the progression of tauopathy. Both paradigms of sleep disruption can result in loss of sirtuin Type 1 (SirT1) in select neurons, including LCn (Zhu et al., 2015, 2016). Deficiency of SirT1 was recently shown to worsen neurobehavioral impairments in P301S mice (Min et al., 2018). Thus, a lasting reduction in SirT1 may contribute to the persistent pathologic tau in the present study. Additionally, CSS induced immediate and sustained misfolding of tau, as evidenced by increased MC1 tau pathology within the LC and increased soluble phosphorylated and misfolded (MC1) tau in the EC. The LC is implicated as an early site for misfolded tau in tauopathy and a site from which pathologic tau can propagate (Braak et al., 2011; Iba et al., 2015). There is, however, a recent report suggesting that the EC, but not the LC, is involved early in tau seeding (Kaufman et al., 2018). As CSS increased EC tau in the present study, it is also possible that sleep loss increases tau seeding directly from the EC. In addition, CSS induced significant degeneration of LCn, particularly in P301S mutants, and lesioning of the LC can have lasting effects on the progression of tauopathy. Specifically, early life lesioning of LC neurons (at the same age as our CSS exposure) in P301S mice results in more profound neurobehavioral deficits, in particular memory, and increased gliosis, without increasing tauopathy (Chalermpalanupap et al., 2018). Consistent with this finding, we observed increased impairment in memory and a striking gliosis in response to CSS. Interestingly, LC lesioning does not appear to influence tau pathology, whereas early life sleep disruption does. This may have to do with differences in antibodies used to assess tau pathology, where the only tau species examined after LC lesioning was phosphorylated tau at Ser 306 and 404 (Chalermpalanupap et al., 2018). Alternatively, the composite findings we observe with CSS may include LCn injury and may also include the metabolic resetting as mentioned above. Finally, we note a persistent microglial activation in tau-affected regions. Microglial activation in murine tauopathy can worsen tau spreading and pathology (Maphis et al., 2015). In summary, any one or more of these possibilities may contribute to the persistent progression of tauopathy after CSS and CFS.

[Displaced figure legend, apparently Fig. 7 (CFS effects on MC1 tau and motor performance; n = 11: 8 male, 3 female/group): A, medial parabrachialis nucleus (MPB) highlighted to the right of the LC nucleus; B, C, individual MC1 percent area for the LC nucleus (B) and EC (C) (blue, male; pink, female); D, composite brain maps of densely MC1-labeled neurons in 3 mice 6 months after Rest or CFS, with individual responses marked as red, green, or blue dots; E, F, group ledge walk (E) and hindlimb retraction (F) scores at 5 months (closed circles) and 7 months (closed squares), analyzed with two-way ANOVA and Sidak's post hoc tests. *p < 0.05, **p < 0.01, ***p < 0.001; scale bar, 50 μm.]
CSS increased levels of specific post-translational modifications of tau in P301S mice, which are known to influence the severity of tauopathy in murine models and in humans. Specifically, CSS increased tau phosphorylated at threonine 231 (P231) and MC1 tau in both the LC and EC. In AD, increased P231 tau levels in the CSF predict lower hippocampal volumes and greater declines in volume over time (Hampel et al., 2005), and P231 tau levels predict cognitive decline in individuals with mild cognitive impairment (Buerger et al., 2002). P231 tau disrupts tubulin intermolecular binding in axons, which may contribute to neuronal injury and demise (Moszczynski et al., 2015; Schwalbe et al., 2015). Additionally, P231 tau is critical for the formation of tau fibrils (Moszczynski et al., 2015). Increases in MC1 tau were also evident following CSS. The MC1 antibody detects a pathological conformational change in tau protein, one that is identified early on in AD and is not detected in normal brains (Weaver et al., 2000). Importantly, its presence also positively correlates with the severity and progression of AD (Mead et al., 2016). Remarkably, CSS resulted in sustained increases in pathogenic tau oligomers, evident at least 6 months after cessation of CSS. This suggests that CSS induces lasting changes in tau species that are implicated in cognitive decline and neural injury in AD, where one of the CSS-induced tau modifications is reversible oxidative oligomerization.

[Displaced figure legend, apparently Fig. 9 (immediate response to CSS varies for LC and HC): A, B, confocal images of MC1 (green) with TH-labeled (red) LCn in Rest-exposed (A) and CSS-exposed (B) mice immediately after exposures (regional reference: LC dendritic field, LC DF); C, D, MC1 (green) in CA1 HC with DAPI labeling (red) of nuclei (radiatum layer (Rad), lacunosum moleculare (Mol)); E, individual MC1 percent area (mean ± SE) in LC and CA1; F-I, GFAP (blue) from the same triple-labeled sections, revealing a striking GFAP increase over the LC DF with CSS; J, individual GFAP percentage area (mean ± SE) in LC and CA1. *p < 0.05, ****p < 0.0001; scale bars: A-D, F-I, 50 μm.]
P301S mice showed heightened vulnerability to CSS-induced neurodegeneration in both regions examined: the LC and BLA/LA. In the LC in P301S mice, CSS neuron loss does not appear to require NFT, as the LC in CSS-exposed mice exhibited only rare tangles. LCn loss was associated with an increase in LC-soluble tau oligomers and glial activation. Our findings are in keeping with the previously described temporal progression of tauopathy in P301S mice, where neurofibrillary formations occur well after synaptic loss and gliosis (Yoshiyama et al., 2007). There is recent in vivo evidence that oligomers, rather than NFT, parallel neuron loss in tauopathy. Specifically, reducing the RNA binding protein TIA1 in P301S mice reduces soluble oligomers and in parallel limits neuronal loss, yet this intervention increased NFT (Apicco et al., 2018). A next important step will be to test the role of disulfide oligomerization in CSS-induced neuron loss in both WT and P301S mice.
Both CSS and CFS increased the severity of motor deficits, the density of tau pathology, and the intensity of both astrocyte and microglial activation, supporting the concept that chronic disruption of sleep, rather than repeated exposure to a novel environment and/or increased ambulation as in CSS, advances tauopathy in the P301S mice. We cannot compare magnitudes of responses, as there is no way to equate severity of sleep disruption across the two models. There were, however, relative differences in the overall response patterns that may provide insight into how the type of sleep disruption may influence the phenotype of tauopathy. Interestingly, the motor deficits started sooner following CFS, whereas EC tau pathology and glial activation appeared more pronounced in response to CSS. We propose that, although chronic sleep disruption (both CSS and CFS) can advance tauopathy, there may be unique metabolic responses to the different forms of sleep disruption (with different patterns of wake durations) which influence the behavioral and pathology outcomes in tauopathy, and these unique responses may contribute to heterogeneity of pathology and neurobehavioral signs across individuals with a given tauopathy. Alternatively, differences in severity of sleep disruption may influence response patterns.
Both CFS and CSS resulted in sustained glial activation. CSS increased microglial activation as evidenced by increased CD68 and Iba-1 percent area, whereas CFS increased only CD68 and not Iba-1. While both Iba-1 and CD68 increases are used to characterize microglial activation, in AD the Iba-1 response can be less robust than the CD68 response (Hendrickx et al., 2017). Thus, both forms of sleep disruption activate microglia. Microglial activation is an early sign in tauopathy, including in the P301S mice, where microglial activation precedes tangle formation (Yoshiyama et al., 2007), but this response may also contribute to disease. Specifically, microglia can propagate tau, whereas depletion of microglia can markedly reduce propagation of tau pathology in a mouse model (Asai et al., 2015). Astrocyte activation was more prominent in CSS than in CFS, at least in the HC, where the two were compared. It is of interest that brain regions with increased MC1 deposition were also regions with increased astrocyte activation, suggesting that these two responses are tightly linked across Rest and CSS conditions. Collectively, the present findings demonstrate that chronic sleep disruption in early adult life advances the temporal progression of tauopathy. Insufficient sleep in adolescents is increasingly common in modern societies (Keyes et al., 2015). CFS is observed in all cases of obstructive sleep apnea, a disorder with increasing prevalence in adolescents and young adults (Peppard et al., 2013; Güngör, 2014). The above findings suggest that these common forms of chronic sleep disruption may prove to be important and modifiable factors in the progression of tauopathies, including AD. | 2018-11-01T18:46:31.932Z | 2018-10-15T00:00:00.000 | {
"year": 2018,
"sha1": "ab0731c64a1562af1445ef3d35e2911959d93f93",
"oa_license": "CCBY",
"oa_url": "https://www.jneurosci.org/content/jneuro/38/48/10255.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "9307b9deaf2ee9a3184eba5e6fee6eaa9fd9ab53",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211549669 | pes2o/s2orc | v3-fos-license | Impact of Environmental Investment on Financial Performance: Evidence from Chinese listed Companies
Using the data of Chinese listed companies during 2012-2016, this study examines the effect of environmental investment on financial performance, as measured by return on assets (ROA). We also examine the moderating effect of industry attributes, company ownership, and region on this relationship. The empirical results show that there exists a U-shaped relationship between environmental investment and financial performance. However, only 11% of Chinese listed companies attain profitable environmental investment. In addition, the impact of environmental investment on financial performance in state-owned enterprises (SOEs) is higher than that in private-owned enterprises (POEs), and a company's environmental investment in China's eastern regions can do more to promote financial performance. The findings of this study can help managers reasonably manage the tension between environmental investment on behalf of stakeholders and the pursuit of profitability.
pollution reached about 9000 billion yuan annually (Fig. 1). Lin et al. [14] and Li et al. [15] also confirmed that environmental protection investment has a positive effect on China's GDP growth.
Environmental issues cannot be ignored by any responsible company. Companies, as the main actors in environmental protection, currently face growing societal demand for it. A sound environment is a key requirement for any company to achieve long-term success [16,17]. The solution to environmental problems depends largely on environmental investment. Guided by Lundgren and Zhou [18], we define environmental investment as a company's efforts to reduce its environmental impact. This study focuses on Chinese listed companies because China highly values ecological and environmental protection. Environmental investment of Chinese listed companies aims at conserving resources and reducing the environmental burden.
Understanding the relationship between environmental investment and financial performance is of increasing interest to both stakeholders and regulators. If this relationship is positive, it will encourage companies to improve environmental performance without the need for environmental regulation. Thus, the purpose of this study is to investigate whether environmental investment of Chinese listed companies has influenced their financial performance. Also, we examine the moderating effect of industry attributes, company ownership, and region on this relationship. This paper contributes to the extant literature as follows. First, most studies have focused on developed countries such as the USA, Japan and France, while little has been done in developing countries such as China. Our study aims to expand the literature on environmental investment and financial performance. In addition, a majority of studies do not consider the moderating effect of industry attributes, company ownership and region on this relationship. Second, our study contributes to the area of environmental studies using firm-level data. Previous research, such as Klassen and McLaughlin [19], King and Lenox [20], Al-Tuwaijri et al. [21], and Tamazian et al. [22], has focused on the industrial level. Finally, this study can enable corporate management to make reasonable investment decisions on ecological protection by understanding this relationship. It also will help regulators concentrate their monitoring efforts on firms with a weak correlation between environmental and financial performance.
Literature Review and Hypotheses Development
Evidence on the relationship between environmental investment and financial performance remains inconsistent in academia.
The traditional theory states that environmental investment is a costly burden for firms, which is likely to reduce their profitability [23]. Because environmental protection requires additional investments in a nonproductive sector that are not directly related to financial performance, traditionalists suggest a trade-off relationship between them. For example, if a company wants to increase the production output for profit, the production increase is often related to ecological problems. In this situation, the firm faces a critical tradeoff: incurring the cost of investment in environmental protection vs. benefiting from environmental investment [24]. Orsato [25] pointed out that corporate environmental protection requires a large amount of money to purchase environmental equipment and develop new environmental technology, which increases operating costs. Taking U.S. electric utility firms as the research sample, Sueyoshi and Goto [26] confirmed that environmental protection expenditure under the U.S. Clean Air Act decreases firms' financial performance, measured by return on assets (ROA). Horváthová [3] reported a negative link between environmental and financial performance. Wang and Zhang [10], based on the data of China's manufacturing listed companies during 2009-2013, found that corporate environmental investment is negatively correlated with financial performance (measured through ROA). Based on the data of 79 companies in the heavy chemical industry, Huang [27] found that the investment in environmental protection has a negative impact on short-term financial performance. Recently, Lu and Taylor [28] used the data from Newsweek magazine's green rankings to measure the association between environmental performance, environmental disclosure, and financial performance. Their results showed a negative relationship between environmental performance and financial performance.
Some scholars who challenge this trade-off relationship have pointed out that strengthening environmental investment is positively related to financial performance. On the one hand, this is because environmental expenditure can be considered an investment in innovative new technology that reduces pollution abatement costs, thus improving firm performance [29][30][31]. On the other hand, from the income perspective, McGuire et al. [32] have suggested that environmental management behavior can establish a good corporate image and gain consumer loyalty, which indirectly improves company revenue. An early study conducted by Bragdon and Marlin [33] found that a firm's profitability (earnings per share and return on equity) increases when environmental performance improves. Klassen and McLaughlin [19] discussed that environmental management has a positive linkage to firm performance by improving operating income and reducing product costs. Using a sample of S&P 500 companies, Waddock and Graves [34] found a positive relationship between corporate environmental management and financial performance (measured through ROA). Esty and Porter [35] concluded that industrial ecology can help managers find inside and outside opportunities to add value to their products or cut their costs. Based on the data of 652 U.S. manufacturing firms, King and Lenox [20] found a linkage between lower pollution and a higher Tobin's q ratio. Al-Tuwaijri et al. [21] also argued that firms could make a win-win situation with environmental protection. Ambec and Lanoie [2] listed seven channels through which environmental practices may increase a firm's revenue or reduce its costs: (1) better access to certain markets; (2) differentiating products; (3) selling pollution-control technology; (4) risk management and relations with external stakeholders; (5) cost of material, energy, and services; (6) cost of capital; and (7) cost of labor. Bagur-Femenías et al. [36] pointed out that the adoption of environmental management practices directly impacts the economic results of small service businesses. Analyzing the data of Chinese listed companies, Song and Zhao [30] concluded that environmental management behaviors can improve corporate value. More recently, Peng and Yue [37] found a positive correlation between corporate environmental investment and financial performance of companies in the papermaking and printing industry. Nishitani et al.'s [38] findings showed that Indonesian firms that reduce greenhouse gas emissions are more likely to enhance profit further. Masocha [39] also found that environmental sustainability practices can contribute to ecological and social performance.
Interestingly, some studies have suggested a nonlinear correlation between them. For example, based on 17 firms in the paper and pulp industry, Bowman and Haire [40] found that middle performers with regard to pollution control have a higher return on equity than low or high performers. Fujii et al. [41], based on the data of Japanese manufacturing firms, demonstrated an inverted U-shaped relationship between ROA and environmental performance calculated by aggregated toxic risk. Pekovic et al. [8] also used the data of more than 6,000 French firms over a 5-year period and found that the impact of environmental investment on economic performance, measured by net profits, follows an almost inverted U-shaped curve. However, using China's A-share listed companies as the sample, Gao [42] found that environmental protection investment has a U-shaped relationship with enterprise market value (measured through ROA and market-to-book ratio). Therefore, we propose the following hypothesis: Hypothesis 1 (H1). Environmental investment has a non-linear relationship with financial performance.
Companies' investment decision-making is inevitably influenced by industry environment and industry attributes [43]. Different industries with different market environments and government regulation intensities usually lead to differences in market competition and firm performance. Heavy-polluting industries face more stringent environmental regulation and bear more social and environmental responsibility than other industries, which drives them to invest more in the purchase of environmental protection facilities, the improvement of environmental protection technology, and the treatment of pollutant emissions [44,45]. The findings of Tang et al. [46] showed that heavy-polluting companies invest more in environmental protection than non-heavy-polluting ones. That is, compared with non-heavy-polluting companies, heavy-polluting companies are more sensitive to environmental expenditure. Therefore, we propose the following hypothesis: Hypothesis 2 (H2). Industry attribute has a moderating effect on the non-linear relationship between environmental investment and financial performance.
Compared with private-owned enterprises (POEs), state-owned enterprises (SOEs) are under greater pressure from the government and the public, and are more likely to implement proactive environmental strategies, which in turn has an important impact on a company's environmental behaviors. For example, Montabon et al. [47] found that environmental performance can lead to good financial performance when the enterprises implement proactive environmental management. Therefore, SOEs should have better environmental performance than POEs, and have a higher level of environmental information disclosure. On the other hand, the influence of company ownership on financial performance has always been a hot issue in the field of management. Scholars [48][49][50] have argued that SOEs have strong ties with the government, and most SOE executives are directly appointed by the government. Under transitional economy conditions, it is easier for SOEs to obtain heterogeneous resources through such a close relationship, which can promote the financial performance of SOEs. However, POEs need to pay a lot through rent-seeking behaviors to establish political connections, which obviously affects their financial performance. Thus, SOEs are more likely to have better financial performance than POEs. However, Deng and Zeng [51] found that the operational efficiency of SOEs is lower than that of POEs. Therefore, we propose the following hypothesis: Hypothesis 3 (H3). Company ownership has a moderating effect on the non-linear relationship between environmental investment and financial performance.
The government implements different environmental protection policies for companies in different regions. For companies in developed regions, the government focuses on corporate environmental protection behaviors rather than the improvement of their economic performance. Meanwhile, the assessment of environmental performance has become an important performance standard for Chinese listed companies. Conversely, for companies in less developed regions, the responsibility of the government is to promote local economic development and solve people's living problems. Under such circumstances, companies generally lack environmental awareness, which tends to result in a poor environment and heavy pollution.
On the other hand, companies in developed regions usually enjoy faster growth and stronger capabilities of technological innovation. They often invest more in environmental protection, respond to government environmental policies, and reduce the waste of resources. In less developed regions, companies may not be able to quickly acquire advanced production technologies. In addition, managers of companies in less developed regions may lack the understanding of environmental issues and need to pay a large amount of money for environmental protection, which affects the financial performance of these companies.
Li [52], based on the data of environmental performance of listed companies in China's extractive industries, found that environmental performance of companies in first- and second-tier cities is greater than that of companies in third- and fourth-tier cities. Yang and Wang [53] also found that the positive impact of environmental performance on financial performance of companies in China's eastern regions is greater. Zhang [54] pointed out that the impact of corporate environmental performance on financial performance is more pronounced in developed areas than in developing areas. Thus, hypothesis 4 is proposed as follows: Hypothesis 4 (H4). Region has a moderating effect on the non-linear relationship between environmental investment and financial performance.
Sample Selection
We selected all companies listed on the Shanghai and Shenzhen Stock Exchanges during 2012-2016. We deleted companies that do not disclose environmental expenditure in their financial statements, companies with a debt ratio greater than 1, companies with missing information, and special treatment (ST) companies. After winsorizing the variables at the 1st and 99th percentiles, our final sample consisted of 455 firm-year observations for 212 companies. Environmental expenditure information is derived from the Rankins CSR Ratings (RKS) database, and other data are sourced from the China Stock Market and Accounting Research (CSMAR) database. The regressions are carried out using SPSS version 20.
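For illustration, the winsorization step can be expressed in a few lines of pandas. This is a minimal sketch, not the authors' SPSS workflow; the input file name and column list are hypothetical.

```python
import pandas as pd

def winsorize_1_99(s: pd.Series) -> pd.Series:
    """Cap extreme values at the 1st and 99th percentiles."""
    lo, hi = s.quantile(0.01), s.quantile(0.99)
    return s.clip(lower=lo, upper=hi)

df = pd.read_csv("firm_year_panel.csv")  # hypothetical firm-year panel
for col in ["ROA", "EI", "SIZE", "LEV", "RD", "GROWTH", "CAPITAL"]:
    df[col] = winsorize_1_99(df[col])
```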
(1) Dependent variable. Financial performance is measured by return on assets (ROA).

(2) Independent variable. Guided by the literature [7,10,46,60], environmental investment (EI) is measured by dividing environmental expenditure by total assets. This measure is more immediate and tangible for firms. Environmental investment includes the following seven categories: expenditure on environmental technology research and development and reconstruction, expenditure on environmental facilities and systems and reconstruction, expenditure on pollution control, expenditure on clean production, environmental taxes, expenditure on ecological protection, and other environmental investments.
(3) Moderating variables. Industry attribute (INDUS), company ownership (OWN), and region (AREA) serve as moderators in the models below.

(4) Control variables. Firm size (SIZE), debt ratio (LEV), R&D intensity (RD), sales growth rate (GROWTH), and capital intensity (CAPITAL) are used as control variables. Empirical evidence [10,38,42] shows that firm size (SIZE) has a positive impact on firm performance. Firms with high debt ratios are more likely to have high financial risks, which is likely to reduce profits [42,61]. Wang and Zhang [10] found that sales growth rate can positively affect financial performance of Chinese manufacturing listed companies. CAPITAL is used to control how a firm relies on capital investment. In general, capital-intensive firms tend to pay lower labor costs than labor-intensive counterparts, which contributes to better performance of capital-intensive firms. Nunes et al. [62] found that R&D intensity restricts the growth of high-tech SMEs at lower levels of R&D intensity and improves their growth at higher levels. Finally, a year dummy is introduced in the research models. Table 1 presents the definitions of the variables.
Models
Model (1) is used to examine the relationship between environmental investment and financial performance:

\[
ROA_{i,t} = \beta_0 + \beta_1 EI_{i,t} + \beta_2 EI^2_{i,t} + \beta_3 SIZE_{i,t} + \beta_4 LEV_{i,t} + \beta_5 RD_{i,t} + \beta_6 GROWTH_{i,t} + \beta_7 CAPITAL_{i,t} + YEAR_{i,t} + \varepsilon_{i,t}
\tag{1}
\]

Model (2), introducing the variable INDUS, is utilized to test the second hypothesis:

\[
ROA_{i,t} = \beta_0 + \beta_1 EI_{i,t} + \beta_2 EI^2_{i,t} + \beta_3 INDUS_{i,t} + \beta_4 INDUS_{i,t} \times EI^2_{i,t} + \beta_5 SIZE_{i,t} + \beta_6 LEV_{i,t} + \beta_7 RD_{i,t} + \beta_8 GROWTH_{i,t} + \beta_9 CAPITAL_{i,t} + YEAR_{i,t} + \varepsilon_{i,t}
\tag{2}
\]

If H2 is accepted, we expect that the coefficient on INDUS*EI^2 is significant.
Model (3) introduces the variable OWN and is utilized to test H3. If H3 is accepted, we expect that the coefficient on OWN*EI^2 is significant:

\[
ROA_{i,t} = \beta_0 + \beta_1 EI_{i,t} + \beta_2 EI^2_{i,t} + \beta_3 OWN_{i,t} + \beta_4 OWN_{i,t} \times EI^2_{i,t} + \beta_5 SIZE_{i,t} + \beta_6 LEV_{i,t} + \beta_7 RD_{i,t} + \beta_8 GROWTH_{i,t} + \beta_9 CAPITAL_{i,t} + YEAR_{i,t} + \varepsilon_{i,t}
\tag{3}
\]

Introducing the variable AREA in place of OWN, model (4) takes the same form and is utilized to test the fourth hypothesis. If H4 is accepted, we expect that the coefficient on AREA*EI^2 is statistically significant.
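Models of this form can be estimated along the following lines. The study itself used SPSS, so this Python sketch is an illustrative re-expression rather than the authors' code, and the input file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_year_panel.csv")  # hypothetical firm-year panel
df["EI2"] = df["EI"] ** 2

# Model (1): quadratic effect of EI on ROA with controls and year dummies
m1 = smf.ols("ROA ~ EI + EI2 + SIZE + LEV + RD + GROWTH + CAPITAL + C(YEAR)",
             data=df).fit()

# Model (3): adds OWN and the OWN x EI^2 interaction
m3 = smf.ols("ROA ~ EI + EI2 + OWN + OWN:EI2 + SIZE + LEV + RD + GROWTH "
             "+ CAPITAL + C(YEAR)", data=df).fit()

# Turning point of the U-shape: EI* = -beta1 / (2 * beta2)
b1, b2 = m1.params["EI"], m1.params["EI2"]
print(f"turning point EI* = {-b1 / (2 * b2):.6f}")
```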
Descriptive Statistics
Descriptive statistics are shown in Table 2. The mean value of ROA is 0.0369, with a maximum of 0.3989 and a minimum of -0.2495, indicating that there may exist great differences in firm performance in China. The mean value of EI is 0.0055, indicating that the environmental investment level is very low. This is consistent with the findings of Wang and Zhang [10] and Tang and Li [44]. Table 3 demonstrates the means of the variables under different industry attributes. We find that, on average, the performance of non-heavy-polluting companies is better than that of heavy-polluting companies. Heavy-polluting companies invest, on average, more in environmental protection than non-heavy-polluting companies. However, in terms of EI, the results show that there is no significant difference between heavy-polluting and non-heavy-polluting companies at the 5% significance level (t = 2.015). We also found that heavy-polluting companies, on average, have more debt and less capital intensity, and invest less in technology innovation than non-heavy-polluting companies. Table 4 shows the means of the variables under different company ownership. We find that POEs invest, on average, more in environmental protection than SOEs, and there exists a significant difference between SOEs and POEs. Table 5 shows the means of the variables under different regions. We find that companies in central and western regions tend to make more investments in environmental protection than companies in eastern regions.
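Group-mean comparisons like those in Tables 3-5 can be checked with an independent-samples t-test, as sketched below; the data file and the coding of the INDUS dummy are hypothetical assumptions.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("firm_year_panel.csv")        # hypothetical firm-year panel
heavy = df.loc[df["INDUS"] == 1, "EI"]         # assumes 1 = heavy-polluting
non_heavy = df.loc[df["INDUS"] == 0, "EI"]
t, p = stats.ttest_ind(heavy, non_heavy, equal_var=False)  # Welch's t-test
print(f"t = {t:.3f}, p = {p:.4g}")
```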
Correlation Analysis

Table 6 represents Pearson's correlation coefficient analysis. We also compute the variance inflation factors (VIFs) and find the values of the VIFs to be less than 7, which indicates that multi-collinearity is not a major issue in our study.

Regression Results

Table 7 presents the regression results of models (1) and (2). In model (1), the coefficient on EI is negative (β = -1.203, t = -2.402) and the coefficient on EI^2 is positive (β = 45.222, t = 4.852), which suggests a U-shaped relationship between environmental investment and financial performance. Therefore, H1 is accepted. To determine the minimum point of the quadratic relationship between environmental investment and financial performance, we consider the regression estimated in column 3 of Table 7. Taking the first derivative of the function presented in column 3 of Table 7, referring to model (1), and setting it to zero gives (∂ROA_{i,t}/∂EI_{i,t}) = -1.203 + 90.444EI_{i,t} = 0, so the minimum point of environmental investment is 0.013301. Therefore, environmental investment begins to stimulate financial performance only beyond 0.013301. In model (2), although the coefficient on INDUS*EI^2 is negative (β = -17.464, t = -0.930), it is not statistically significant at the 5% level. Therefore, H2 is not accepted. Table 8 presents the regression results of models (3) and (4). In model (3), the coefficient on OWN*EI^2 is significant and positive (β = 32.836, t = 3.740), which supports H3. That is, company ownership has a positive moderating effect on the relationship between environmental investment and financial performance. In model (4), the coefficient on AREA*EI^2 is positive and significant at the 5% level. Therefore, H4 is fully supported.
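The turning point follows directly from the quadratic form of model (1); as a worked check of the figure reported above:

\[
\frac{\partial\, ROA}{\partial\, EI} = \beta_1 + 2\beta_2\, EI = -1.203 + 90.444\, EI = 0
\quad\Rightarrow\quad
EI^{*} = \frac{1.203}{90.444} \approx 0.0133,
\]

i.e., about 1.33% of total assets, matching the 1.3301% threshold discussed below.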
In addition, we also find that (1) firm size, R&D intensity, and sales growth rate have a significant and positive impact on financial performance, (2) debt ratio negatively affects companies' financial performance, and (3) capital intensity has no impact on financial performance.
Robustness Check
We conducted a robustness check on models (1)-(4) by using return on investment, measured by dividing operating profit by average total assets, as the dependent variable. The results are similar to our previous findings, which suggests that our conclusion is robust.
Discussion
Our empirical findings allow us to characterize the relationship between environmental investment and financial performance in our sample.
In contrast to previous studies [7,41], we find a U-shaped quadratic relationship between environmental investment and financial performance in China, a developing country. From this, we conclude that up to a certain level, environmental investment is a restrictive factor, and that it becomes a positive factor of financial performance beyond that level, which is consistent with Liu and Duan [49]. Formally, financial performance starts to improve when the share of environmental investment exceeds 1.3301% of total assets. Noteworthy is that only around 11% of the Chinese listed companies in our sample dedicate 1.3301% or more of their assets to environmental investment. Within a limited range, the increase in sales volume due to environmental investment and the decrease in operating costs brought by technological innovation do not cover environmental expenditures. However, when the economic benefits from environmental protection cover these costs, companies will take the initiative to protect the environment.
Conversely, Pekovic et al. [7] confirmed that a firm's performance starts to decline when the share of environmental investment exceeds 16.5% of total sales. More specifically, there exists an optimal level of environmental investment. Despite the benefits of environmental investment, excessive investment requires a large financial outlay with some risks, which may reduce a firm's profitability. Fujii et al. [41] concluded that a positive relationship is caused by using cleaner production technology to reduce emissions of toxic chemical substances, while the negative relationship after the turning point is due to excess investment in pollution abatement. Taking listed companies in the coal mining industry as a sample, Fan and Wang [63] found that environmental protection investment can obviously promote financial performance due to corporate social responsibility. Most Chinese companies usually adopt the end-of-pipe treatment of environmental pollutants with high abatement costs, which in turn reduces their economic performance [10]. In addition, under strict environmental regulations, Chinese companies need to pay a pollution abatement cost that leads to a decrease in profits. In particular, environmentally proactive companies may achieve benefits from environmental investment, such as high resource use efficiency, that outweigh their costs. Hence, identifying an optimal level of environmental investment can be very useful for the construction and implementation of finely tuned environmental regulations.
We found that H2 is not accepted. Under the strict environmental regulation of recent years, Chinese listed companies have begun to realize the importance of environmental protection and make continuous investments in ecological protection regardless of industry. However, the findings of Zhang [64] also showed that environmental investment can bring value to Chinese listed companies in heavy-polluting industries.
In terms of company ownership, H3 is fully supported. That is, the impact of environmental investment on financial performance in SOEs is higher than that in POEs. SOEs have strong ties with the local government, and environmental investment by SOEs aims to respond to government environmental policies, which will ensure the effective implementation of these policies.
Finally, we found that a company's environmental investment in China's eastern regions can do more to promote financial performance than in central and western regions. In the case of eastern regions, with the rapid development of the economy, companies have gained benefits from environmental investment and are more willing to invest in environmental protection. However, the primary goal of central and western regions is to achieve economic development. In order to ensure the maximization of profits, companies are reluctant to spend much money on environmental protection.
Conclusions
After more than 50 years of theoretical and empirical research, the results on the impact of environmental investment on firm performance remain inconclusive. Some researchers suggest that environmental investment harms firms, while others claim that it may contribute positively. Therefore, this study examines the environmental-financial performance nexus based on a sample of Chinese listed companies over 5 years (2012 to 2016). The main conclusions of this study are as follows: (1) The relationship between environmental investment and financial performance can be explained by an almost U-shaped curve. That is, limited environmental investment is detrimental to financial performance, and only beyond a certain point does investing more in greenness lead to better financial performance.
(2) The impact of environmental investment on financial performance in SOEs is higher than that in POEs.
(3) Corporate environmental investment in eastern regions can do more to promote financial performance than in central and western regions.
To achieve the sustainable development of Chinese listed companies, we offer the following recommendations: (1) Corporate managers should improve awareness of environmental protection, clarify social and environmental responsibility, and establish an internal control system for environmental protection, especially in China's central and western regions.
(2) Chinese listed companies should clearly identify the optimal level of environmental investment associated with the benefits from going green and the incurred costs. Meanwhile, they, especially POEs, should increase investment in environmental protection and use cleaner production technology.
(3) Chinese listed companies should build sustainable environmental strategies according to company ownership and their geography for the low-carbon transformation. Meanwhile, the government needs to provide listed companies with financial support to invest in environmental protection.
(4) Chinese listed companies should also voluntarily disclose more environmental information. In addition, the government should carefully check this information and set strict penalties for incorrect information reports.
This study has some limitations. First, we did not examine the effect of environmental investment on financial performance by sector due to sample size. Second, firms' financial performance should be measured in a more systematic way. Therefore, further research on the subject appears warranted. | 2020-02-13T09:11:06.083Z | 2020-03-31T00:00:00.000 | {
"year": 2020,
"sha1": "71135b3e8f5ee29bbd0f38dd5668657b9b9c8f21",
"oa_license": null,
"oa_url": "http://www.pjoes.com/pdf-111230-48415?filename=Impact%20of%20Environmental.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fe6d448269545b4faf66deaf41e4e37ed4251e31",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
215790753 | pes2o/s2orc | v3-fos-license | Testing the ‘caves as islands’ model in two cave-obligate invertebrates with a genomic approach
Caves offer selective pressures that are distinct from the surface. Organisms that have evolved to exist under these pressures typically exhibit a suite of convergent characteristics, including a loss or reduction of eyes and pigmentation. As a result, cave-obligate taxa, termed troglobionts, are no longer viable on the surface. This circumstance has led to a “caves as islands” model of troglobiont evolution that predicts extreme genetic divergence between cave populations even across relatively small areas. An effective test of this model would involve (1) common troglobionts from (2) nearby caves in a cave-dense region, (3) good sample sizes per cave, (4) multiple taxa, and (5) genome-wide characterization. With these criteria in mind, we used RAD-seq to genotype an average of ten individuals of the troglobiotic spider Nesticus barri and the troglobiotic beetle Ptomaphagus hatchi, each from four closely located caves (ranging from 3-13 km apart) in the cave-rich southern Cumberland Plateau of Tennessee, USA. Consistent with the caves as islands model, we find that populations from separate caves are indeed highly genetically isolated. In addition, nucleotide diversity was correlated to cave length, suggesting that cave size is a dominant force shaping troglobiont population size and genetic diversity. Our results support the idea of caves as natural laboratories for the study of parallel evolutionary processes.
Introduction
Caves are unique habitats with environmental conditions fundamentally distinct from the surface. The most conspicuous of these is the complete absence of light, precluding the use of visual cues for hunting, foraging, locating mates, and evading predators (Rétaux and Casane 2013). Moreover, as photosynthesis is not possible, cave communities depend nearly entirely on trophic input from the surface. Caves are also typically more stable in temperature and humidity than surface habitats (Culver and Pipan 2019). As a result, caves offer opportunities for extensive evolutionary change (Poulson and White 1969).
Some organisms have evolved under these conditions to the extent that they are never found outside of caves. These organisms, termed troglobionts (Culver and Pipan 2019), often bear a suite of distinctive characteristics, including loss or reduction of eyes, pigment loss, elongated appendages, improved non-visual sensory mechanisms, reduced metabolic rates, longer lifespans, and lower rates of reproduction (Poulson and White 1969;Peck 1986;Culver and Pipan 2019). Many of these phenotypes have obvious tradeoffs for fitness on the surface. For instance, pigment loss, selectively neutral or possibly even advantageous in the cave environment (Polo-Cavia and Gomez-Mestre 2017), will decrease crypsis on the surface, a trait known to undergo particularly strong purifying selection (Cook and Saccheri 2013). Intolerance to variation in temperature and humidity may also preclude surface viability (Culver and Pipan 2019). Hence, the surface is a hostile environment to troglobionts. With this idea in mind, Culver and Pipan (2019) pointed out that caves are like islands in a sea of surface habitat.
Previous studies have shown that troglobiont migration is indeed highly limited. For instance, at the species level, endemism is less the exception than the rule (Culver et al. 2000; Niemiller and Zigler 2013). This is especially true in the eastern United States, where up to 45% of troglobionts are single-cave endemics (Christman et al. 2005). These exceptional rates of endemism are consistent with restricted gene flow and frequent speciation. Population genetic studies lend further support. Examining COI in the troglobiotic spider Nesticus barri, Snowman et al. (2010) found extensive haplotype divergence and limited sharing of haplotypes between caves, indicating that migration was minimal to nonexistent over distances greater than 15 km. Another study, examining COI in several troglobionts, including N. barri and the beetle Ptomaphagus hatchi, provided similar findings (Dixon and Zigler 2011), indicating that restricted migration is likely general to terrestrial troglobionts. In contrast, some aquatic troglobionts show high connectivity between caves, and unexpectedly large population sizes (Buhay and Crandall 2005; Buhay et al. 2007). The contrast between aquatic and terrestrial troglobionts may reflect broader aquatic subterranean connectivity than terrestrial (Porter 2007). Hence, for small terrestrial troglobionts, caves indeed often resemble islands.
Here, we sought to critically evaluate the caves as islands hypothesis for two cave-obligate invertebrates. To emphasize the severity of isolation imposed by troglobiont ecology and life history, we sampled individuals from caves located closely together in the cave-dense region of the southern Cumberland Plateau, one of the most biodiverse karst areas in the United States (Culver et al. 2000; Christman and Culver 2001; Niemiller and Zigler 2013) (Fig. 1; Table 1). To ensure the generality of the hypothesis, we focused on two species with distinct natural histories: the spider Nesticus barri and the beetle Ptomaphagus hatchi. Nesticus barri is part of a complex of 28 species found across the southeastern United States (Hedin 1997). Their tendency to live in dark, moist habitats has led to numerous instances of cave habitation, with roughly one third of species in the group either troglophiles (frequent cave dwellers that are also found on the surface) or troglobionts (Hedin and Dellinger 2005).
Nesticus barri demonstrates typical troglomorphic features, lacking eyes and with reduced pigment, although it still possesses reproductive seasonality (Carver et al. 2016). The genus Ptomaphagus includes about 60 species in North America, again with roughly one third either troglophiles or troglobionts (Peck 1986). Diversification of the genus throughout the southern Cumberland Plateau is thought to have occurred through progressive vicariance, as the Cumberland Plateau eroded over the last six million years (Leray et al. 2019). Like other troglobiotic Ptomaphagus, P. hatchi has greatly reduced eyes and is wingless. The southern Cumberland Plateau is one of the most cave-rich regions in North America, with more than 4000 caves known from a six-county area in southern Tennessee and northeast Alabama (Zigler et al. 2014). On the southern Cumberland Plateau, N. barri and P. hatchi have largely overlapping ranges, with each species known from dozens of caves (Snowman et al. 2010; Leray et al. 2019), and both species are common in the caves they inhabit. Finally, where previous studies made use of one, or at most a handful of loci, we take a genome-wide approach using 2bRAD (Wang et al. 2012). This method has the advantage of interrogating thousands of loci across the genome, allowing for more confident estimates of population divergence and neutral diversity (Rokas and Abbot 2009; Nunziata and Weisrock 2018). This is the first study to investigate these species at the genomic scale.
Sampling
We collected specimens from four caves on the edge of the southern Cumberland Plateau in Franklin County, Tennessee (Table 1). The caves were chosen based on proximity, location, and previous knowledge of the presence of Nesticus barri and Ptomaphagus hatchi (Dixon and Zigler, 2011; Wakefield and Zigler, 2012). Distances between the caves ranged from 3-13 km (Fig. 1). The caves are distributed across two adjacent watersheds, with Solomon's Temple (ST) and Sewanee Blowhole (SB) in the Upper Elk River watershed that drains to the north and west of the study area, whereas Grapevine Cave (GV) and Buggytop Cave (BT) are in the Guntersville Lake watershed that drains the study area to the south (Fig. 1; Table 1).
Sampling was conducted between 21 September and 25 October 2018. We collected Ptomaphagus and Nesticus by hand during visual encounter surveys of the caves. An initial survey of Buggytop Cave yielded only a few Ptomaphagus, so food baits (tuna) were placed in the cave for 24 hours and live specimens were subsequently collected at the baits. Sample size per cave ranged from 6-16 individuals per species (Table 1). Specimens were placed into 100% EtOH in the field and subsequently stored at -20°C. Sampling was permitted by the Tennessee Wildlife Resources Agency (Permit #1385) and the Tennessee Department of Environment and Conservation (Permit #2013-026).
Library preparation
Most of the DNA extractions were performed using the entire body of the individual. If a particular sample seemed large enough (mainly applied to Nesticus barri) the legs were saved while the cephalothorax and abdomen were used. QIAGEN's DNeasy Blood & Tissue Kit (cat. no. 69504 or 69506) was used following the kit's protocol with the exception of using 50 µl Buffer AE for elution.
Concentrations of each DNA isolation were initially checked by NanoDrop and confirmed with the Quant-iT PicoGreen dsDNA assay (Life Technologies cat. no. P7589). The 2b-RAD library preparation was carried out as described previously (Wang et al. 2012; Dixon et al. 2015; Matz et al. 2018; Matz 2019).
Briefly, DNA isolations were normalized to ~12.5 ng/µl. Samples with concentrations lower than 12.5 ng/µl DNA (~30 samples) were fully dehydrated in a vacuum centrifuge and resuspended to a target concentration of ~12.5 ng/µl. Digestion reactions had concentrations of 1x NEB buffer #3 and 10 µM SAM mixed with 1 total U of BcgI restriction enzyme and 50 ng genomic DNA in a total volume of 6 µl.
Digests were incubated at 37°C for 1 hour followed by 20 minutes at 65°C for heat inactivation. Ligation reactions had concentrations of 1x T4 ligase buffer and 0.25 µM each adapter with 400 total U of T4 DNA ligase and 6 µl of digested DNA in a total volume of 20 µl. Ligation reactions were incubated at 4°C overnight, followed by 20 minutes at 65°C for heat inactivation. Inclusion of internal barcodes in the i7 adapter allowed for pooling sets of samples at this point. Amplification and additional barcoding reactions were performed on these pools. These reactions had concentrations of 312 µM each dNTP, 0.2 µM each of p5 and p7, 0.15 µM appropriate TruSeq-Un primer, 0.15 µM appropriate barcoding primer, 1x Titanium Taq buffer, and 1x Taq polymerase mixed with 4 µl of pooled ligation in a total volume of 20 µl.
Thermocycler conditions were 70°C for 30 seconds, followed by 14 cycles of 95°C for 20 seconds, 65°C for 3 minutes, and 72°C for 30 seconds. The final library was pooled into a single tube and sequenced at the University of Texas at Austin's Genome Sequencing and Analysis Facility on a single lane of the Illumina HiSeq 2500 platform.
Data processing
Raw reads were trimmed and demultiplexed based on internal ligation barcodes using custom perl scripts (Matz 2019). Reads were quality filtered using fastq_quality_filter from the Fastx Toolkit (http://hannonlab.cshl.edu/fastx_toolkit/). Generation of de novo loci was performed using cd-hit (Li and Godzik 2006) and custom perl scripts as described previously (Wang et al. 2012; Dixon et al. 2015; Matz et al. 2018; Matz 2019). These tags were assembled into a reference genome with 30 equally sized pseudo-chromosomes for mapping. Re-mapping of reads to these de novo loci was done with bowtie2 (Langmead and Salzberg 2012). Sorting and indexing of bam files in preparation for genotyping was done with samtools (Li et al. 2009).
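As an illustration of the mapping steps (bowtie2 followed by samtools sort/index), the sketch below wraps the command-line tools in Python. The sample IDs, file names, and reference prefix are hypothetical, and the upstream trimming/filtering is assumed to have already produced the per-sample FASTQ files.

```python
import subprocess

def map_sample(sample: str, ref_prefix: str = "denovo_ref") -> None:
    """Map one sample's filtered reads and produce a sorted, indexed BAM."""
    sam = f"{sample}.sam"
    bam = f"{sample}.sorted.bam"
    subprocess.run(["bowtie2", "-x", ref_prefix,
                    "-U", f"{sample}.trim.fq", "-S", sam], check=True)
    subprocess.run(["samtools", "sort", "-o", bam, sam], check=True)
    subprocess.run(["samtools", "index", bam], check=True)

for s in ["NB_BT_01", "NB_BT_02"]:  # hypothetical sample IDs
    map_sample(s)
```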
Genotype analyses
We analyzed the genotype data in two ways. The first method depended on hard genotype calls, in which the genotype of each individual at each site is either called exactly or filtered to missing data based on arbitrary cutoffs. While simpler to implement, these hard genotype calls can introduce biases, because they can fail to capture statistical uncertainty inherent to individual genotypes from next-generation sequencing (NGS) data (Nielsen et al. 2012). A second, alternative approach is to estimate sample allele frequency spectra directly from base calls and quality metrics in the alignment data, allowing for population genetic inferences without making individual genotype calls (Nielsen et al. 2012).
We processed hard genotype calls primarily using VCFtools (Danecek et al. 2011). Estimation of genotype likelihoods, allele frequency spectra, and additional population genetic inferences were implemented using Angsd (Korneliussen et al. 2014). Throughout the manuscript we emphasize the results produced using Angsd, using analyses from hard genotype calls primarily for corroboration. All steps used for both sets of analyses, along with scripts for statistical analysis and figure generation are included in the git repository (Dixon 2019).
Hard genotype calls were made as described previously (Dixon et al. 2019) using mpileup and bcftools (Li 2011). Genotype calls with a depth lower than 2, as well as indels, singletons, sites with more than two alleles, or sites with fewer than 75% of samples genotyped, were removed using VCFtools (Danecek et al. 2011). Sites with excess heterozygosity (p < 0.1; likely paralogs) were removed based on the --hardy output from VCFtools.
Summarizing genetic variation
To summarize genetic variation, we used Angsd to calculate pairwise differences between samples using the -IBS 1 option and a minimum minor allele frequency of 1% (-minMaf). Here the pairwise distance between samples i and j (d_ij) is calculated as:

\[
d_{ij} = \frac{1}{M} \sum_{m=1}^{M} \left(1 - I_{b_j^{(m)}}\!\left(b_i^{(m)}\right)\right)
\]

where M is the total number of sites with at least 1 read from each individual, and I_{b_j}(b_i) is the indicator function, which is equal to one when the two individuals have the same base and zero otherwise (Korneliussen 2013). This distance matrix was used for hierarchical clustering and multidimensional scaling. Admixture analysis was performed on genotype likelihoods output by Angsd using NGSadmix (Skotte et al. 2013).
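Downstream of Angsd, the clustering and MDS steps can be reproduced along these lines; the matrix file name is hypothetical, and Angsd's .ibsMat output is assumed to be a plain-text square distance matrix.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform
from sklearn.manifold import MDS

D = np.loadtxt("nbarri.ibsMat")                      # square pairwise-distance matrix
Z = linkage(squareform(D, checks=False), "average")  # hierarchical clustering (UPGMA)
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=1).fit_transform(D)        # first three MDS axes, as in Fig. 2
```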
Genetic differentiation between populations based on SNP calls
We estimated FST for each pair of populations using Angsd (Korneliussen et al. 2014). This method computes the posterior expectation of genetic variance between populations (designated A) and total expected variance (designated B). These values (A and B) are closely related to the alpha and beta estimates described in Reynolds et al. (1983). The unweighted FST is computed as the mean of the per-site ratios of A and B, and the weighted FST is computed as the ratio of the sum of As to the sum of Bs (Korneliussen 2013). The unweighted and weighted FST values reported in Table 2 and Fig. 4 are these Angsd-based estimates. Pairwise FST and dXY were also calculated for each pair of caves based on hard genotypes. We used VCFtools to calculate pairwise FST (Weir and Cockerham 1984), and a custom R script to calculate dXY for unphased data as:
= ∑
Where xi and yj are the frequencies of the ith allele from population X and the jth allele from population Y respectively, and kij is 1 when i and j differ, and 0 if they are the same (Hahn 2018). The Weir and Cockerham's FST and the dXY values reported in Table 2 and Fig. 4 are the averages of these statistics across all hard genotyped SNPs.
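The two FST summaries and the biallelic reduction of dXY can be sketched in a few lines; the per-site A, B values and allele frequencies below are invented:

```python
import numpy as np

def weighted_unweighted_fst(A, B):
    """Weighted and unweighted FST from per-site between-population
    variance A and total variance B (the Angsd convention)."""
    A, B = np.asarray(A), np.asarray(B)
    unweighted = np.mean(A / B)      # mean of per-site ratios
    weighted = A.sum() / B.sum()     # ratio of sums
    return weighted, unweighted

def dxy(x, y):
    """dXY from per-site frequencies of the same alternate allele at
    biallelic sites. With frequencies p and q, the double sum
    sum_{i,j} x_i * y_j * k_ij reduces to p(1-q) + q(1-p)."""
    x, y = np.asarray(x), np.asarray(y)
    return np.mean(x * (1 - y) + y * (1 - x))

A = [0.02, 0.05, 0.01]; B = [0.06, 0.09, 0.05]
print(weighted_unweighted_fst(A, B))
print(dxy([0.1, 0.5, 0.9], [0.2, 0.4, 0.7]))
```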
Nucleotide diversity
To calculate nucleotide diversity using genotype likelihoods, we first generated genotype likelihoods as described above. We then used the custom python script HetMajorityProb.py (Matz et al. 2018; Matz 2019) to remove sites where the heterozygosity rate appeared higher than 50%, as these were likely paralogs spuriously lumped as single loci. We then estimated nucleotide diversity from the folded site frequency spectra using Angsd (Nielsen et al. 2012; Korneliussen et al. 2013, 2014). The value was then averaged across the 30 pseudo-chromosomes from the reference created during de novo locus generation described above. These averages are the values reported in Figure 5. We also calculated nucleotide diversity (π) from hard genotype calls. Here we first determined the allele frequencies for each species in each cave using VCFtools. We then calculated the expected heterozygosity (h) for each site as:

h = (n/(n-1)) (1 - \sum_{i} p_i^2)

where n is the number of sequences and p_i is the frequency of the ith allele at the site. We then calculated π as the sum of the expected heterozygosities across sites:

\pi = \sum_{j=1}^{S} h_j

where S is the number of segregating sites and h_j is the expected heterozygosity of site j (Hahn 2018).
We report this value per site by dividing by the total number of interrogated positions. Effective population sizes (Ne) were estimated from the per-site nucleotide diversity under the neutral expectation π = 4Neμ.
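A minimal numeric sketch of the hard-call π calculation and the derived Ne, using invented site data and the Drosophila mutation rate cited in the Results:

```python
def pi_from_genotypes(sites):
    """Nucleotide diversity from hard genotype calls.

    sites: list of (n_sequences, allele_frequency) per biallelic
    segregating site. Returns the sum of per-site expected
    heterozygosities; divide by the number of interrogated positions
    to report pi per site."""
    pi = 0.0
    for n, p in sites:
        h = (n / (n - 1)) * (1 - (p**2 + (1 - p)**2))
        pi += h
    return pi

# Toy example: 3 segregating sites among 20 sequences, 1e4 positions scanned
sites = [(20, 0.10), (20, 0.35), (20, 0.50)]
pi_per_site = pi_from_genotypes(sites) / 1e4
mu = 2.8e-9                      # Drosophila mutation rate used in the study
Ne = pi_per_site / (4 * mu)      # neutral expectation: pi = 4 * Ne * mu
print(pi_per_site, Ne)
```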
Delineating populations
For both species, the four caves harbored genetically distinct populations. Hierarchical clustering based on pairwise differences separated individuals by cave (Fig. 2a,d). Overall topology of clustering was the same for both species, with caves from the same watershed clustering together ( Fig. 1; Fig. 2a,d).
Multidimensional scaling produced similar results, with clear clustering by cave along the first three axes for N. barri (Fig. 2b,c) and the first two for P. hatchi (Fig. 2e,f). Both analyses indicated that SB and ST caves were more similar for N. barri than P. hatchi.
Admixture analysis further supported the caves as independent populations. For both species, when the number of ancestral populations (k) was set to four, ancestry estimates matched the source cave exactly (Fig. 3). For both species, division of ancestry among five ancestral populations led to a split of Buggytop (the largest cave) into two roughly equally sized groups. Together, these analyses indicate that the four caves are indeed distinct populations, harboring genetically distinct individuals of both species.
Genetic differentiation between caves
Genetic differentiation between caves was high. Pairwise weighted FST (Angsd) ranged from 0.33 to 0.52 for N. barri, and from 0.30 to 0.36 for P. hatchi. Unweighted FST (Angsd) and Weir and Cockerham's FST estimated from hard genotype calls were lower, but still considerable, with a minimum value of 0.12 (Table 2; Fig. 4). Based on the hierarchical clustering and admixture analyses (Fig. 2; Fig. 3), we expected to find greater genetic differentiation between caves located in different watersheds. This pattern held for N. barri, but not for P. hatchi (Fig. 4). Although all the differentiation estimates were tightly correlated (Fig. S1), only absolute genetic distance (dXY from hard genotype calls) was consistently lower within watersheds for P. hatchi.
Nucleotide diversity
In both species, estimates of nucleotide diversity based on genotype likelihoods indicated surprising levels of diversity that varied with cave length. For individual caves, per-site nucleotide diversity (π) ranged from 1.17e-3 to 2.43e-3. For all but the longest cave (Buggytop Cave; BT), the beetle P. hatchi had higher nucleotide diversity than the spider N. barri (Table 3). For both species, nucleotide diversity correlated positively with cave length (Fig. 5). Assuming a mutation rate of 2.8e-9 estimated for Drosophila (Keightley et al. 2014), the effective population sizes based on the π estimates for P. hatchi ranged from 1.4e5 in Solomon's Temple to 2.2e5 in Buggytop, with a range of 1.0e5 to 2.3e5 for N. barri (Table 3). Nucleotide diversity estimates based on hard genotype calls were proportionally similar, but on average 8-fold lower than those from genotype likelihoods (Table 3). This likely resulted from greater stringency during filtering of variant sites from the hard genotype calls. Estimates of π from hard genotype calls were similarly positively associated with cave length (Fig. S2).

Figure 4: Pairwise estimates of genetic differentiation between caves. The first cave in each pair is indicated on the X axis. The second cave is indicated by the bar color. Whether the two caves are located in the same watershed is indicated by the bar outline color. The statistics are: weighted = weighted FST computed using Angsd; Weir = Weir and Cockerham's FST averaged across all variant sites from hard genotype calls; dXY = absolute genetic distance averaged across all variant sites from hard genotype calls.
Discussion
We used genome-wide genotyping to examine population structure of two troglobionts from the southern Cumberland Plateau in Tennessee. Despite relatively small distances between caves (no two caves were more than 15 km apart), we detected strong population structure for both species.
Hierarchical clustering, multidimensional scaling and admixture analysis clearly identified each cave as a genetically distinct population (Fig. 2; Fig. 3). Pairwise estimates of genetic differentiation further supported these results, with a minimum weighted FST of 0.33 for N. barri and 0.30 for P. hatchi (Table 2), indicating "very great differentiation" (Wright 1978). For comparison, a recent 2bRAD study on the coral Acropora millepora across the Great Barrier Reef, including sites located over 1200 km apart, detected a maximum pairwise FST of 0.014 (Matz et al. 2018). Hence, populations from separate caves are remarkably isolated. These findings corroborate, at the genomic scale, previous population genetic studies using single-gene approaches (Snowman et al. 2010; Dixon and Zigler 2011).
Under a neutral model, nucleotide diversity is expected to be linearly proportional to the effective population size. For both species, we detected a positive association between nucleotide diversity and cave length ( Fig. 5; Fig. S2). While the number of caves in our study was small, this result is consistent with the intuitive idea that larger caves harbor larger, more genetically diverse populations.
Indeed, based on the strength of the correlations observed, cave size appears to be a dominant force shaping troglobiont genetic diversity in our study area. Future studies spanning larger numbers of caves will shed light on how generally this pattern occurs.
Based on π estimates, the effective population sizes of both species were surprisingly large (Table 3). Although such absolute estimates depend on filtering stringency and the assumed mutation rate, relative comparisons among caves should be more reliable. This idea is illustrated by the concurrent associations between cave length and π estimated using Angsd and from hard genotypes, despite the roughly 8-fold difference between them in absolute terms (Table 3; Figure 5; Figure S2).
Based on our results, we conclude that gene flow between caves is rare. This is consistent with the inability of troglobionts to traverse even small distances between the caves (Fig. 1). Hence the analogy of caves to islands in a sea of surface habitat holds for these species (Culver and Pipan 2019;Snowman et al. 2010). It is thought that migration of troglobionts must occur via subterranean connections (Culver and Pipan 2019;Trontelj et al. 2019). Consistent with this theory, hierarchical clustering of populations for both species paired caves by watershed, rather than physical distance ( Fig. 1; Fig. 2). This possibly reflects greater frequency of rare subterranean connections, or more recent vicariance between caves located in the same watershed. Estimates of genetic differentiation were consistent with this theory for N. barri, but not for P. hatchi.
Caves provide a unique opportunity for insight into evolutionary processes. For instance, troglomorphic traits have been divided into 'constructive' traits, such as appendage and antennae elongation, and 'regressive' traits, such as eye and pigment loss, which are hypothesized to vary in both their rates and mechanisms of evolution (Culver et al. 1995). Regressive traits are considered capable of relatively rapid evolution via drift and mutation accumulation, whereas constructive traits are hypothesized to evolve more slowly by positive selection (Culver et al. 1995; Rétaux and Casane 2013). A similar distinction is evident in a passage from Darwin, attributing eye loss to disuse rather than selection because it was "difficult to imagine that the eyes, though useless, could be in any way injurious to animals living in darkness" (Darwin 1959). However, energetic costs have since been cited as a potential selective pressure driving eye loss (Moran et al. 2015), and there is evidence from troglobiotic fish that eye loss involves epigenetic changes, rather than mere failures to develop due to loss-of-function mutations (Gore et al. 2018). These studies, along with the findings reported here, strengthen the idea that caves are ideal natural laboratories for evolutionary insight (Poulson and White 1969). The highly restricted migration rates we observe indicate an opportunity to examine ongoing, nearly independent evolution under highly unusual conditions. Hence, through further application of genomic tools, these natural laboratories have great potential to inform evolutionary understanding.
"year": 2020,
"sha1": "0a2f5eceac901d41fd935333bc0b244889d26782",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2020/04/09/2020.04.08.032789.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "5322078ddafd0e0991f0b15685317cc6e6ba6395",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Geography"
]
} |
Effects of the Nanofillers on Physical Properties of Acrylonitrile-Butadiene-Styrene Nanocomposites: Comparison of Graphene Nanoplatelets and Multiwall Carbon Nanotubes
The effects of carbonaceous nanoparticles, namely graphene nanoplatelets (GNP) and multiwall carbon nanotubes (CNT), on the mechanical and electrical properties of acrylonitrile-butadiene-styrene (ABS) nanocomposites have been investigated. Samples with various filler loadings were produced by a solvent-free process. ABS/GNP composites showed higher stiffness, better creep stability and processability, but slightly lower tensile strength and electrical conductivity, when compared with ABS/CNT nanocomposites. The tensile modulus, tensile strength and creep stability of the nanocomposite with 6 wt % of GNP were increased by 47%, 1% and 42%, respectively, while the analogous ABS/CNT nanocomposite showed respective values of 23%, 12% and 20%. The electrical percolation threshold was reached at 7.3 wt % for GNP and at 0.9 wt % for CNT. The peculiar behaviour of the conductive CNT nanocomposites was also evidenced by the observation of Joule's effect after application of voltages of 12 and 24 V. Moreover, comparative parameters encompassing stiffness, melt flow and resistivity are proposed for a comprehensive evaluation of the effects of the fillers.
Introduction
During recent decades, polymer nanocomposites with carbon-based nanofillers have been extensively investigated for the fabrication of multifunctional materials with tailored properties, including high mechanical, thermal and electrical performance. Among these nanofillers, different forms of graphene and carbon nanotubes have been commonly utilized due to their extraordinary intrinsic properties [1-11]. The nomenclature of two-dimensional carbon materials is still an object of discussion and confusion, for instance between graphene nanoplatelets and graphite nanoplates; some interesting recommendations on this point are reported by Bianco et al. [12]. In the present study, the authors decided to use the term graphene or graphene nanoplatelets (GNP), in conformity with the commercial name of the product. Various scientific papers on graphene (GNP) or carbon nanotube (CNT) nanocomposites have reported comparative studies of these two nanofillers, detailing their effects in various matrices: epoxy [13-17], polyamide [18] and poly(styrene-b-ethylene-ran-butylene-b-styrene) (SEBS) [19]. GNP and CNT incorporated in an epoxy matrix showed marked improvements in mechanical properties, thermal conductivity and dielectric constant of the nanocomposites, for example at the highest concentrations of 1 wt % [14,15], 3 wt % [13] and 4 wt % [17] of both fillers. The mechanical and electrical properties of PA/GNP and PA/CNT nanocomposites with 1 wt % nanofiller were compared [18], as were nanocomposites of an SEBS matrix filled with GNP alone, CNT alone, or a GNP/CNT mixture [19].
Materials Processing and Sample Preparations
Composites were prepared by melt compounding of ABS with 2-8 wt % GNP or CNT in a co-rotating Thermo-Haake Polylab Rheomix internal mixer (Thermo Haake, Karlsruhe, Germany) at a temperature of 190 °C and a rotor speed of 90 rpm for 15 min, producing about 50 g for each composition. Square plates (160 × 160 × 1.2 mm) of compounded materials and neat ABS matrix were obtained after compression moulding at 190 °C by using a Carver Laboratory press (Carver, Inc., Wabash, IN, USA) for 10 min under a pressure of 3.9 MPa.
Transmission Electron Microscopy (TEM)
The morphology of graphene nanoplatelets and carbon nanotubes was observed by transmission electron microscopy (TEM), using a Philips ® EM 400 T (Philips, Amsterdam, The Netherlands) transmission electron microscope at an acceleration voltage of 120 kV. Nanoparticles were dispersed in acetone and the suspension (concentration = 0.5 mg/mL) was sonicated for 5 min. Afterwards, the nanoparticle suspensions were dropped on a 600-mesh copper grid for TEM observation.
Scanning Electron Microscopy (SEM)
Nanocomposites were fractured in liquid nitrogen and fracture surfaces were observed by using a Carl Zeiss AG Supra 40 field emission scanning electron microscope (FESEM) (Carl Zeiss AG, Oberkochen, Germany) at an acceleration voltage of 3 kV.
Melt Flow Index
Melt flow index (MFI) measurements were carried out in a temperature range of 220-280 °C with an applied load of 10 kg by using a capillary rheometer Kayeness Co. model 4003DE (Morgantown, PA, USA) following the ASTM D 1238 standard (procedure A). Samples of about 5 g were loaded in the cylinder and pre-heated for about 5 min before testing. The results are reported in Table 1 as average values of at least five measurements (standard deviation is reported).
Quasi-Static Tensile Test
Uniaxial tensile tests were performed at room temperature on ISO 527 type 1BA specimens by using an Instron ® 5969 universal testing machine (Norwood, MA, USA), equipped with a 50 kN load cell. Test specimens were die-cut from compression moulded plates (gauge length of 30 mm; width of 5 mm; thickness of 1.2 mm). Elastic modulus was determined at a crosshead speed of 1 mm/min with the secant method between strain levels of 0.05% and 0.25% according to ISO 527 standard and by using an electrical extensometer Instron ® Model 2620-601 (Norwood, MA, USA) with gauge length of 12.5 mm for strain monitoring. Yield stress (σ y ), stress at break (σ b ) and strain at break (ε b ), were evaluated at a crosshead speed of 10 mm/min without extensometer. Specific tensile energy to break (TEB) values under quasi-static conditions were computed integrating stress-strain curves. The results are reported in Table 2 as average values of at least five specimens (standard deviation is reported).
Creep Test
A creep test was performed with the aid of a TA Instruments DMA Q800 (TA Instruments-Waters LLC, New Castle, DE, USA) under a constant stress of 3.9 MPa (i.e., about 10% of the yield stress of neat ABS) at 30 °C for up to 3600 s. Rectangular samples with a length of 25 mm, width of 5 mm and thickness of 0.9 mm were machined from compression moulded plaques. The adopted gauge length of all samples was 11.8 mm.

Electrical Resistivity Measurements

For samples with electrical resistivity higher than 10^7 Ω·cm, the volume resistivity was measured according to ASTM D257 by using a Keithley 6517A electrometer/high-resistance meter (Beaverton, OR, USA) and an 8009 resistivity test fixture at room temperature. In this test, a constant voltage of 100 V was applied to a square specimen of 64 × 64 mm.
For moderately conductive materials (<10^7 Ω·cm), the volume electrical resistivity was determined following the ASTM D4496-04 standard for moderately conductive materials with a four-point contact configuration. Each specimen was tested at a voltage of 5 V by using a direct current (DC) power supply IPS303DD produced by ISO-TECH (Milan, Italy), and the current flow through the samples was measured between the external electrodes using an ISO-TECH IDM 67 Pocket Multimeter (ISO-TECH, Milan, Italy). Compression moulded samples with a length of 25 mm and a rectangular cross-section of 6 × 1.2 mm were tested. At least three specimens were replicated for each sample. The electrical volume resistivity of the samples was evaluated by Equation (1):

ρ = R·A/L (1)

where R is the electrical resistance, A is the cross-section of the specimen and L is the distance between the internal electrodes (i.e., 3.69 mm).
The heating of a sample generated by current flow is known as resistive heating and is described by Joule's law. The surface temperature evolution induced by Joule's effect upon different applied voltages was measured by a Flir E6 thermographic camera (FLIR Systems, Wilsonville, OR, USA). The voltages were applied by a DC power supply (IPS 303DD produced by ISO-TECH), while the samples were fixed with two metal clips with an external distance of 30 mm. In these tests, the specimen length was 50 mm with a rectangular cross-section of 6 × 1.2 mm. The surface temperature values were recorded for 120 s of application of the voltage levels of 12 V and 24 V. These voltages were selected because they are common voltages for batteries used in automotive, solar storage, electrical bike and other domestic applications. Due to the high conductivity required, Joule's effect is reported only for the ABS/CNT nanocomposites.
Morphology
The morphologies of graphene nanoplatelets (GNP) and carbon nanotubes (CNT) were characterized by TEM microscopy (Figure 1). Figure 1a shows the typical thin-sheet structure of GNP. From the TEM micrographs, the average diameter of the GNP platelets was found to be about 5.5 to 6.8 µm. In addition, some GNP nanoplatelets were superimposed on top of each other and wrinkled into irregular shapes. Figure 1b displays the morphological structure of the CNT and clearly documents that the investigated CNT have an outer tube diameter of about 15-20 nm with a wall thickness of about 4-6 nm.
The SEM images of the fracture surfaces of the ABS/graphene and ABS/CNT samples are presented in Figure 2a-d and Figure 2e-f, respectively. A relatively poor adhesion level between graphene and ABS is documented in Figure 2b. Figure 2c shows the GNP-30 sample, in which the graphene flakes appear quite evenly distributed within the matrix even at the highest concentration of 30 wt %; a relatively good dispersion was also observed for the graphene nanoplatelets. As can be seen in Figure 2e-f, the carbon nanotubes are clearly visible in the SEM micrographs, with uniform distribution and excellent dispersion.
Melt Flow Index
The processability of the nanocomposite materials was investigated by comparing their melt flow index values. Figure 3 summarizes the effect of nanofiller amount, nanofiller type and temperature on the MFI of the nanocomposites. As expected, the MFI values decreased with the nanofiller fraction. The MFI values of the various compositions at 220, 250 and 280 °C are reported in Table 1. It is worth noting that CNT accounts for a much greater reduction in MFI than GNP, so that temperatures of 220-250 °C do not provide satisfactory conditions for material processing; 280 °C was therefore found to be an adequate temperature for processing ABS/CNT composites with 6-8% of CNT.
Moreover, following the MFI results at different temperatures, the activation energy (E_act) for polymer chain mobility in both ABS and the nanocomposites could be evaluated from the slope of the best-fitting straight lines (Figure 4) by using an Arrhenius-type equation [29,30]:

MFI(T) = C_0 exp(−E_act/RT)

where C_0 is a pre-exponential factor, T is the temperature selected for the MFI test and R is the universal gas constant (8.314 J/mol·K). The intercept C_0 formally represents the MFI at infinite temperature.

As reported in Table 1, the activation energy of neat ABS is about 87 kJ/mol. As expected, the higher the filler content, the lower the polymer chain mobility and, consequently, the higher the activation energy of flow. In particular, it should be underlined that the activation energy of ABS/CNT nanocomposites is higher than that of the corresponding graphene nanocomposites; this higher activation energy could indicate a stronger interaction between CNT and ABS in the molten state. Consequently, CNT nanocomposites require more energy for processing, as documented by their flow activation energies: E_act at 6 wt % of nanofiller is about 94 kJ/mol for GNP and 117 kJ/mol for CNT. The difference is even larger in the composites at 8 wt % (see Table 1).
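As a sketch of this fit, assuming illustrative MFI values rather than the measured data of Table 1, E_act follows from a linear regression of ln(MFI) against 1/T:

```python
import numpy as np

# Illustrative MFI data (g/10 min) at three temperatures (K); the numbers
# are invented, not the measured values from Table 1.
T = np.array([493.15, 523.15, 553.15])          # 220, 250, 280 degC
mfi = np.array([3.1, 11.5, 34.0])

# ln(MFI) = ln(C0) - Eact/(R*T): a straight line in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(mfi), 1)
R = 8.314                                        # J/(mol*K)
E_act = -slope * R / 1000.0                      # kJ/mol
C0 = np.exp(intercept)                           # MFI at infinite temperature
print(f"E_act = {E_act:.1f} kJ/mol, C0 = {C0:.3g} g/10 min")
```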
Quasi-Static Tensile Test
Tensile testing was carried out to investigate the reinforcing effect of graphene and CNT in the ABS nanocomposites. Stress-strain curves are reported in the Supplementary Materials (Figure S1), whereas the tensile properties of the ABS/GNP and ABS/CNT nanocomposites are summarised in Table 2. As expected, both ABS/GNP and ABS/CNT show an enhancement of tensile properties compared to neat ABS. As shown in Table 2, the elastic modulus of the GNP-based nanocomposites is higher than that of the CNT-based composites. For instance, the elastic modulus of the composite containing 8 wt % of GNP increased from 2315 MPa to 3523 MPa (i.e., by 52%), whereas in the case of 8 wt % of CNT the corresponding value is only 3068 MPa (i.e., 32%). The elastic modulus of the composites is affected by the nanofiller stiffness, but also by the shape and orientation of the particles and their dispersion level. A yielding phenomenon was observed only for nanocomposites containing less than 4 wt % of nanofiller.
On the other hand, the various factors affecting the tensile strength of ABS nanocomposites include the filler/matrix interfacial adhesion, the amount of filler, its properties and geometry, and its dispersion level in the matrix. Table 2 shows that the strength of the ABS/CNT composites is higher than that of the ABS/GNP nanocomposites, which can be attributed to the better dispersion and interaction of CNT with the ABS matrix with respect to GNP, as shown in Figure 2c-f. Moreover, this result could be associated with the two-dimensional (2D) shape of GNP, which makes plane-to-plane contact with the matrix and can wrinkle and detach from the ABS (see Table 2). On the other hand, some bending and twisting in the structure of the CNT could prevent their detachment from the ABS matrix. Thus, these factors may induce a better interfacial interaction between the CNT and the ABS matrix. As a result, the load can be efficiently transferred from the ABS matrix to the CNT nanofillers, and therefore the tensile strength is higher for the CNT nanocomposites. Analogously, the strain at break was observed to be more severely reduced in the case of the GNP nanocomposites.
Another interesting result is the possibility of achieving very high concentrations of GNP under proper processing conditions. In particular, the 30 wt % reported in the present paper represents the highest fraction reported in the literature for ABS/graphene composites. Interestingly, these composites show an elastic modulus of about 7362 MPa and a tensile strength of about 44 MPa. To compare the mechanical properties of ABS composites reported in the literature, a normalized modulus was evaluated as follows:

E_norm = (E_c − E_m)/(E_m · w_f)

where E_c is the modulus of the ABS composite, E_m is the matrix modulus of neat ABS and w_f is the weight fraction of incorporated filler [6,31]. According to the experimental data, a maximum strength value was obtained for 6 wt % of CNT, whereas a maximum stiffening effect (E_norm) was observed for 6 wt % of GNP, maintaining an acceptable deformation at break (3-4%) for both compositions, even in the absence of yielding.
The empirical Halpin-Tsai model is a simple approach for predicting the modulus of composite materials, which takes into account the modulus of the matrix E_M and of the filler E_F, the filler aspect ratio ξ and the volume fraction of filler V_f, assuming a homogeneous dispersion and perfect interfacial adhesion between polymer and filler [32-36]. The tensile moduli in the longitudinal (E_L) and transverse (E_T) directions can be predicted according to the Halpin-Tsai model [37,38] by the following equations:

E_L = E_M (1 + ξ η_L V_f)/(1 − η_L V_f)
E_T = E_M (1 + 2 η_T V_f)/(1 − η_T V_f)

where the parameters η_L, η_T and ξ are defined as [34,39,40]:

η_L = (E_F/E_M − 1)/(E_F/E_M + ξ)
η_T = (E_F/E_M − 1)/(E_F/E_M + 2)
ξ = D_f/t_f for platelets and ξ = L_f/D_f for fibres

where D_f and t_f are the lateral diameter and thickness of the platelets, and L_f and D_f are the length and diameter of the fibres, respectively.

The volume fraction V_f is linked to the weight fraction w_f through the following equation:

V_f = (w_f/ρ_f) / [w_f/ρ_f + (1 − w_f)/ρ_M]

where ρ_M and ρ_f are the densities of the ABS matrix and of the nanofiller, respectively. Subsequently, the modulus of a composite with platelet fillers aligned parallel to the loading direction (E_Parallel), and with platelet/fibre fillers randomly oriented in two dimensions (E_2D,Random) or in three dimensions (E_3D,Random), can be predicted according to the literature [39-41] as follows.

For plates:

E_Parallel = E_L
E_3D,Random = 0.49 E_L + 0.51 E_T

For fibres:

E_2D,Random = 0.375 E_L + 0.625 E_T
E_3D,Random = 0.184 E_L + 0.816 E_T

In the Halpin-Tsai model, an experimental modulus of 2315 MPa was considered for neat ABS (Table 2). The aspect ratios were taken as 833 for GNP (D_f = 5000 nm and t_f = 6 nm) and 158 for CNT (L_f = 1500 nm and D_f = 9.5 nm). An elastic modulus of 70 GPa was assumed for both graphene and carbon nanotubes [42,43].
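A sketch of the model evaluation for a 3D-randomly oriented filler, using the parameter values quoted above; the matrix and filler densities are assumptions, since they are not stated in the text, and the orientation-averaging coefficients are the values commonly adopted in the literature:

```python
def halpin_tsai_3d_random(E_M, E_F, V_f, xi, platelet=True):
    """3D-random-orientation modulus from the Halpin-Tsai equations.

    xi is taken here as the bare aspect ratio, and the averaging
    coefficients (0.49/0.51 for platelets, 0.184/0.816 for fibres)
    are modelling assumptions.
    """
    ratio = E_F / E_M
    eta_L = (ratio - 1) / (ratio + xi)
    eta_T = (ratio - 1) / (ratio + 2)
    E_L = E_M * (1 + xi * eta_L * V_f) / (1 - eta_L * V_f)
    E_T = E_M * (1 + 2 * eta_T * V_f) / (1 - eta_T * V_f)
    a, b = (0.49, 0.51) if platelet else (0.184, 0.816)
    return a * E_L + b * E_T

E_M, E_F = 2315.0, 70_000.0           # MPa, values used in the paper
w_f, rho_M, rho_f = 0.06, 1.04, 1.85  # densities in g/cm3 (assumed)
V_f = (w_f / rho_f) / (w_f / rho_f + (1 - w_f) / rho_M)
print(halpin_tsai_3d_random(E_M, E_F, V_f, xi=833, platelet=True))
```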
The experimental data show good agreement with the Halpin-Tsai model assuming 3D randomly oriented nanofillers, as expected from the melt compounding process. However, the elastic moduli of the ABS/GNP nanocomposites were lower than the predicted values when the GNP content exceeded 6.5 vol % (12 wt %), which indicates a decrease in the reinforcing efficiency (see Figure 5b).
Creep Stability
The isothermal creep compliance of the ABS/GNP and ABS/CNT nanocomposites under a constant load of 3.9 MPa at 30 °C is reported in Figure 6a,b. If no plastic deformation occurs, the isothermal tensile creep compliance D_tot(t) consists of two components, elastic (instantaneous) D_el and viscoelastic (time-dependent) D_ve, as defined in Equation (16):

D_tot(t) = D_el + D_ve(t) (16)

Following the described models of creep evaluation, the elastic (D_el), viscoelastic (D_ve,3600s) and total (D_t,3600s) components of the creep compliance after 3600 s have been calculated; the results are summarized in Table 3. As expected, the introduction of graphene or carbon nanotubes results in a significant improvement of the creep stability of the materials, and the higher the filler content, the lower the creep compliance (see Table 3). The role of the nanofillers is to restrict polymer chain mobility, thus increasing creep stability. The empirical Findley model (power law), summarized in Equation (17), was used to describe the viscoelastic creep response [44-46]:

D(t) = D_e + k·t^n (17)

where D_e is the elastic (instantaneous) creep compliance, k is a coefficient related to the magnitude of the underlying retardation process and n is an exponent related to the time dependence of the creep process. The fitting parameters for the experimental creep data are summarized in Table 3. The fit was satisfactory, with R² around 0.99 for all samples. The coefficient n reflects the kinetics of displacement of the macromolecular segments in the viscous medium during creep; the ABS/GNP nanocomposites exhibit higher n values than the corresponding ABS/CNT nanocomposites.

Table 3. Creep compliance data of ABS-graphene and ABS-CNT nanocomposites according to Equation (17).

In addition, the creep compliance of the GNP nanocomposites appeared to be significantly lower than that of the CNT nanocomposites at the same nanofiller fraction. This reduction is largely associated with the reduction of the elastic component (either D_el or D_e, for both models). In summary, the ABS/graphene nanocomposites exhibited a higher creep stability than the CNT nanocomposites, in agreement with the observed difference in moduli (as shown in Table 2).

Further information can be obtained by considering the creep compliance curves at various temperatures from 30 °C to 90 °C (see Figure S2 and Table 4). The higher the temperature, the higher the creep compliance, especially for neat ABS in the proximity of T_g. Selected data are shown in Figure 7; a relevant reduction of the creep compliance of GNP-6 and CNT-6 was observed at the highest investigated temperature (90 °C).

From the results of the total creep compliance at different temperatures, the activation energy (E_act) of the creep process for the investigated nanocomposites can be evaluated from the slope of the best-fitting straight lines by using an Arrhenius-type equation, as above [29,30]:

D_tot(T) = D_0 exp(−E_act/RT)

where D_0 is a pre-exponential factor, T is the temperature selected for the creep experiment and R is the universal gas constant (8.314 J/mol·K). The intercept D_0 formally represents the creep compliance at infinite temperature. As shown in Table 5, the activation energy of neat ABS is 15 kJ/mol, and it increases for the nanocomposites with 6 wt % of graphene or CNT. It should be noted that the activation energy of the ABS/GNP nanocomposites appears to be slightly higher than that of the corresponding ABS/CNT nanocomposites, which may explain their higher creep stability.
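A sketch of fitting Equation (17) to a compliance curve with nonlinear least squares; the data below are synthetic, not the measured curves of Figure 6:

```python
import numpy as np
from scipy.optimize import curve_fit

def findley(t, D_e, k, n):
    """Findley power law, Eq. (17): D(t) = D_e + k * t**n."""
    return D_e + k * t**n

# Synthetic compliance data (1/GPa) over 3600 s, invented for illustration
t = np.linspace(1, 3600, 50)
D_obs = findley(t, 0.40, 0.010, 0.30) \
    + np.random.default_rng(1).normal(0, 0.002, t.size)

popt, _ = curve_fit(findley, t, D_obs, p0=[0.4, 0.01, 0.3])
D_e, k, n = popt
print(f"D_e = {D_e:.3f} 1/GPa, k = {k:.4f}, n = {n:.3f}")
```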
Electrical Resistivity
The electrical volume resistivity values of the CNT- or graphene-filled ABS (compression moulded) plates are reported in Figure 8 as a function of the nanofiller fraction. The introduction of the carbon-based nanofillers into the insulating polymeric matrix increases the conductivity of the nanocomposites, with a direct dependence on the type and content of the nanofiller. For example, a resistivity lower than 10^2 Ω·cm can be achieved with a CNT content of 2 wt %. The introduction of CNTs confers a good conductivity to the nanocomposite samples, reflecting the lower electrical percolation threshold of the CNT nanocomposites with respect to the GNP-filled nanocomposites: this threshold is below 2 wt % for CNT, while values between 8 and 12 wt % were found for GNP. The larger resistivity reduction reached with the introduction of carbon nanotubes could be attributed to their better dispersion level. According to statistical percolation theory, the conductivity data as a function of filler volume fraction can be fitted by a power-law equation:

σ = σ_0 (φ − φ_c)^t

which can be adapted to electrical resistivity as follows:

ρ = ρ_0 (φ − φ_c)^(−t)

where ρ is the composite resistivity, ρ_0 is a scale factor related to the intrinsic filler resistivity, φ is the filler volume fraction, φ_c is the percolation threshold and t is the critical exponent. A t value in the range 1.1-1.3 indicates conduction through a 2D network, whereas for a 3D network the value lies in the range 1.6-2.0. The best-fit lines in Figure 9 indicate percolation thresholds of 3.8 vol % (~7.3 wt %) for GNP and 0.4 vol % (~0.9 wt %) for CNT. In addition, the t values were found to be 7.3 and 1.8 for graphene and carbon nanotubes, respectively. These results suggest that a 3D network is formed in both the GNP and the CNT composites. In the literature, Zhao et al. [47] reported t values in the range 2.40-6.92 for graphene-based polymer composites, while the t values for CNT were found in the range 1.3-4.0 [48] and around 2.0 [49]. Various models applying different parameters for the evaluation of conductivity and other properties have also been presented in a recent paper by Zare et al. and references therein [50].
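Fitting the percolation law is conveniently done in log space; the volume fractions and resistivities below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_resistivity(phi, log_rho0, phi_c, t):
    """log10 of the percolation law rho = rho_0 * (phi - phi_c)**(-t)."""
    return log_rho0 - t * np.log10(phi - phi_c)

# Illustrative resistivity data (Ohm*cm) above the percolation threshold
phi = np.array([0.010, 0.020, 0.030, 0.045])
rho = np.array([8e3, 3e2, 6e1, 1.5e1])

# Bounds keep phi_c below the smallest measured volume fraction
popt, _ = curve_fit(log_resistivity, phi, np.log10(rho),
                    p0=[0.0, 0.005, 2.0],
                    bounds=([-5, 0.0, 0.5], [10, 0.009, 8.0]))
log_rho0, phi_c, t = popt
print(f"phi_c = {phi_c:.4f}, t = {t:.2f}")
```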
Surface Temperature under Applied Voltage
In this section, the measurements of Joule's heating produced by voltage application on the samples with different contents of CNT are presented. These tests were performed by using two different voltages, 12 V and 24 V which are commonly reached by batteries for automotive applications. We monitored the evolution of the surface temperature of studied nanocomposites as a function of applied voltage and exposition time.
Representative images of the surface temperature evolution taken by an IR thermal camera under an applied voltage of 12 V for the CNT-6 and CNT-8 nanocomposites are reported in Figure 10. It is evident that the samples start heating as soon as a voltage is applied, and a homogeneous temperature profile can be detected even after a prolonged time (i.e., 120 s). As expected, the temperature in the central section of the sample is higher than that detectable at the borders, because heat exchange is favoured in the external zones of the samples. Under an applied voltage of 24 V (see Figure 11), both the CNT-6 and CNT-8 samples rapidly reached a temperature higher than 280 °C after 10 s. On the other hand, the graphene composites with 2-8 wt % nanofiller fraction are below the percolation threshold (ρ > 10^13 Ω·cm), and consequently no heating is generated in the samples. The numerical results of the temperature increment upon applied voltages of 12 V and 24 V are shown in Figure 12a,b, respectively. The first aspect to underline is that not all samples can be significantly heated through voltage application. In fact, only samples with a CNT content higher than 4 wt % increase their surface temperature when a voltage of 12 V is applied. At an applied voltage of 24 V, only the CNT-2 sample does not significantly increase its surface temperature, while the CNT-4 sample shows moderate heating after 120 s. Very effective results were obtained for all the other samples; for instance, for the CNT-6 and CNT-8 samples it was not possible to reach the end of the test because they thermally decomposed with the emission of dense smoke, characteristic of polymers containing aromatic rings.
Comparative Effects of GNP and CNT
In order to evaluate the beneficial and negative effects of GNP and CNT on the properties of ABS nanocomposites, a graphical representation of selected properties is given in a radar plot. Figure 13 clearly evidences that GNP has a positive effect on the mechanical properties, that is, it increases the tensile modulus and reduces the creep compliance, while only small reductions of MFI and of tensile strength were observed. On the other hand, CNTs account for an interesting and valuable improvement of conductivity (i.e., reduction of resistivity) and a slight increase in tensile strength, but deteriorate material processability due to the strong reduction of melt flow. Hence it is possible to conclude that both GNP and CNT simultaneously have positive and negative effects on the processing as well as on the properties of the nanocomposites, even if at different levels.
For the purpose of a quantitative evaluation of the nanofiller effects, comparative parameters can be defined that take into consideration specific properties important for the applications. A first parameter, P_E,ρ, that maximizes both stiffness and conductivity can be calculated from Equation (21):

P_E,ρ = E/ρ (21)

where E and ρ represent the modulus and the resistivity of ABS and its nanocomposites, respectively. The comparison of these parameters is reported in Figure S3. The CNT nanocomposites evidenced the higher parameters, the predominant factor being their higher conductivity, and P_E,ρ of the CNT nanocomposites was found to increase directly with the filler fraction. On the other hand, much lower values were observed for the GNP nanocomposites, with only a small variation between 2-4 and 6-8 wt % of filler content. However, for a better comparison of the nanofillers, the processability of the nanocomposites should also be taken into account. Hence, the parameter P_E,M,ρ defined by Equation (22) can be used for the comparison:

P_E,M,ρ = (E × MFI)/ρ (22)

where MFI is the melt flow index. Figure 14 shows that only 6 and 8% of GNP lead to a positive variation of the combined factors, that is, stiffness, processability and resistivity, with respect to the ABS matrix. As can be seen, the parameter P_E,M,ρ is much higher for the CNT nanocomposites than for the GNP composites and remains almost constant over the investigated range of 2-8% of CNT. In general, the values of E × MFI/ρ increased with filler fraction for the nanocomposites with 6-8% of GNP, while a relative maximum was reached for the nanocomposites at 4 wt % of CNT.
Similarly, the same comparative parameter P_E,M,ρ was also evaluated as a function of the MFI at 220 °C. Figure S4 covers the regions up to the highest experimental fractions of filler, that is, 8 wt % of CNT and 30 wt % of GNP in the ABS nanocomposites. From this point of view, the effect of 20-30% of GNP is approximately equivalent to that of 6-8% of CNT.
Some more detailed comparison of the tested properties is reported in Supplementary Materials Table S1. Obviously, proper combination of these two fillers can be selected in order to tune the properties of the ABS/GNP/CNT nanocomposites for the intended applications.
Conclusions
Using a solvent free mixing process, GNP and CNT nanofillers were properly dispersed in ABS matrix up to their maximum concentration, that is, 30% and 8% by wt, respectively. Comparative experimental results evidence that the incorporation of both nanofillers produced significant effects on the properties of the ABS nanocomposites.
The addition of nanofillers accounts for an increase in modulus and tensile strength as well as significant reduction of the strain at break. In particular, ABS/GNP composites show a slightly higher stiffness and creep stability than ABS/CNT ones. On the other hand, the tensile strength of ABS/GNP samples up to 20% of the filler was similar to that of neat ABS, whereas the tensile strength of the ABS/CNT composites was slightly enhanced, probably due to better dispersion level and stronger interfacial interactions between CNTs and the ABS matrix. Furthermore, ABS/GNP nanocomposites with 30 wt % filler content showed higher elastic modulus and strength, that is, of 7.4 GPa and 44 MPa, still maintaining moderate processability. On the other hand, significant or almost critical reduction of MFI was observed with rising fraction of CNT. For this reason, CNT nanocomposites need to be processed at higher temperatures than corresponding GNP nanocomposites.
ABS/CNT nanocomposites showed much lower electrical resistivity than corresponding ABS/GNP composition. In fact, a marked resistivity drop in the range 1-10 Ω·cm was observed after dispersion of 4-8 wt % of CNT. Electrical percolation threshold was achieved for 7.3 wt % (3.8 vol %) of GNP and 0.9 wt % (0.4 vol %) of CNT. In the case of CNT-filled samples, the Joule's effect tests indicated a rapid heating upon voltage application. In contrast, no increase in the temperature during prolonged voltage application was observed for ABS/GNP nanocomposites. It is possible to conclude that the investigated carbonaceous nanofillers account for positive as well as some negative effects on tested physical properties of ABS composites. It is worth noting that GNP brings about notable increases in modulus and only rather moderate reduction of melt flow index. In the case of CNT, the relevant improvement of nanocomposite conductivity is accompanied by a marked increase of melt viscosity, which could be problematic in material processing.
For future work, it can be presumed that proper combinations of CNT and GNP at convenient fractions (e.g., 6 wt % in the hybrid) can offer a possible compromise including relatively easy processability, acceptable mechanical properties and specific electrical properties for applications which require polymeric materials with low electrical resistivity.
Supplementary Materials: The following are available online at http://www.mdpi.com/2079-4991/8/9/674/s1, Figure S1: Representative stress-strain curves of ABS and ABS nanocomposites with (a) graphene and (b) carbon nanotubes, Figure S2: Creep compliance curves of (a) ABS matrix and nanocomposites with (b) GNP-6 or (c) CNT-6 under applied load of 3.9 MPa at different temperatures in the range 30-90 • C, Figure S3: Comparison of the parameters P E,ρ encompassing the effects of elastic modulus and resistivity of nanocomposites, as function of the nanofiller fraction up to 8 wt % (see Equation (21)), Figure S4: Comparison of parameters P E,M,ρ encompassing the effects of elastic modulus, melt flow index (at 220 • C and 10 kg) and resistivity, as function of nanofiller fraction up to 8 wt % of CNT or 30 wt % of GNP (See Equation (22)), Table S1: Summary of representative properties of ABS matrix and its composites with GNP-M5 or CNT nanofillers. The values for filler fractions 2 wt % and 8 wt % are calculated by using Equations (S1) and (S2).
Author Contributions: S.D., A.P. and L.F. conceived and designed the experiments; S.D. performed the experiments; S.D., A.P. and L.F. analysed the data and wrote the paper.
Funding: This research received no external funding.
"year": 2018,
"sha1": "b99b29ac7b4dfc8a24c36b5a5373bce949511773",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/8/9/674/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a900c850cf83fe342673b55acd3766c767f54de3",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
The Effects of Composts on Adsorption-Desorption of Three Carbamate Pesticides in Different Soils of Aligarh District

O. P. Bansal
The effects of various organic manures [farmyard manure (FYM), sewage sludge and poultry manure] on the adsorption-desorption of three carbamate pesticides [oxamyl (I), S-ethyl-N-[(methylcarbamoyl)oxy]thioacetimidate (II) and N-phenyl(ethylcarbamoyl)propyl carbamate (III)] in six soil samples of Aligarh district was studied. Addition of organic manure increased soil organic carbon content and electrical conductance, while pH decreased. The adsorption isotherms were of 'L' type, and the adsorption-desorption data conformed to the Freundlich adsorption isotherm equation. Adsorption increased with the amount of organic manure and followed the order sewage sludge > FYM > poultry manure. The adsorptivity of the soils was in the order soil No. 1 > 2 > 3 > 4 > 5 > 6. The adsorption capacity was significantly positively correlated with soil organic carbon and CEC and negatively correlated with soil pH. Desorption was greater in unamended soil than in manure-amended soil and decreased with increasing amounts of organic manure. Desorption showed hysteresis, indicated by the higher adsorption slope (1/n ads) compared with the desorption slope (1/n des).
Pesticide adsorption and desorption are the key processes determining whether a pesticide used will have any impact on environmental quality. For most pesticides, soil organic matter and clay contents are the most important properties affecting sorption and transformation (Durovic et al., 2009; Osborn et al., 2009; Villavarde et al., 2008). The use of composts derived from source-separated municipal solid waste/animal manure/FYM is now a common agronomic practice throughout the world. Such amendments improve the physico-chemical properties of the soil. Application of such composts affects the fate and mobility of applied pesticides in soil, as the addition of compost increases, besides nutrients, the soil organic matter content (Paustin et al., 1992). Carbamate pesticides are widely used as insecticides, nematicides and herbicides (Hague, 1979). Investigations on the adsorption of carbamate pesticides by clays have shown that they are adsorbed by co-ordination and/or protonation at the carbonyl oxygen by exchangeable cations of clays (Bansal 2009, 1983; Li et al., 2003). The purpose of the present study is to examine the effect of different amounts of organic manure (FYM, sewage sludge and poultry manure) on the extent of adsorption/desorption of three polar, water-soluble carbamate pesticides: oxamyl (I) [methyl-2-(dimethylamine)-N-{(methyl amino)carbonyl}oxy]-2-oxoethanimidothioate; S-ethyl-N-(methylcarbamoyl)oxythioacetimidate (II); and N-phenyl(ethyl carbamoyl)propyl carbamate (III), in six soils of Aligarh district.
MATERIAL AND METHODS
The six soils (1-6) selected for this study were taken from different parts of Aligarh district at the plough layer (0-30 cm). They were air dried at room temperature and sieved through a 100 mesh sieve. Their physico-chemical properties were determined by standard soil laboratory methods and clay mineralogy by an X-ray diffraction procedure on orientated specimens. Physico-chemical properties, clay mineralogy and classification are given in Table 1.

Soil amendment: Soils (1-6) were amended with 0, 2.5 and 5 g of organic manure (FYM, sewage sludge and poultry manure) kg⁻¹ soil at 60% moisture level of water holding capacity and incubated for 45 d at 25 ± 2 °C. The physico-chemical properties of the amended soils, as determined by standard methods, are given in Table 2.

The flow rate of nitrogen gas was 50 mL min⁻¹. The retention times for carbamate pesticides I, II and III were 2.14, 1.84 and 2.62 min respectively. Before use, the GC column was primed with several injections of standard pesticides until a consistent response was obtained for each pesticide. The concentration of each sample was quantified by comparing the peak heights of the sample chromatograms with those of standards run under identical operating conditions. Recovery was 93-99% and the minimum detection limit was 0.05 µg g⁻¹.
Desorption studies: For desorption, 50 mL of distilled water was added to the soil residue left after centrifugation and the samples were shaken for 30 h in a shaker. The supernatant was centrifuged and the amount of pesticide desorbed was estimated in aliquots as described above.
All the experiments were conducted in duplicate with suitable blanks.
RESULTS AND DISCUSSION
Table 2 shows that the addition of organic manure (FYM, sewage sludge, poultry manure) to soil influences the soil chemical properties: soil organic carbon content and electrical conductance increased while pH decreased.

The empirical Freundlich relationship can be used to describe the carbamate pesticide adsorption results on the soils of Aligarh district. The linear form of this equation is log C = log K + (1/n) log Ce, where C is the amount (mg kg⁻¹) of pesticide adsorbed by the soil, Ce is the equilibrium concentration in solution (mg L⁻¹), K (mg^(1-1/n) L^(1/n) kg⁻¹) is the Freundlich adsorption coefficient and 1/n describes the isotherm curvature. The values of 1/n during adsorption of the three carbamate pesticides on the six different soils were less than unity (0.830-0.950), indicating a convex or 'L' type of isotherm (Figs. 1-3) (Giles et al., 1960; 1974). These kinds of isotherms arise because of minimum competition of the solvent for sites on the adsorbing surface. The slope of the isotherm steadily decreases with the rise in solute concentration because vacant sites become less accessible with the progressive covering of the surface. The curvilinear isotherm suggests that the number of available sites for adsorption becomes a limiting factor. Adsorption was in the order pesticide III > I > II, which is supported by the values of K (31.6-56, 30.2-53.5 and 32.6-60.2 for pesticides I, II and III respectively) and 1/n (0.835-0.935, 0.830-0.900 and 0.855-0.950 for pesticides I, II and III respectively). The adsorption of all three carbamate pesticides follows the order soil 1 > 2 > 3 > 4 > 5 > 6.
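To make the linearized fit concrete, the sketch below estimates K and 1/n by ordinary least squares on log-transformed batch data; the (Ce, C) values are hypothetical placeholders for a single soil-pesticide combination, not measurements from this study.

```python
# Sketch: linearized Freundlich fit, log C = log K + (1/n) log Ce.
# The data below are illustrative, not values from this study.
import numpy as np

ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])           # equilibrium concentration, mg/L
c_ads = np.array([55.0, 120.0, 210.0, 370.0, 640.0])  # amount adsorbed, mg/kg

# Degree-1 polyfit on log-log data returns (slope, intercept) = (1/n, log K)
one_over_n, log_k = np.polyfit(np.log10(ce), np.log10(c_ads), 1)
k = 10.0 ** log_k

print(f"K = {k:.1f}, 1/n = {one_over_n:.3f}")  # 1/n < 1 indicates an 'L'-type isotherm
```

Any standard linear-regression routine gives the same estimates; only the log-log transformation of the batch data matters.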
The carbamate pesticide adsorption isotherms for the organic-material-amended soils are given in Fig. 1. The isotherms were non-linear. The adsorption of pesticides in organic-manure-amended soil was greater than in unamended soils. This increase could be related to the organic matter added to the soil increasing the number of sorption sites available for adsorption. The interaction between pesticide and organic manure occurs via multiple bonding mechanisms, including ionic bonds between negatively charged organic matter and positively charged pesticides and/or hydrogen bonds between pesticides and organic matter (Villavarde et al., 2008).
The adsorption of pesticides in the presence of organic manure followed the order sewage sludge > FYM > poultry manure, which may be correlated with organic carbon content. The values of Kd (distribution coefficient) [Kd = (x/m)/Ce] were 42.5-8.4, 40-7.3 and 31.7-6.7 for sewage sludge, FYM and poultry manure amended soils respectively, confirming the above inferences. The values of Kd increase with the addition of organic manure (from 28-8.2 to 42.5-13 on amending with 0-5 g sewage sludge kg⁻¹ soil). The values were minimum in unamended soil No. 6 and maximum in sewage-sludge-amended soil 1. These results confirm the role of organic matter as the primary adsorbing component for the studied pesticides.

Desorption results indicate that part of the adsorbed pesticide can be desorbed by water. The desorption isotherms followed the same pattern as those of adsorption. Desorption was in the order pesticide II > I > III. Desorption in unamended soil was greater than in amended soil and followed the order unamended > poultry manure > FYM > sewage sludge, indicating that organic manure adversely affects desorption due to the lower availability and greater retention of pesticide by organic carbon. The desorption isotherms also obey the Freundlich equation. With increasing levels of organic manure (0-5 g kg⁻¹ soil) the value of the Freundlich desorption coefficient K' increased (49.9 to 59.4 for sewage-sludge-amended soil 1) and 1/n' decreased (0.880 to 0.865 for sewage-sludge-amended soil 1); the higher values of K' are indicative of more difficult desorption.
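The distribution coefficient itself is a one-line computation from a single batch point. In the sketch below the equilibrium concentration and amount adsorbed are assumed values, chosen only so that the outputs match the Kd range quoted above for soil 1; they are not measurements from this study.

```python
# Sketch: distribution coefficient Kd = (x/m) / Ce, in L/kg.
# Illustrative numbers only; Ce is assumed and x/m is chosen to
# reproduce the Kd values quoted in the text for soil 1.
def kd(x_over_m_mg_per_kg, ce_mg_per_l):
    return x_over_m_mg_per_kg / ce_mg_per_l

print(kd(280.0, 10.0))  # unamended soil 1 -> Kd = 28.0
print(kd(425.0, 10.0))  # soil 1 + 5 g/kg sewage sludge -> Kd = 42.5
```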
Fig 1. Effect of organic manure on the adsorption of pesticide I on soil 1.
Fig 2. Effect of organic manure on the adsorption of pesticide II on soil 1.
Table 1. Characteristics of soils

S/No | Place | Taxonomical name | Soil units | Silt % | Clay % | pH (1:2.5) | Organic C % | CaCO3 % | CEC (Cmol(P) kg⁻¹) | Surface area (m² g⁻¹) | Major clay minerals
1 | A | Entisol | Typic Ustochrept | 38.8 | 16.2 | 7.8 | 0.88 | 6.3 | 12.8 | 38.6 | Q, I, C
2 | B | Acidisol | Typic Arigids | 52.0 | 14.0 | 8.8 | 0.72 | 7.6 | 11.6 | 36.2 | Q, I, C
3 | C | Aridisol | Typic Orthids | 37.5 | 12.0 | 7.4 | 0.63 | 5.8 | 10.8 | 30.1 | Q, M, I, C
4 | D | Aridisol | Typic Orthids | 44.9 | 13.6 | 7.6 | 0.56 | 6.6 | 10.3 | 27.2 | Q, I, C
5 | E | Alfisol | Calciorthents | 24.9 | 9.8 | 8.3 | 0.44 | 8.8 | 9.9 | 32.1 | Q, I, K, C
6 | F | Inceptisol | Calciorthents | 27.1 | 8.5 | 8.1 | 0.48 | 9.0 | 8.6 | 25.6 | Q, I, K, C
Organic manures: FYM pH 7.9, OC 23.3%, CEC 42; sewage sludge pH 7.6, OC 26.6%, CEC 46; poultry manure pH 6.9, OC 20.2%, CEC 38.
A = Hathras; B = Sikandra Rao; C = Datawali; D = Khair; E = Atrauli; F = Bank of Yamuna, Tappal. Q = quartz; I = illite; M = montmorillonite; C = calcite; K = kaolinite.
Table 2. Effect of application of organic material on soil properties

Entries are pH / OC (g kg⁻¹) / EC (dS m⁻¹) for soils 1-6 at each amendment level (g kg⁻¹ soil).
Sewage sludge
0: 7.8/8.8/0.66 | 8.8/7.2/0.61 | 7.4/6.3/0.55 | 7.6/5.6/0.57 | 8.3/4.5/0.52 | 8.1/4.8/0.44
2.5: 7.7/10.6/0.73 | 8.5/9.9/0.67 | 7.3/8.9/0.59 | 7.4/8.1/0.59 | 8.1/7.0/0.56 | 8.0/7.5/0.49
5.0: 7.5/11.7/0.78 | 8.1/11.0/0.71 | 7.2/10.0/0.66 | 7.1/9.3/0.63 | 7.8/8.1/0.60 | 7.7/8.0/0.53
FYM
0: 7.8/8.8/0.66 | 8.8/7.2/0.61 | 7.4/6.3/0.55 | 7.6/5.6/0.57 | 8.3/4.5/0.52 | 8.1/4.8/0.44
2.5: 7.7/10.3/0.71 | 8.5/9.7/0.68 | 7.4/8.7/0.57 | 7.4/8.0/0.58 | 8.1/6.9/0.55 | 7.9/7.2/0.48
5.0: 7.6/11.4/0.76 | 8.2/10.7/0.70 | 7.3/9.8/0.63 | 7.2/9.1/0.62 | 7.8/8.0/0.60 | 7.7/7.9/0.52
Poultry manure
0: 7.8/8.8/0.66 | 8.8/7.2/0.61 | 7.4/6.3/0.55 | 7.6/5.6/0.57 | 8.3/4.5/0.62 | 8.1/4.8/0.44
2.5: 7.7/10.0/0.70 | 8.5/9.6/0.66 | 7.4/8.6/0.57 | 7.5/7.8/0.60 | 8.2/6.8/0.55 | 7.9/7.2/0.47
5.0: 7.7/11.0/0.73 | 8.2/10.5/0.69 | 7.2/9.6/0.62 | 7.2/8.9/0.61 | 7.9/7.8/0.58 | 7.8/7.8/0.50 | 2017-09-12T18:57:40.640Z | 2011-01-25T00:00:00.000 | {
"year": 2011,
"sha1": "e7c7f2465ebcd059e178d74989c6948a6a084545",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/jasem/article/download/63305/51189",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "e7c7f2465ebcd059e178d74989c6948a6a084545",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
248457115 | pes2o/s2orc | v3-fos-license | Raynaud’s Phenomenon with Focus on Systemic Sclerosis
Raynaud’s phenomenon is a painful vascular condition in which abnormal vasoconstriction of the digital arteries causes blanching of the skin. The treatment approach can vary depending on the underlying cause of disease. Raynaud’s phenomenon can present as a primary symptom, in which there is no evidence of underlying disease, or secondary to a range of medical conditions or therapies. Systemic sclerosis is one of the most frequent causes of secondary Raynaud’s phenomenon; its appearance may occur long before other signs and symptoms. Timely, accurate identification of secondary Raynaud’s phenomenon may accelerate a final diagnosis and positively alter prognosis. Capillaroscopy is fundamental in the diagnosis and differentiation of primary and secondary Raynaud’s phenomenon. It is helpful in the very early stages of systemic sclerosis, along with its role in disease monitoring. An extensive range of pharmacotherapies with various routes of administration are available for Raynaud’s phenomenon but a standardized therapeutic plan is still lacking. This review provides insight into recent advances in the understanding of Raynaud’s phenomenon pathophysiology, diagnostic methods, and treatment approaches.
Introduction
Raynaud's phenomenon (RP) is defined as intermittent, excessive vasoconstriction of the microvasculature, triggered by cold exposure or emotional stress [1]. The classic clinical picture involves changes in skin colour from white (ischemia), to blue (cyanosis), and red (reperfusion). These changes are associated with a significant burden of pain and hand-related disability [2]. Raynaud's phenomenon most often occurs in the fingers/toes. Less commonly, the nose, tongue, nipples, and pinnae of the ears may be involved [3].
Raynaud's phenomenon can be subdivided into primary (idiopathic) and secondary forms [ Table 1]. Primary Raynaud's phenomenon (PRP) has an estimated prevalence of 5% in the general population, and most often occurs in young women [3]. Both forms of Raynaud's phenomenon are more common in cold climates [4].
Patients with PRP have a younger age of onset (usually between the age of 15 and 30) than those with secondary Raynaud's phenomenon (SRP), and the thumb is usually not involved [5]. The latest diagnostic criteria for PRP are: history of episodic, acral, bi- or triphasic colour change; normal nailfold capillaries; antinuclear antibody (ANA) titer < 1:40 (i.e., negative); no association with underlying systemic disease; and no history of collagen vascular disease [6].
A large population-based cohort study revealed that low body weight and previous involuntary weight loss are significantly associated with an increased risk of RP in both men and women [7].

Table 1. Differences between primary and secondary Raynaud's phenomenon [3,5,6,8,9]; ANA = antinuclear antibodies.

Secondary causes of RP (Figure 1) include various autoimmune connective tissue disorders (systemic sclerosis (SSc), systemic lupus erythematosus (SLE), Sjögren's syndrome, idiopathic inflammatory myopathies, antisynthetase syndrome (ASyS)), thoracic outlet syndrome, cervical rib, embolic or thrombotic events, vibration-induced trauma, and multiple different medications [2,10,11], the most relevant being β-adrenoceptor blockers, vinyl chloride, interferons, and chemotherapy [12,13]. Raynaud's phenomenon frequently represents the initial manifestation in patients who have mixed connective tissue disease (MCTD). It is the cutaneous symptom of a systemic vasculopathy characterized by intimal fibrosis and blood vessel obliteration that frequently leads to visceral involvement. Raynaud's phenomenon appears in 18-46% of patients with systemic lupus erythematosus [14,15]. Looking for signs of arthritis or vasculitis, together with a number of laboratory tests (Table 2), may help to differentiate these conditions. A complete blood count may reveal a normocytic anaemia, suggesting chronic disease or kidney failure. Blood tests for urea and electrolytes may reveal kidney impairment. Tests for rheumatoid factor, erythrocyte sedimentation rate, C-reactive protein, and autoantibody screening may reveal specific causative illnesses or an inflammatory process. Thyroid function tests may reveal hypothyroidism [16,17].
Systemic sclerosis is an autoimmune disorder characterized by inflammation, fibrosis, and microvasculopathy. It results in potentially widespread fibrosis and vascular abnormalities, which can affect the skin, lungs, heart, gastrointestinal tract and kidneys. The uncontrolled fibrosis of the skin and internal organs in systemic sclerosis leads to severe and sometimes life-threatening complications.
The underlying mechanisms are complex and remain largely unknown [18,19]. Definitive diagnosis is made with fulfilment of the 2013 European League Against Rheumatism (EULAR) and American College of Rheumatology (ACR) classification criteria [20]. Over the past 15 years, efforts have been made towards early diagnosis [21]. Almost all individuals with SSc have detectable circulating antibodies against nuclear proteins, and different SSc phenotypes are strongly associated with the different antibody types [22]. Several forms of the disease are distinguished. Diffuse cutaneous systemic sclerosis (dcSSc) is characterized by the quickest course, with internal organ involvement already at an early disease stage, and a poor prognosis [23]. Skin thickening confined to sites distal to the elbows or knees is classified as limited cutaneous SSc (lcSSc). Around 20% of patients with lcSSc may present with features of SSc as a component of an overlap syndrome [24,25].
Table 2. Raynaud's phenomenon: proposed laboratory tests [16,17]. TPO = thyroid peroxidase; TG = thyroglobulin; ANA = antinuclear antibody.
• Complete blood count
• Erythrocyte sedimentation rate
• Thyroid function tests: thyroid-stimulating hormone (TSH), thyroxine (T4), TPO antibodies, TG antibodies

Almost all SSc patients suffer from RP; it is typically the initial manifestation of the disease and may precede the involvement of other organs by many years, especially in lcSSc [26]. There are many areas in which RP causes considerable disease-related morbidity in SSc patients, including impaired hand function, pain, reduced social engagement, diminished body image, increased dependence on others, and reduced quality of life [27]. Though rare, several paraneoplastic causes of RP have been identified secondary to malignancies of the lung, breast, uterus, and ovaries [28,29].
RP has additionally been reported as a side effect of biological agents (for example interferon), radiotherapy, and chemotherapeutic agents, particularly bleomycin (alone or combined with vinca alkaloids or cisplatin) and cisplatin (combined with other chemotherapy agents); beta-adrenergic blocking agents may also provoke paroxysmal vasospasm of small vessels [6].

Kim et al. reported a case of RP in a 70-year-old woman, with no history of connective tissue disease, secondary to pembrolizumab therapy for gallbladder cancer [30]. A further case report described the development of IL-17A antagonist (secukinumab)-related RP in a 35-year-old female patient with ankylosing spondylitis [31]. Additionally, Bouaziz et al. described a patient with proven COVID-19 infection presenting with RP and a chilblain appearance of the hands [32].
Pathophysiology
The pathophysiological mechanisms behind RP are not entirely understood; generally, it is characterized by excessive vasoconstriction of the digital arteries, precapillary arterioles, and cutaneous arteriovenous anastomoses [33].
In PRP, vasospasm of the digital and cutaneous vessels is believed to occur as a consequence of an increased alpha-2c-adrenergic response, and does not result in vascular pathology [34]. Ascherman et al. put forward an autoimmune etiology, proposing cytokeratin 10 (K10) as a potential autoantigen. Their study in mice showed that anti-K10 antibodies can mediate ischemia, similar to that seen in primary Raynaud's phenomenon [35].
In SRP, affected endothelial cells exhibit amplified exocytosis of endothelin-1 and ultra-large von Willebrand factor (ULVWF), which contribute, respectively, to increased vasospasm and capillary thrombosis. In addition, it is believed that increased transforming growth factor β (TGFβ), endothelin-1, cytokines, and angiotensin II drive the process of myofibroblast proliferation, vascular fibrosis and dropout in SSc patients [34]. Nitric oxide (NO) has a complex role in the disease process [36]. A decrease in endothelial formation of NO results in diminished vascular relaxation and extended vasoconstriction. Conversely, overproduction of NO leads to increased generation of reactive oxygen species and plays a pathogenic role in fibrosis.
Gualtierotti et al. found that markers of endothelial damage are regularly elevated in patients with PRP at their first assessment, even when there are no capillaroscopic abnormalities or autoantibodies detectable. They are particularly increased in patients with very early SSc. The plasma concentrations of tissue-type plasminogen activator (t-PA) and von Willebrand factor (vWF), two markers of endothelial damage, as well as interleukin-6 (IL-6), a pro-inflammatory cytokine, were evaluated. After a 36-month follow-up, those with higher basal concentrations of markers of endothelial damage had developed connective tissue disease. Von Willebrand factor analysis showed clear differences between primary and secondary RP patients. These findings suggest that markers of endothelial damage are elevated in RP patients who go on to develop SSc or other connective tissue diseases, even in the absence of capillaroscopic abnormalities [37].
A recent study by Taher et al. demonstrated use of a non-invasive NO-dependent method to identify peripheral microvascular endothelial dysfunction in patients with SRP. The association between SRP and microvascular peripheral endothelial dysfunction was also significant after adjusting for confounding variables, including conventional risk factors for cardiovascular disease, and vasoactive medications. This also remained significant in women after stratifying only by sex. It was emphasized that detection of microvascular peripheral endothelial dysfunction at an early stage could help to identify individuals with SRP who are at risk of developing connective tissue disease, as well as cardiovascular disease. Early detection could additionally indicate who may benefit from frequent screening, prompt initiation of preventative treatments, and modification of risk factors [38].
Genetics
A genetic predisposition for RP has been demonstrated in two twin studies showing greater concordance amongst monozygotic than dizygotic twins. Heritability for RP is reported to be 55-64% [39,40].
Polymorphisms in various genes encoding ion channels or vasoactive agents have been hypothesized to result in the RP phenotype. It is suggested that genetic variation in temperature-responsive or vasospastic genes may underlie RP manifestation. Several studies have investigated candidate genes that could potentially regulate vascular reactivity [41,42]. Munir et al. aimed to evaluate the association between RP and single nucleotide polymorphisms (SNPs). Temperature-sensing receptor channels called thermo-sensitive transient receptor potential (TRP) ion channels include TRPA1 and TRPM8. These are cold-sensing and have been proposed to mediate cold-induced vascular responses in skin in vivo. This is linked, at least in part, to the expression of these channels on perivascular sensory nerves [41]. Calcitonin-related polypeptides, alpha and beta (CALCA, CALCB), encode the peptide hormones calcitonin, calcitonin gene-related peptide, and katacalcin by tissue-specific alternative RNA splicing of gene transcripts and cleavage of inactive precursor proteins. Calcitonin is involved in the regulation of calcium levels and phosphorus metabolism. Calcitonin gene-related peptide functions as a vasodilator [43]. NO derived from neuronal nitric oxide synthase (nNOS) facilitates the restorative vasodilator response after cold exposure; thus, the gene encoding nNOS (NOS1) has also been investigated [41]. Munir et al. found that one polymorphic variant within the NOS1 gene was significantly associated with RP in the general population [42].
Diagnosis
A detailed medical history, laboratory tests, and nailfold capillaroscopy form the basis of RP diagnosis [17]. Follow-up nailfold capillaroscopy should be performed every 12 months in patients with significant nailfold videocapillaroscopy disturbances present at baseline [44]. Laboratory investigations should comprise a full blood count, inflammatory markers, thyroid function, and ANA testing by indirect immunofluorescence (accompanied by ELISA or solid-phase immunoassays to determine antigen specificities where possible). A negative ANA with cytoplasmic stain could indicate anti-synthetase antibodies, such as anti-Jo-1, or rarer SSc-specific autoantibodies such as anti-eukaryotic initiation factor 2B autoantibodies (anti-EIF2B) [45].
Capillaroscopy
Nailfold capillaroscopy is a simple, non-invasive technique that allows both qualitative and quantitative evaluation of the microcirculation, thus enabling early detection of abnormalities. At the nailfold, capillaries are positioned parallel to the surface of the skin, allowing full morphological assessment [46,47]. Among the most important indications for capillaroscopy is the differential diagnosis of primary and secondary RP.
Capillaroscopy is included in the 2013 American College of Rheumatology (ACR)/European League Against Rheumatism (EULAR) recommendations [20]. It is considered a key investigation both in the very early phases of the disease and in monitoring disease progression (Figure 2). Cutolo et al. proposed three progressive capillaroscopic patterns in SSc: 'early', 'active', and 'late'. The 'early' pattern is defined as the presence of a few giant capillaries, single microhemorrhages, and preservation of the capillary architecture without capillary loss. The 'active' pattern presents as numerous giant capillaries and microhemorrhages, mild disturbance of the capillary architecture and moderate capillary loss. The 'late' pattern is characterized by severe capillary loss with extensive avascular areas, disorganization of the capillary architecture and ramified/bushy capillaries [48].
A clinical expert-based, fast track decision algorithm was developed to facilitate differentiation of a "non-scleroderma pattern" from a "scleroderma pattern" on capillaroscopic images. The algorithm demonstrated excellent reliability when used by capillaroscopists with varied expertise levels compared to principal experts, and corroborated with external validation [49,50].
Capillaroscopic changes have also been observed in dermatomyositis, polymyositis, antiphospholipid syndrome, Sjögren's syndrome, and systemic lupus erythematosus [51]. The dermatomyositis pattern, often associated with aspects of the SSc pattern, includes the presence of two or more of the following findings in at least two nailfolds: enlargement of capillary loops, loss of capillaries, disorganization of the normal distribution of capillaries, 'budding' ('bushy') capillaries, twisted enlarged capillaries, and capillary haemorrhages (extravasates) [51,52]. The characteristic systemic lupus erythematosus pattern includes morphological alterations of capillary loops, venular visibility and sludging of blood with variability in capillary loop length [53]. Capillaroscopic abnormalities in Sjögren's syndrome range from non-specific findings (crossed capillaries) to more specific findings (confluent haemorrhages and pericapillary haemorrhages) or SSc-type findings [54,55]. Multiple hemorrhages from normal-shaped capillaries, which appear parallel/linear and arranged perpendicularly to the nailfold bed, are called 'comb-like' hemorrhages and are suggestive of antiphospholipid syndrome [56].
It has been shown that the ability to detect capillary abnormalities increases as the number of fingers examined increases. Sensitivities ranged from 31.7% to 46.6% for only one finger (right middle and left ring finger, respectively), 59.8% for both ring fingers, 66.7% for a four-finger combination (both ring and middle fingers) and 74.6% for the eight-finger standard. In order to achieve the most accurate assessment during routine capillaroscopic examination, all eight nailbeds should be examined omitting the thumbs, where it is more difficult to visualize and classify capillaries [57,58]. It should be noted that in a time pressured scenario, the best two-finger combination to detect capillary abnormalities is both ring fingers [58].
Nailfold videocapillaroscopy is the standard, although a handheld dermatoscope or an ophthalmoscope may also be used as screening tools [50]. The nailfold videocapillaroscopy technique with 200× magnification, capturing at least two adjacent fields of 1 mm in the middle of the nailfold finger, is the standard capillaroscopic technique to perform nailfold capillaroscopy [50].
Ideally all dermatology specialists should have access to videocapillaroscopy; a pragmatic solution for practitioners may be to have a low-cost capillaroscopy system. Technologies using a smartphone camera could help to improve availability to nailfold capillaroscopy whilst still providing accurate results [59]. Research regarding automated measurement of capillaroscopic characteristics is currently under way and holds promise as an objective clinical outcome measure [50]. Interestingly, a consensus-based assessment of dermatoscopy versus nailfold videocapillaroscopy by a European League against Rheumatism study group revealed tenuous promise for dermatoscopy as a tool for the initial screening of nailfold capillaries in RP. However, as perhaps expected, dermatoscopy is less sensitive, but more specific, in regard to detecting abnormalities, compared with videocapillaroscopy [60].
Qualitative analysis is subjective, and quantitative analysis is time-consuming when done manually. A study performed by Cutolo et al. accomplished validation of fully automated AUTOCAPI software for measuring the absolute capillary number over 1 linear/mm in NVC images. The software was subsequently optimized to assess capillary number in the shortest possible time and with the lowest possible error, in both healthy subjects, and those with SSc [61].
Laser Doppler Flowmetry
Laser Doppler flowmetry (LDF) is a semi-quantitative imaging technique useful for studying the nitric oxide endothelial-dependent vascular response and axon reflexmediated vasodilation. Impaired regulation of NO vascular tone has been described in patients with SSc-associated RP when compared to those with PRP and healthy controls [62,63]. Laser Doppler flowmetry has been proposed as a method for evaluating blood perfusion of the skin. This is a functional assessment of the vessels of the skin, involving the deeper dermal vessels in addition to the capillaries [64].
Melsen et al. completed a systematic review evaluating the use of LDF, describing the results of quality reports on assessment of the skin's microcirculatory flow at the level of the fingertip in SSc patients, and investigating the validation status of LDF as an outcome measure. The systematic review highlights the very preliminary validation status of LDF in the assessment of the microcirculatory flow in SSc [65].
In a study performed by Gregorczyk-Maga et al., LDF was used to investigate oral capillary flow in PRP patients who habitually have dysfunction in the microcirculation of the oral mucosa and who often have lesions in the oral cavity [66].
Time to postocclusive peak blood flow measured by LDF is an extremely accurate test for distinguishing patients with PRP from healthy controls [67].
An additional study performed by Waszczykowska et al. demonstrated the suitability of LDF for assessing the degree of microangiopathy present in SSc patients. Assessment of the skin perfusion value in SSc patients should be performed on the basis of parameters obtained during microcirculation challenge tests [68].
Thermography
Thermal imaging is an indirect method that makes use of a thermal camera to image skin temperature and demonstrate underlying blood flow [69]. Thermal imaging has been used to evaluate RP in several studies; the response to lower temperatures was able to differentiate between PRP and RP secondary to SSc [70].
Patients with SSc-related RP have been found to have structural changes in the digital arteries and microcirculation with a decrease in baseline blood flow. This typically does not return to normal after a cold challenge with rewarming, in direct contrast to primary RP, in which the fingers classically rewarm [71].
Measurements made by mobile phone thermography compared favorably with those made by standard thermography, paving the way for ambulatory monitoring in non-controlled environments; this will enable further assessments to increase the understanding of RP episodes [69]. Infrared thermography may additionally serve as a method of verifying the diagnosis of Raynaud's phenomenon [72].
Laser Speckle Contrast Analysis (LASCA)
Laser speckle contrast analysis (LASCA) is a tool used to investigate variations in peripheral blood perfusion during long-term follow-up and can safely monitor the evolution of digital ulcers in SSc patients [73].
LASCA can quantify blood flow over a defined area and is based on the concept that when laser light illuminates a tissue it forms a speckle pattern. Variations in this pattern are analyzed by dedicated software-static areas demonstrate a stationary speckle pattern, in contrast with mobile objects-such as red blood cells-that cause the speckle pattern to fluctuate and appear blurred. The amount of blurring (contrast) is analyzed and thus interpreted as blood perfusion [74].
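A minimal sketch of this contrast computation is given below; the 7x7 window and the synthetic exponential speckle image are illustrative assumptions, not the settings of the instruments cited above.

```python
# Sketch of the LASCA principle: local speckle contrast K = std/mean over a
# small sliding window; motion (e.g., red blood cells) blurs the speckle and
# lowers K, and one common convention maps perfusion to 1/K^2.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(image, win=7):
    """Local contrast K = std/mean over win x win neighbourhoods."""
    mean = uniform_filter(image, win)
    mean_sq = uniform_filter(image**2, win)
    var = np.clip(mean_sq - mean**2, 0.0, None)
    return np.sqrt(var) / np.maximum(mean, 1e-12)

rng = np.random.default_rng(0)
raw = rng.exponential(1.0, size=(128, 128))   # fully developed static speckle, K close to 1
k_map = speckle_contrast(raw)
perfusion = 1.0 / np.maximum(k_map, 1e-3)**2  # higher where the speckle is blurred
print(round(float(k_map.mean()), 2))
```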
The pilot study completed by Ruaro et al. determined that the hand blood perfusion, as evaluated by LASCA, was lower in PRP than in SSc patients with the "early" nailfold videocapillaroscopy microangiopathy pattern [75].
Treatment
Lifestyle modifications are essential in all patients with RP [8]. Patient education is an important aspect of disease management, and patient support organizations provide valuable education on the topic [76]. The first line of treatment is based on avoiding triggering factors such as exposure to cold, sudden changes of temperature, stress, cigarette smoke, and infections [34]. Patients should dress warmly (including warm gloves and socks). A number of different types of gloves have been proposed for patients with RP to reduce the risk of attacks, including battery-heated and specifically ceramic-impregnated gloves [77]. During a vasospasm, it is advised to place the hands under warm running water or to rub one hand against the other to intensify blood flow [78]. Because stress may trigger an attack, learning to recognize and avoid stressful situations may help control the number of attacks. Exercise can improve circulation, among other health benefits. Avoidance of repeated trauma to the fingertips in all patients with RP, and avoidance of vibrating tools in patients with vibration-induced RP, must be underlined [79]. Patients should be counselled regarding the critical importance of smoking cessation, as nicotine enhances vasoconstriction [80]. Certain medications, such as beta-blockers, ergotamine, or sumatriptan, and some types of chemotherapy, specifically cisplatin and bleomycin, are the most likely to induce Raynaud's phenomenon. If possible, alternative therapies that do not alter peripheral blood flow should be considered [12]. In most cases of primary Raynaud's phenomenon, lifestyle modifications may be sufficient to control the symptoms [17,81].
Pharmacological treatment is required when adaptive measures to avoid cold exposure are ineffective. RP reflects excessive vasoconstriction; thus, vasodilator therapy, particularly therapy targeted to the cutaneous circulation, is a major focus. Patients with connective tissue disease-associated RP, SSc in particular, may progress to tissue injury; hence, drug treatment often needs to be more 'aggressive' to prevent/minimize tissue loss (Table 3).

Table 3. Pharmacotherapy options in the management of Raynaud's phenomenon. A-level recommendation is based on consistent and good-quality patient-oriented evidence; B-level recommendation is based on inconsistent or limited-quality patient-oriented evidence; C-level recommendation is based on consensus, usual practice, opinion, disease-oriented evidence, or case series for studies of diagnosis, treatment, prevention, or screening.
Calcium Channel Blockers
Calcium channel blockers (CCBs) are generally considered to be the first-line pharmacotherapeutic treatment of PRP, and are the group of drugs which have been most extensively researched. According to the 2017 update of the European League against Rheumatism (EULAR) recommendations for the treatment of SSc, oral therapies with CCBs are strongly recommended (strength of recommendation A) [16,82]. CCBs are currently the most frequently prescribed drug for PRP. Nifedipine and amlodipine are considered to be the most effective agents, blocking calcium channels located in the cell membranes of vascular smooth muscle and cardiac muscle. Consequently, calcium ion entry into cells is inhibited, resulting in blood vessel relaxation and improved blood supply to tissues [83].
Doses should be adjusted depending on individual tolerance, with particular caution advised in patients with low arterial blood pressure [84].
A meta-analysis of randomized clinical trials concluded that CCBs are only somewhat effective at reducing the frequency of Raynaud's attacks in PRP [42,85], whereas other studies suggest that CCBs may be effective at decreasing the severity of attacks, pain and disability associated with RP [83].
Phosphodiesterase-5 (PDE-5) Inhibitors
Phosphodiesterase-5 (PDE-5) inhibitors are commonly used as a second-line systemic agent to manage RP resistant to CCBs. Inhibition of PDE-5 activity allows accumulation of cGMP within endothelial cells, which alters the cellular response to prostacyclin or nitric oxide, and in turn dilates blood vessels [86].
A 2013 meta-analysis of six randomized controlled trials including 296 SRP patients revealed a significant, moderate effect on the clinical severity, duration, and frequency of attacks [87]. Additionally, a significant decrease in the number of digital ulcers in SSc patients with RP was found in a randomized, placebo-controlled study in patients receiving sildenafil compared to a placebo [88].
Adverse effects of these PDE-5 inhibitors include flushing, headaches, and dizziness. Less common side effects include hypotension, arrhythmias, cerebral vascular accidents, and vision changes [89].
Prostaglandin Analogs
While oral prostaglandins have not shown any benefit in RP, prostacyclin analogs administered intravenously exhibit a strong vasodilative effect which considerably improves the clinical condition, particularly among patients with ulcers and erosions [90].
Iloprost is a synthetic analogue of prostacyclin (PGI2), with vasodilatory and antiplatelet effects; however, it is more stable than PGI2, has a longer half-life (20 to 30 min) and better solubility [91]. Iloprost activates PGI2 receptors, thus stimulating adenylate cyclase to generate cyclic adenosine monophosphate (cAMP). PGI2 receptors inhibit vascular smooth muscle constriction and platelet aggregation. They are also expressed on endothelial cells, where they initiate multiple protective effects, including amplification of endothelial adherens junctions and decreased monolayer permeability [92]. Iloprost infusions are frequently recommended as second-line treatment after CCBs and are the first-line therapeutic choice for digital ulcerations and critical ischemia [21,93]. A meta-analysis determined that the use of iloprost in critical limb ischemia was effective in improving ulcer healing, relieving pain, and reducing the need for amputations [94]. In cases of pre-existing digital ulcerations, iloprost promotes healing and reduces the incidence of new ulcerations [16]. Three further studies described an improvement in nailfold microvascularization following iloprost treatment [95].
In 2017, the EUSTAR recommendations allocated intravenous iloprost a Grade A recommendation for the management of severe SSc-related RP attacks and for digital ulcer treatment [16]. However, the recommendations did not specify the dosing and therapeutic regimen. The absence of an accepted regimen is a major impediment to the administration of iloprost in SSc. According to the Delphi consensus, intravenous iloprost can be useful in RP that is severe or refractory to CCBs and PDE-5 inhibitors. To control symptoms, it is recommended that iloprost be administered for 1-3 days every month. Dosing should be determined according to tolerance, starting from 0.5 up to 3.0 ng/kg/min. To achieve a lasting effect, infusions must be repeated regularly [16,96,97]. Interestingly, the pharmacological actions may persist longer than suggested by the pharmacokinetic profile (i.e., weeks to months) [98].
Currently, intravenous iloprost is available in several countries only for RP secondary to SSc for a duration of 3-5 days. For RP and digital ulcer healing, expert consensus proposes a regimen of 1-3 days per month, with 1 day per month for DU prevention. These recommendations allow clinicians some scope on how to personalize intravenous iloprost therapy according to patients' needs [93]. However, although these suggestions are supported by an expert group for use in a clinical setting, it would be necessary to formally validate the recommendations in future clinical trials.
As iloprost is not available in some countries, alprostadil (a combination of prostaglandin E1 with α-cyclodextrin in a 1:1 ratio) has been found to be an effective alternative for SRP [99]. Alprostadil is primarily used to maintain patency of the ductus arteriosus, and also has mild pulmonary vasodilatory effects. It reportedly inhibits macrophage activation, neutrophil chemotaxis, and the release of oxygen radicals and lysosomal enzymes. It influences coagulation by inhibiting platelet aggregation and potentially by inhibiting factor X activation. Alprostadil may promote fibrinolysis by stimulating production of tissue plasminogen activator. The overall benefits of iloprost and alprostadil are comparable, without significant differences in clinical efficacy or circulating markers of endothelial damage [100].
Epoprostenol, the first prostacyclin agent approved by the US Food and Drug Administration (FDA), in 1995, requires continuous intravenous infusion via a dedicated central venous catheter with infusion pump. Epoprostenol stimulates vasodilation of pulmonary and systemic arterial vascular beds and impedes platelet aggregation [11,101]. Based on published evidence, the initial dose of intravenous epoprostenol for treatment of refractory RP, with or without ischemic ulcers, should not exceed 2 ng/kg/min. A conservative titration schedule, based on those used in previous studies, should allow for rate increases of 1 ng/kg/min every 15 min as tolerated, adjusted as per the onset of treatment-emergent adverse effects. It should be noted that more aggressive uptitrations of 2 to 2.5 ng/kg/min were used in some studies. However, as a consequence of the lack of standardized efficacy outcomes in the available literature, it is not possible to assess if such regimens hold any advantages other than reaching the maximum dose more quickly. Intermittent infusions of 5 to 6 hours' duration should be initially considered to limit drug exposure and potential toxicities. However, it is reportedly reasonable to use continuous infusions for up to 72 h in patients unresponsive to intermittent therapy [101].
Epoprostenol is contraindicated in patients with congestive heart failure due to left ventricular dysfunction, and in those with known history of hypersensitivity reactions to the drug. Other adverse effects that should be monitored for include pulmonary edema, hemodynamic instability, line infections, and bleeding [101].
Endothelin Receptor Antagonists
Bosentan is an endothelin receptor antagonist (ERA) primarily used to manage severe pulmonary hypertension. A starting dose of 62.5 mg twice a day for four weeks, followed by 125 mg twice a day for 12 or 20 weeks, has shown some effectiveness in preventing the formation of new digital ulcers, but did not influence healing of pre-existing ulcers [102,103]. Potential adverse effects include headaches, dizziness, and hypotension [16]. Adverse drug reactions are relatively mild, but during treatment monthly liver function tests and 3-monthly full blood counts are required [104,105].
Angiotensin II Receptor Blockers
Angiotensin II receptor blockers (ARBs) are reserved as a third-line treatment for mild Raynaud's phenomenon. There has been only one trial including 52 patients (25 patients with primary RP and 27 with SSc-related RP); this was an open-label, unblinded, controlled trial, during which a 12-week treatment with losartan 50 mg/day resulted in reduced frequency and severity of RP attacks in comparison to nifedipine 40 mg/day. The benefit was more noticeable in the subgroup of patients with primary RP. Losartan is widely available, accessible, and has an acceptable side effect profile [82,106,107].
Sulodexide
Sulodexide is a safe rheological drug used successfully as a supportive treatment for RP [84]. It consists of a purified mixture of glycosaminoglycans acquired from bovine intestinal mucosa, comprising a heparin fraction that moves rapidly on electrophoresis (80%) and dermatan sulfate (20%). It functions as an anticoagulant, is pro-fibrinolytic and anti-inflammatory, disrupts the process of fibrosis, and has a protective influence on vascular endothelial cells. Due to its pleiotropic activity and high safety profile, the benefits of sulodexide may be applied to many dermatological diseases [108].

SSc patients with secondary microcirculatory disorders who are intolerant to prostanoids, or in whom prostanoids are contraindicated, may be treated with sulodexide, 600 lipasemic units (LSU) intravenously twice a day. This dosing regimen has previously produced good therapeutic results. Other than sporadically observed dizziness and hypotension, no significant side effects were noted. In patients with pre-existing digital erosions and ulcers, a 3-4 day cycle of intravenous sulodexide, at 600 LSU twice a day every 4-6 weeks, has been shown to improve lesion healing [109].
Results of a recent pilot study suggest that the use of sulodexide treatment in RP results in a long-term improvement of capillary flow, a decrease in episode recurrence, and a reduction in pain intensity [110].
During parenteral treatment with sulodexide, it is imperative to discontinue any use of heparin or oral anticoagulants to reduce the bleeding risk [109].
Statins
Statins, 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors, are extensively used to reduce serum cholesterol levels in the primary and secondary prevention of cardiovascular disease. Statins also have direct, vasculoprotective effects, which are independent of their ability to lower circulating LDL levels. Statins may be beneficial in RP patients, including those with SSc. Statins increase endothelial nitric oxide synthase expression and thus nitric oxide production, decrease oxidative stress, reduce endothelin-1 expression, impede endothelial apoptosis, increase endothelial progenitor cell mobilization, and promote microvascular growth [111,112]. Statins additionally inhibit endothelial-mesenchymal transition, which may contribute to vasculopathy and tissue fibrosis in SSc [113]. In a study of SSc patients, treatment with a statin was found to reduce the severity of RP vasospastic episodes and improved endothelial function. This was associated with increased levels of nitric oxide, reduced oxidative and inflammatory stress, increased quantity of endothelial progenitor cells, and amelioration of circulating concentrations of von Willebrand factor [114]. Data from a study performed by Abou-Raya et al. suggest that atorvastatin may exert beneficial effects in SSc by protecting the endothelium and improving its functional activity [115].
Topical Vasodilators
Topical vasodilators may be used as adjuvant therapy for RP patients. 10% nifedipine cream and 10% nitroglycerin gel have both been accepted as effective therapeutic options with side effects comparable to placebo. Topical nitrates display significant efficacy in the treatment of both primary and secondary RP [116]. Topical nitrates have been reported to increase perfusion at both distal and extensor digital ulcer cores in SSc patients, when compared to a placebo and evaluated with laser Doppler imaging [117]. Wortsman et al. found that 5% sildenafil cream significantly improved blood flow in digital arteries (an increase of 9.2 mm/s, p < 0.0083). A trend toward improvement was also observed for vessel diameter in patients with SRP (p = 0.0695), suggesting local vasodilatation. Adverse effects of topical vasodilators include headaches and dizziness; no serious adverse effects were detected [118]. A study by Bentea et al. assessed the effects of nitroglycerin patch application to the dorsum of the hand. Results showed an increase in blood flow and hand temperature in patients with SSc after a cold challenge, using laser Doppler imaging [119].
Sympathectomy
Surgical treatment options involving sympathectomy or arterial reconstruction may be required in patients who suffer from incapacitating pain and ulcers with torpid evolution. However, these techniques carry the risk of comorbidities and may not always provide satisfactory results.

Cautious selection of RP patients is necessary, and endoscopic thoracic sympathectomy should be reserved as a last resort, only for patients who have severe, treatment-resistant symptoms with serious complications and impaired quality of life. The limiting factor with sympathectomy is a high recurrence rate: symptoms, examination findings, and the quantity and dosage of medications used returned to preoperative levels in 66.6% of patients at month 6, and in all patients except one by the end of the first year [120].
Digital periarterial sympathectomy may be considered in patients suffering from critical digital ischemia or persistent ulceration despite aggressive vasodilatory therapy. A long-term retrospective study assessed 35 patients with primary or secondary RP who underwent thoracoscopic sympathectomy: 77% of participants had a positive response. However, symptoms recurred in 60% at a median follow-up of 5 months [121].
Single-port thoracoscopic sympathicotomy (SPTS) is a novel minimally invasive technique compared to conventional sympathectomy [122]. A recent study showed that the single-port procedure is effective in improving hand perfusion in patients with treatment-resistant RP. One month after unilateral single-port thoracoscopic sympathicotomy, the number of RP attacks was reduced and perfusion of the treated hand increased. However, the long-term efficacy and safety profile of this treatment need to be established [123].
Botulinum Toxin Type A
Botulinum toxin type A (BTX-A) is a polypeptide produced by the bacteria Clostridium botulinum. It is an acetylcholine release inhibitor in the peripheral nerve endings of the motor plate and sweat glands. It is well established that BTX-A inhibits acetylcholine release, leading to inhibition of neurotransmitter-induced vasoconstriction and relief of other symptoms, such as pain [124,125].
Botulinum toxins inhibit macromolecular SNARE complexes, which are involved in vesicle fusion with the plasma membrane, thus preventing neuronal exocytosis [126].
A study performed by Medina et al. established botulinum toxin as a safe, accessible, and effective therapeutic alternative for patients with severe RP, allowing those who do not respond to conventional treatments to sustain a good quality of life via annual infiltrations [127].
BTX-A is a promising non-surgical treatment modality and/or adjunct for patients who have contraindications to CCBs, PDE-5 inhibitors, and nitrates. It also provides a non-operative therapeutic alternative for patients experiencing chemotherapy-induced RP where mainstay therapies may be contraindicated, thus reducing pain, improving patient quality of life, and slowing disease progression [128].
In a study by Nagarajan et al., several patients derived long-term benefits from a single treatment; however, in patients with SSc, repeat treatments were required and were administered after an average of 6 months [129].
Riociguat
Riociguat is a first-in-class guanylate cyclase stimulator and may be a promising new treatment for RP. It works by direct stimulation of guanylate cyclase, independent from NO, and additionally via sensitization of guanylate cyclase to endogenous nitric oxide by stabilizing NO-guanylate cyclase binding. As a result, riociguat efficiently stimulates the nitric oxide-soluble guanylate cyclase-cyclic guanosine monophosphate pathway and leads to increased intracellular levels of cyclic guanosine monophosphate. In contrast to PDE-5 inhibitors, the action of riociguat does not depend on endogenous nitric oxide levels [130]. In the pilot study performed by Huntgeburth et al., a single oral dose of riociguat 2 mg was well tolerated in patients with RP and resulted in improved digital blood flow in some patient subsets, with high inter-individual variability [131].
SSRIs
An improvement in RP symptoms has been reported in patients treated with selective serotonin reuptake inhibitors (SSRIs). A study of 27 patients with SSc revealed that fluoxetine at a dose of 20 mg/day was significantly superior to nifedipine in reducing the frequency and severity of RP attacks in patients with SSc [132]. Of note, however, exacerbations of RP have also been reported with the use of serotonin reuptake inhibitor treatment [133].
Treat to Target (T2T) Strategy
At present, the decision to commence treatment and to evaluate response in RP, including the need for dose escalation, is principally based upon clinician-patient discussions regarding symptom severity, perceived effectiveness of the existing/planned interventions, and drug tolerability.
Hughes et al. proposed a five-stage roadmap that may support the development of a treat to target (T2T) strategy for SSc-RP. Significant initial steps are to define the study population and the goals of developing a T2T strategy (stage 1) and to review and shortlist candidate target items (stages 2 and 3, respectively). If agreement regarding feasible targets is not reached at this point, then the goals and purpose need to be refined. Subsequently, a consensus-building exercise among relevant stakeholders would allow the 'target' to be defined (stage 4). Ultimately, well-designed studies (stage 5) will be required to investigate the feasibility and treatment benefit of a T2T strategy in patients with SSc-RP. Much can be learned from primary studies of T2T for rheumatoid arthritis, including randomized trials comparing T2T with routine care, and those comparing different treatment approaches (e.g., monotherapy vs. combination therapy) to reach a defined target. Crucial features of these studies were the frequent review of patients, and the clear guidance that existed on how to intensify treatment of patients who had not reached the target [134,135].
Conclusions
An increased understanding of the pathogenesis of Raynaud's phenomenon is guiding new approaches to treatment. Assessment of peripheral endothelial dysfunction may aid identification of individuals with secondary Raynaud's phenomenon who are at risk of developing connective tissue diseases, and who may therefore benefit from repeated screening, early initiation of preventative treatments, and risk factor modification.
Capillaroscopy is of crucial value for the diagnosis and differentiation of primary and secondary Raynaud's phenomenon.
The presence of digital ulcers implies that intervention with vasodilator therapy is necessary. Early intervention is vital to the treatment of critical ischemia, and calcium channel blockers remain the first line of therapy. Alternatives for severe disease include phosphodiesterase-5 inhibitors and intravenous prostaglandin analogues. The overall benefits of iloprost and alprostadil are comparable without significant differences; however, ease of handling and the lower cost profile favours alprostadil. Sulodexide is a safe rheological drug successfully used as a supportive treatment in RP, resulting in a long-term improvement of capillary flow and a reduction in the frequency of Raynaud's syndrome relapse. Topical vasodilators, for example 10% nifedipine cream, 10% nitroglycerine gel, and 5% sildenafil cream, may act as adjuvant therapy. Riociguat may be a promising new treatment for Raynaud's phenomenon; however, this warrants further evaluation. A variety of alternative modalities have also been reported to be effective in the management of RP including botulinum toxin A, and sympathectomy, or single-port thoracoscopic sympathectomy.
Treat to target strategies may optimize treatment approaches for Raynaud's phenomenon and herald the emergence of disease-modifying vasodilator therapies for systemic sclerosis-related digital vasculopathy.
Data Availability Statement: The study did not report any data.
Conflicts of Interest:
The authors declare no conflict of interest. | 2022-05-01T15:11:01.264Z | 2022-04-28T00:00:00.000 | {
"year": 2022,
"sha1": "bc2978f9eac8b2f76d046e4ea01e1c0a14947e08",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/11/9/2490/pdf?version=1651217695",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b803c9f8e017d16373c87eb6b0f5ba645d7b1630",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
29865635 | pes2o/s2orc | v3-fos-license | Least square fitting with one parameter less
It is shown that whenever the multiplicative normalization of a fitting function is not known, least square fitting by $\chi^2$ minimization can be performed with one parameter less than usual by converting the normalization parameter into a function of the remaining parameters and the data.
I. INTRODUCTION
The general situation of fitting by $\chi^2$ minimization is that $m$ data points $y_i = y(x_i)$ with error bars $\triangle y_i$ and a function $y(x; a_j)$ with $j = 1, \dots, n$ parameters are given, and we want to minimize
$$\chi^2 = \sum_{i=1}^{m} \frac{\left[y_i - y(x_i; a_j)\right]^2}{(\triangle y_i)^2}$$
with respect to the $n$ parameters, where we neglect (as usual) fluctuations of the $\triangle y_i$ error bars. In many practical applications one of the $n$ parameters, say $a_n$, is the multiplicative normalization of the function $y(x; a_j)$, so that it can be written as
$$y(x; a_j) = a_n\, f(x; a_1, \dots, a_{n-1}).$$
With the parameters $a_1, \dots, a_{n-1}$ fixed there is a unique analytical solution $c_0$ for $c = a_n$, which minimizes $\chi^2(c)$, so that the function $y(x; a_j)$ depends effectively only on $n-1$ parameters:
$$y(x; a_1, \dots, a_{n-1}) = c_0(a_1, \dots, a_{n-1})\, f(x; a_1, \dots, a_{n-1}). \qquad (3)$$
This can be exploited to perform least square fitting with one parameter less. Remarkably, the number of steps required for one calculation of $\chi^2$ remains linear in the number of data points. Although the derivation of these results is rather straightforward, I have not encountered them in the literature, though I have frequently found myself in situations of performing least square fits of the type to which this method can be applied.
In section II explicit equations for $c_0(a_1, \dots, a_{n-1})$ and its derivatives with respect to the parameters are derived. Practical examples based on Levenberg-Marquardt fitting [1,2] are given in section III. The conclusion from section III is that we have an additional useful approach, which mostly converges faster than the corresponding fit with the full number of parameters. Conclusions and an outlook on an eventual application are given in section IV. All examples of this paper can be reproduced with Fortran code that is provided on the Web and documented in appendix A. In any case, the code provided there should be useful.
II. CALCULATION OF THE NORMALIZATION CONSTANT AND ITS DERIVATIVES
We find the normalization constant $c_0(a_1, \dots, a_{n-1})$ by minimizing $\chi^2(c)$, $c = a_n$, for fixed parameters $a_1, \dots, a_{n-1}$. Requiring that the derivative of $\chi^2(c)$ with respect to $c$ vanishes at the minimum $\chi^2(c_0)$ implies for $c_0$ the solution
$$c_0 = \frac{\sum_{i=1}^{m} y_i\, f(x_i)/(\triangle y_i)^2}{\sum_{i=1}^{m} f^2(x_i)/(\triangle y_i)^2}. \qquad (6)$$
As it should, this equation reduces for just one data point ($m = 1$) to $c_0 = y_1/f(x_1)$. For fixed parameters $a_1, \dots, a_{n-1}$ the error bar $\triangle c_0$ of $c_0$ follows from the variances of the data points:
$$(\triangle c_0)^2\big|_a = \left[\sum_{i=1}^{m} f^2(x_i)/(\triangle y_i)^2\right]^{-1}, \qquad (8)$$
where $|_a$ indicates that the parameters $a_1, \dots, a_{n-1}$ are kept fixed. When all error bars of the data agree, i.e., $\triangle y_i = \triangle y$ for $i = 1, \dots, m$ holds, equations (6) and (8) simplify accordingly. If, in addition, the function $f(x)$ is a constant, $f(x_i) = f_0$ for $i = 1, \dots, m$, we find the usual reduction of the variance through sampling: $(\triangle c_0)^2|_a = (\triangle y/f_0)^2/m$. Note that the error bar of Eq. (8) does not hold when the parameters $a_1, \dots, a_{n-1}$ are allowed to fluctuate, i.e., have themselves statistical errors $\triangle a_i$. Then the propagation of these errors into $c_0(a_1, \dots, a_{n-1})$ has to be taken into account, which is done below. Eq. (8) is mainly of relevance for the $n = 1$ case, when $c_0$ eliminates the sole parameter $a_1$. In our illustrations based on the Levenberg-Marquardt approach, as well as for many other fitting methods, one needs the derivatives of the fitting function with respect to the parameters; with Eq. (3) these become derivatives of the product $c_0(a_1, \dots, a_{n-1})\, f(x; a_1, \dots, a_{n-1})$. We find the derivatives of $c_0$ from Eq. (6):
$$\frac{\partial c_0}{\partial a_k} = \frac{\sum_{i=1}^{m} \left[y_i - 2\, c_0\, f(x_i)\right]\, \frac{\partial f(x_i)}{\partial a_k}\, /(\triangle y_i)^2}{\sum_{i=1}^{m} f^2(x_i)/(\triangle y_i)^2}. \qquad (12)$$
Using the derivatives (12), the full variance of $c_0$, with the associated error bar $\triangle c_0 = \sqrt{(\triangle c_0)^2}$, becomes
$$(\triangle c_0)^2 = \sum_{j,k=1}^{n-1} \frac{\partial c_0}{\partial a_j}\, C_{jk}\, \frac{\partial c_0}{\partial a_k}, \qquad (15)$$
where $C_{jk}$ is the covariance matrix of the parameters $a_1, \dots, a_{n-1}$, which in our examples is returned by the Levenberg-Marquardt fitting procedure.
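As a concrete illustration of Eqs. (6), (8) and (12), the following NumPy sketch computes $c_0$, its fixed-parameter error bar, and its derivatives with respect to the remaining parameters. The function names are illustrative and are not part of the Fortran package described in appendix A.

```python
import numpy as np

def c0_and_error(y, dy, f):
    """Eq. (6): c0 = sum(w*y*f) / sum(w*f^2) with weights w = 1/dy^2,
    and Eq. (8): the fixed-parameter error bar of c0."""
    w = 1.0 / dy**2
    denom = np.sum(w * f**2)
    c0 = np.sum(w * y * f) / denom
    return c0, np.sqrt(1.0 / denom)

def dc0_da(y, dy, f, df):
    """Eq. (12): derivatives of c0 w.r.t. the parameters a_k.
    df[k] holds the derivatives of f(x_i) w.r.t. a_k (shape (n-1, m))."""
    w = 1.0 / dy**2
    denom = np.sum(w * f**2)
    c0 = np.sum(w * y * f) / denom
    return np.array([np.sum(w * (y - 2.0 * c0 * f) * dfk) / denom
                     for dfk in df])
```

The full variance of Eq. (15) then follows by sandwiching the covariance matrix between these derivative vectors, i.e. dc @ C @ dc in NumPy notation.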
III. EXAMPLES
In this section we summarize results from least square fits with $n = 1$ to $n = 4$ parameters $a_i$, $i = 1, \dots, n$, where the last one, $a_n$, is always taken to be a multiplicative normalization. The corresponding Fortran code is explained in appendix A.
We apply the Levenberg-Marquardt method in each case to all $n$ parameters as well as to $n-1$ parameters by considering $c_0$, the least square minimum of $a_n$ given by Eq. (6), to be part of the function. The Levenberg-Marquardt method uses steepest descent far from the minimum and switches to the Hessian approximation when the minimum is approached. Our Fortran implementation is a variant of the one of Ref. [3]. Besides the fitting function $y(x; a_i)$, one has to provide the derivatives and start values for the $n$, respectively $n-1$, parameters $a_i$. Usually the method converges to the nearest local minimum of $\chi^2$, i.e., the minimum which has the initial parameters in its valley of attraction. Our choice of data for which we illustrate the method is rather arbitrary and emerged from considerations of convenience. For the $n = 1$ to $n = 3$ parameter fits we use deconfining temperature estimates from Markov Chain Monte Carlo (MCMC) simulations of 4D SU(2) gauge theory on $N_s^3 N_\tau$ lattices, as reported in Table 4 of a paper by Lucini et al. [4]. We aim to extract from them corrections to asymptotic scaling by fitting $(aT_c)^{-1} = N_\tau(\beta_c)$ to the form (18) of Ref. [5], where $N_\tau$ is the temporal extension of an $N_s^3 N_\tau$ lattice and $f^{as}_\lambda(\beta, N)$ is the universal two-loop asymptotic scaling function of SU($N$) gauge theory, with $b_0 = (N/16\pi^2)(11/3)$ the one-loop [6,7] and $b_1 = (N/16\pi^2)^2(34/3)$ the two-loop [8,9] coefficients.
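A minimal Python sketch of the reduced-parameter strategy follows: the normalization is computed analytically via Eq. (6) inside the residual function, so the optimizer only sees the $n-1$ shape parameters. SciPy's least_squares stands in here for the paper's Fortran Levenberg-Marquardt routine, and the power-law shape function is a placeholder, not the scaling form (17).

```python
import numpy as np
from scipy.optimize import least_squares

def fit_without_normalization(x, y, dy, f_shape, a0):
    """f_shape(x, a) is the fitting function WITHOUT its multiplicative
    normalization; a holds the n-1 shape parameters."""
    w = 1.0 / dy**2

    def residuals(a):
        f = f_shape(x, a)
        c0 = np.sum(w * y * f) / np.sum(w * f**2)  # Eq. (6)
        return (y - c0 * f) / dy                   # chi^2 = sum(res**2)

    return least_squares(residuals, a0)

# placeholder example with shape function f(x; a1) = x**a1
x = np.array([4.0, 5.0, 6.0, 8.0, 10.0])
y = np.array([2.1, 2.9, 3.8, 5.9, 8.2])
dy = 0.1 * np.ones_like(y)
res = fit_without_normalization(x, y, dy, lambda x, a: x ** a[0], [1.0])
print(res.x, 2.0 * res.cost)  # best-fit exponent and chi^2
```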
In Table 4 of [4] the error bars are given for the critical coupling constants $\beta_c$. For the purpose of the fit (18), the error bars are shuffled to $N_\tau(\beta_c)$ by means of a preliminary estimate of the scaling function $f_\lambda(\beta)$ of Eq. (17). The thus obtained data (omitting $N_\tau < 4$ lattices) are compiled in Table I. For the $n = 4$ parameter fits, results from Bhanot et al. [10] for the imaginary part Im(u) of the partition function zero closest to the real axis are used, which are obtained from MCMC simulations of the 3D Ising model on $N_s^3$ lattices. These data are also collected in our Table I. To leading order their finite size behavior is a power law whose exponent $a_1$ is related to the critical exponent $\nu$ of the correlation length by $a_1 = -1/\nu$. In the context of our method this 2-parameter fit is of no interest, because it can be mapped onto linear regression, for which the $\chi^2$ minimum leads to analytical solutions for both parameters, $a_1$ and $a_2$. For the critical exponent $\nu$ this yields $1/\nu = 1.6185(2)$, but with an unacceptably large $\chi^2$, which leads to $Q = 0$ for the goodness of fit (details are given in [3]). One is therefore led to include subleading corrections by moving to the 4-parameter fit (22), for which our method replaces $a_4$ by $c_0$ of Eq. (6) as a function of the other parameters. We are now ready to present the results for our fits.
A. 1-parameter fits
The function (17) reduces to the form (23), and there are no fitting parameters left when the analytical solution $c_0$ of Eq. (6), with the error bar $\triangle c_0$ of Eq. (8), is used for $a_1$. Our Levenberg-Marquardt procedure works down to a single fit parameter and uses, besides the fitting function (23), its only derivative. Using the start value $a_1 = 0.0628450$, one finds convergence after three iterations with the results $a_1 = 0.025336(26)$ and $\chi^2 = 4263$.
Without any iteration, identical values for $a_1$ and $\chi^2$ are obtained by using the analytical solution $c_0$ and its error bar (8). Note that $\chi^2$ has to be the same for identical parameters, so $c_0$ still counts when it comes to counting the degrees of freedom. Obviously the obtained $\chi^2$ is unacceptably large, and it is clearly visible from the 1-par curve in Fig. 1 that this fit is not good. Additional parameters are needed to account for corrections to asymptotic scaling.
B. 2-parameter fits
The function (17) is now reduced to a two-parameter form. For the fit with two parameters we use the start values $a_2 = 0.0628450$ (as for $a_1$ before) and $a_1 = -1.43424$. After six iterations we find convergence. Running the fitting routine just for the parameter $a_1$, eliminating $a_2$ in the described way, we find the results (27) after four iterations. In particular, the error bar of $c_0$, now calculated via Eq. (15), also agrees with the error bar of $a_2$. Clearly the $\chi^2$ is still too large to claim consistency between the fit and the data, though the visible improvement is considerable; see the 2-par curve in Fig. 1.
C. 3-parameter fits
We now fit to the full functional form (17). For the three-parameter fit the previous starting values are reshuffled, $a_2 \to a_3$, $a_1 \to a_2$, and the additional starting value is taken to be $a_1 = 1$. Our Levenberg-Marquardt procedure needs 245 iterations for convergence and yields values (28), (29) that are rather different from the corresponding results ($a_3 \to a_2$ and $a_2 \to a_1$) of (27) from the 2-parameter fit.
Identical values (28), (29) are obtained after eliminating the normalization, here $a_3$, from the fit parameters used in the Levenberg-Marquardt iteration, and we find a reduction from 245 to 12 iterations.
The $\chi^2$ is now small enough to signal consistency between the data and the fit, although the 2-par and 3-par curves in Fig. 1 are hard to distinguish visually. Converting the $\chi^2$ of (29) into a goodness of fit, one finds $Q = 0.69$.
D. 4-parameter fits
The aim is to perform the 4-parameter fit (22) for the data of Bhanot et al. (Table I); its $\chi^2$ can be converted into a goodness of fit of $Q = 0.74$.
Eliminating the normalization $a_4$ from the direct fitting parameters, convergence is reached after 58 iterations and we find identical estimates as before. Either set of parameters fits the data perfectly well in their range, while the different fitted functions diverge quickly outside this range, i.e., for larger lattices.
IV. CONCLUSION AND OUTLOOK
This paper shows that we can exclude the multiplicative normalization of a fitting function from the variable parameters of a $\chi^2$ minimization and include it in the fitting function (it still counts when it comes to determining the degrees of freedom). Our simple examples show that this works well, reducing the number of iterations.
The code discussed in appendix A shows that there is no extra work involved for the user. Once the general, application independent, code is set up, the subroutine which the user has to supply has one parameter less than the one needed for the usual Levenberg-Marquardt fitting procedure (compare the reduction from subg la3su2.f to suby la3su2.f). Besides, it is useful to have alternatives at hand when one is trying to find convenient initial values.
Finally, there may be interesting applications which cannot easily be incorporated into conventional fitting schemes. For instance, there are situations where distinct data sets are supposed to be described by the same function with different multiplicative normalizations. An example is scale setting in lattice gauge theories [11]. The method of this paper can then be used to eliminate all multiplicative constants from the independent parameters of the fit, so that one can consolidate all data sets into one fit. This application will be pursued elsewhere.
Appendix A
Each of these subfolders contains two Fortran programs, ngfit.f and ngfitm1.f, where n = 1, 2, 3 or 4 denotes the number of parameters. Both programs are ready to be compiled and run, say with ./a.out>a.txt. The thus produced results should agree with those found in the text files an.txt and anm1.txt. In addition, graphical output is created. Type gnuplot gfit.plt to see the plots (a gnuplot driver gfit.plt is located in each folder).
The programs read data and starting values from files named fort.10. For the n = 4 case there are two data sets, bhanot1.dat and bhanot2.dat, which differ by their initial values; the desired one has to be copied to fort.10 before the run.
Finally, the program 1gfitm1.f is a special case, because the fitsub routine does not work for nfit=0 parameters. Actually, it is not needed at all in this case and is replaced by a variant of the chi2dcda.f routine: chi2c0.f calculates $c_0$ of Eq. (6) and its error bar, now according to Eq. (8) instead of Eq. (15). | 2016-05-02T17:16:51.000Z | 2015-05-28T00:00:00.000 | {
"year": 2015,
"sha1": "6f099b2d460ec35bec83d0d99e04bc157509ff1e",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "http://manuscript.elsevier.com/S0010465515003987/pdf/S0010465515003987.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "ff4736acdfc82c1060c69d81cc05eeaf761e68ef",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
67877098 | pes2o/s2orc | v3-fos-license | BacSoft: A Tool to Archive Data on Bacteria
Recently, DNA data storage systems have attracted many researchers worldwide. Motivated by the success stories of such systems, in this work we propose a software tool called BacSoft that clones data into a bacterial plasmid using the concepts of genetic engineering. We design the encoding scheme so that it satisfies constraints significant for bacterial data storage.
I. INTRODUCTION
The amount of digital data generated is increasing at a very high speed and was estimated to reach around 35 zettabytes ($10^{21}$ bytes) by 2020. This digital universe of big data needs a reliable, dense and affordable storage medium. Recently, biocomputers have attracted researchers from computer science and engineering to apply their principles to living cells [8]. One of the applications is to use nature's own hard drive, DNA, as a storage medium. DNA consists of four bases, namely A (adenine), T (thymine), G (guanine) and C (cytosine), that encode the information of life. DNA is the most reliable source of data storage that has evolved through generations; it is therefore natural to use DNA for digital data storage.
In a DNA data storage system, data is encoded into strings of A, T, G and C using different encoding schemes. This data-encoded DNA is synthesized and stored under appropriate environmental conditions. Stored data can be decoded back to the original data by DNA sequencing. In the last few years, researchers have developed DNA-based data storage systems by introducing different encoding schemes, showcasing that such systems are robust [7], reliable [3] and dense [15]. Storing data on DNA was first showcased by G. Church et al. [4] and N. Goldman et al. [6] in 2012 and 2013, respectively. Motivated by this, the software DNACloud was developed by a team at Gupta Lab, demonstrating data encoding in DNA sequences using a modified Goldman scheme [11]. In subsequent years, various DNA-based data storage systems were proposed that revealed the density and durability of DNA for archival data storage [9]. A rewritable and random-access DNA-based data storage system was proposed by J. Bornholt et al. [3] and Yazdi et al. [16]. Very recently, Y. Erlich and D. Zielinski [5] proposed capacity-achieving codes by developing fountain codes to encode data in DNA. Researchers at Harvard Medical School used living bacterial cells for the first time to store a movie, using the powerful gene-editing tool CRISPR [12]. Very recently, a systematic framework to simulate data storage in a bacterial plasmid, treating bacterial plasmids as clusters, was given in [13]. To use bacteria for data storage, some of the DNA constraints important for DNA data storage [10] carry over to bacterial data storage, namely a fixed GC content (a DNA sequence with a fixed number of Gs and Cs) and no homopolymers (a DNA sequence without repeated DNA bases, e.g. ATGCTG). In this work, we propose a method that generates DNA sequences satisfying both constraints.
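As a small illustration of the two constraints just named, the helper below checks a candidate sequence for approximately 50% GC content and for homopolymer runs. It is generic validation code, not part of BacSoft itself, and the tolerance parameter is an assumption.

```python
def check_constraints(seq, gc_target=0.5, gc_tol=0.05, max_run=1):
    """True if the GC fraction is within gc_tol of gc_target and no base
    repeats more than max_run times in a row."""
    gc = (seq.count('G') + seq.count('C')) / len(seq)
    run, longest = 1, 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return abs(gc - gc_target) <= gc_tol and longest <= max_run

print(check_constraints("ATGCTG"))    # True: no repeated bases, GC = 0.5
print(check_constraints("AATTGGCC"))  # False: homopolymer runs of length 2
```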
All previous work on bacteria-based data storage systems has been performed as experimental trials without any automated software. To ease the process of encoding data into bacterial DNA, software is required that can verify the experimental protocols before hands-on experiments. To achieve this, motivated by the genetic engineering protocol, we introduce software to encode data in bacterial plasmids. Genetic engineering deals with manipulating bacteria by inserting foreign DNA into plasmids. The plasmid is a carrier that allows the insertion of external DNA and transfers it to bacteria.
In this work, we have built open-source software that can be used to automatically store data in a bacterial plasmid using the concepts of genetic engineering. The software generates DNA sequences with a GC content of 50% and avoids long runs of identical bases (no homopolymers). First, we encode the text data into DNA sequences using the encoding strategy discussed in Section II-A. After encoding, this data is cloned into a plasmid, which can be selected from a list of available plasmids. The restriction enzymes that enable inserting data into the plasmid are automatically selected by the software based on the input DNA sequence.
After cloning, the data can be decloned back from the plasmid and then decoded into its original form. A Gel Electrophoresis simulation of the encoded DNA is included in the software, allowing the DNA to be visualized on a simulated gel according to its length. All the steps described above can be visualized in the software. This paper is organized as follows. Section 2 discusses the algorithms used for encoding the data, cloning the data into a plasmid, and then decloning and decoding the data back to its original form. Section 3 discusses the GUI of our software. Section 4 covers the functionality and workflow of the software. Section 5 contains examples showing how data can be encoded, cloned, decloned and decoded in the software. Section 6 provides the link to the website from which the software can be downloaded. Section 7 concludes the paper with some general remarks.
II. ALGORITHM
This section describes the algorithms used for encoding text data into a bacterial plasmid: cloning the data into the plasmid, decloning it back from the plasmid, and decoding it back to the original data.
A. Encoding the text data
To encode the data, first, a file is selected using the import function of the software.
After importing the file, it is encoded into the corresponding DNA sequences that will be cloned into the bacterial plasmid. To encode the data, we use the following encoding scheme (a code sketch follows the list):
• First, the text file is converted into binary format using the standard ASCII conversion table.
• This binary file is divided into n chunks (x_1, x_2, x_3, ..., x_n) of equal length, where the length of each chunk is 32 bits.
• Next, we form k new chunks (m_1, m_2, m_3, ..., m_k) out of the original n chunks using the XOR operation (addition modulo 2) on three consecutive chunks.
• An 8-bit header used for indexing is added to each chunk, so the total chunk size is 40 bits (32 bits of data + 8 bits of header).
• Next, each binary block is encoded into DNA using Table I, which converts binary to DNA. If the first bit of the binary block is '0' it is encoded as 'G'; if it is '1' it is encoded as 'C'.
• The above encoding scheme is designed in such a way that homopolymer runs and GC content are already taken care of, while the method proposed in [5] requires an additional screening step for testing these DNA constraints.
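The sketch below walks through this pipeline. Since Table I is not reproduced here, bits_to_dna uses a stand-in mapping that honours the stated rule (a leading '0' becomes 'G', a leading '1' becomes 'C') and avoids homopolymers by never repeating the previous base; the actual table in BacSoft may differ. The 8-bit header also limits this sketch to 256 encoded chunks.

```python
def text_to_bits(text):
    # standard ASCII conversion, 8 bits per character
    return ''.join(f'{ord(ch):08b}' for ch in text)

def make_chunks(bits, size=32):
    bits += '0' * (-len(bits) % size)  # pad the last chunk
    return [bits[i:i + size] for i in range(0, len(bits), size)]

def xor_bits(a, b):
    return ''.join('1' if x != y else '0' for x, y in zip(a, b))

def bits_to_dna(bits):
    # assumed mapping: bit 0 -> {G, A}, bit 1 -> {C, T}; pick whichever
    # option differs from the previous base so that no base repeats
    seq = ''
    for b in bits:
        cand = ('G', 'A') if b == '0' else ('C', 'T')
        seq += cand[0] if (not seq or seq[-1] != cand[0]) else cand[1]
    return seq

def encode(text):
    x = make_chunks(text_to_bits(text))
    dna = []
    for i in range(len(x) - 2):  # XOR of three consecutive chunks
        m = xor_bits(xor_bits(x[i], x[i + 1]), x[i + 2])
        dna.append(bits_to_dna(f'{i:08b}' + m))  # 8-bit header + 32 bits
    return dna
```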
B. Cloning the Data
Once the data is encoded, the user can select any "Desirable plasmid" and "Restriction Enzyme" category from the drop-down menus available. There are 176 plasmids available (e.g., pBR322, pUC18, pUC19 and others) and five restriction enzyme categories: 6+ Cutters, Restriction Enzymes, Unique and Dual Cutters, Unique 6+ Cutters, and Unique Cutters. After selecting the plasmid, if the imported text data exceeds the maximum capacity of the plasmid, a pop-up warning is displayed. At present, we have successfully cloned around 10 Kb of data into plasmids; the plasmids with the maximum insertion capacity are pJAZZ-OK and pJAZZ-OC, belonging to E. coli bacteria. On selecting the "MCS and Restriction Enzyme" button, the Multiple Cloning Sites (MCS) and the plasmid diagram appear on the screen. The MCS diagram, shown in Figure 8, describes the various restriction enzymes available in the corresponding plasmid along with their cloning sites. The plasmid diagram (see Figure 10) describes various elements such as antibiotic resistance markers (e.g. ampicillin, tetracycline, chloramphenicol, kanamycin), repressors, genes and other elements present in the corresponding plasmid. On selecting the "Clone Data" button, the data is cloned into the selected plasmid and the cloned-plasmid diagram appears, as shown in Figure 11. For cloning, the software employs the following steps:
• First, the software scans for a cloning site with sticky ends in the plasmid.
• Next, it checks whether the encoded data sequence starts with the complementary base pairs corresponding to the restriction enzyme. If not, it inserts the corresponding complementary base pairs at the start of the encoded data.
• After determining the restriction site, it checks the encoded data length and finds another cloning site according to size.
• Once it identifies the cloning sites, it checks for the complementary base pairs at the end of the encoded data needed to stick into the plasmid. If they are not found, it inserts the complementary base pairs corresponding to the restriction enzyme at the end of the encoded data.
• The data is cloned into the plasmid.
The cloning process described above can be seen in the cloned-plasmid diagram shown in Figure 11. In that figure, three diagrams appear in the top right corner:
• The first diagram shows both cloning sites of the plasmid along with the restriction enzymes selected for cloning the data. The restriction sites of the displayed restriction enzymes are shown at the bottom of the screen.
• The second diagram shows the DNA sequence at both ends of the encoded data.
• The third diagram shows that the DNA sequences of the encoded data are complementary to the restriction sites of the selected restriction enzymes at both ends; therefore, the data is successfully inserted into the plasmid. A schematic code version of the cloning steps is sketched below.
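The recognition sequences shown here (HindIII: AAGCTT, BamHI: GGATCC) are real, but the site-selection logic is a simplified stand-in for what the software does; sticky-end chemistry is not modelled.

```python
def clone(plasmid, insert, enz5="AAGCTT", enz3="GGATCC"):
    cut5 = plasmid.find(enz5)            # first cloning site
    cut3 = plasmid.find(enz3, cut5 + 1)  # second site, past the first
    if cut5 < 0 or cut3 < 0:
        raise ValueError("restriction sites not found in plasmid")
    # ensure the insert carries complementary ends for both sites
    if not insert.startswith(enz5):
        insert = enz5 + insert
    if not insert.endswith(enz3):
        insert = insert + enz3
    # splice the insert in place of the excised fragment
    return plasmid[:cut5] + insert + plasmid[cut3 + len(enz3):]
```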
C. Decloning and Decoding Data
To decode the data, first, the cloned data is decloned from the bacterial plasmid and then it is decoded to the original file.
First, the software takes the cloned data and searches for the cloning sites with blunt ends. It then cuts at those ends and recovers the encoded data that was inserted in the plasmid. After decloning the data from the plasmid, it decodes the data into the original text. The decoding algorithm is explained below (a code sketch follows at the end of this subsection):
• It takes the data and divides it into chunks of 40 bits each.
• It converts the A, T, G and C sequences back to the original binary data using the conversion Table I.
• It analyzes the 8-bit header and identifies which original chunks are present. For instance, let d = a ⊕ b ⊕ c be the encoded chunk formed by the XOR operation on the data chunks a, b and c. Suppose that while decoding, b and c are recovered and a is not; then a can be obtained by XORing d with b and c (a = d ⊕ b ⊕ c).
• The above step is repeated until all chunks are recovered. The binary sequence is then converted back to text using the standard ASCII conversion table, and in this way the original data is obtained.
After the decoding process, the "Gel Electrophoresis" diagram of the above experiment appears, as shown in Figure 12. From this figure we can see that the lengths of the encoded data and the decloned data are the same, which confirms the correctness of our experiment.
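The sketch below mirrors the decoding side, inverting the stand-in base mapping from the encoding sketch and showing the single XOR-recovery step; the full iterative peeling over all chunks is omitted.

```python
def dna_to_bits(seq):
    # inverse of the assumed mapping: G/A -> 0, C/T -> 1
    return ''.join('0' if base in 'GA' else '1' for base in seq)

def split_encoded(bits, size=40):
    # returns {header index: 32-bit payload} for each 40-bit chunk
    chunks = [bits[i:i + size] for i in range(0, len(bits), size)]
    return {int(c[:8], 2): c[8:] for c in chunks}

def xor_bits(a, b):
    return ''.join('1' if x != y else '0' for x, y in zip(a, b))

def recover_missing(d, b, c):
    """Given the encoded chunk d = a XOR b XOR c and the known chunks
    b and c, recover a = d XOR b XOR c."""
    return xor_bits(xor_bits(d, b), c)
```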
III. GRAPHICAL USER INTERFACE
The Graphical User Interface (GUI) of BacSoft has been developed so that users can easily upload any text file, or any file of encoded DNA sequences, and obtain the cloned bacterial plasmid corresponding to the selected plasmid. A schematic representation of the GUI is given in Figure 1.
A. Importing a text file. By clicking the import button (Figure 3), one can convert the imported text file into the corresponding encoded DNA sequences, which can be further used for cloning. Once the data is encoded, it can be seen on the screen under the section "Encoded Data"; the encoded data is also available in the file encoded.txt at C:/Software folder/plasmid/. A drop-down menu appears beside the label "Select Desirable plasmid", with around 176 different plasmid vectors to choose from. After selecting the desired plasmid, the user can select any restriction enzyme category from the drop-down menu beside the label "Select Restriction Enzyme". After selecting the plasmid and the restriction enzyme category, press the "OK" button (Figure 4); the selected plasmid vector sequence is displayed on the screen under the "Plasmid" section. On clicking the button "MCS and Restriction Enzyme" (Figure 5), two new pop-up windows appear: one displays the various restriction enzymes available in the selected plasmid (Figure 8), while the other shows the antibiotic resistance markers, repressors, proteins, genes, etc. available in the selected plasmid (Figure 10). On clicking the button "Clone Data" (Figure 6), the encoded data is inserted into the plasmid. The cloned data can be seen on the screen under the section "Cloned Data"; the text in red indicates the user's encoded text data, while the rest is the plasmid DNA sequence. A pop-up window is also displayed in which the text highlighted in pink is the user's encoded text data (Figure 9). The corresponding text file, cloned data.txt, can be found in the folder C:/Software folder/plasmid/, and the cloning process can be visualized as shown in Figure 11. On clicking the corresponding button (Figure 7), the software retrieves the data back from the plasmid and decodes it to the original data. The decoded data can be seen on the screen under the section "Decloned Data", together with the "Gel Electrophoresis" experimental simulation of the decloned and encoded data (Figure 12).
IV. FUNCTIONALITY AND WORKFLOW
The primary objective of this section is to provide an overview of the working and functionality of the software.
A. Importing and encoding text data
A text file can easily be imported using the import file option. In order to clone data into the bacterial plasmid, it needs to be converted into the corresponding DNA sequences. The proposed encoding method is already described in section II-A.
B. Display MCS and Restriction Enzymes
For cloning, the user first selects the plasmid into which the data will be cloned. To insert the data, the plasmid has to be cut. Restriction enzymes define the cloning sites where the plasmid can be cut and the data inserted. The user can see the various restriction enzymes present in the plasmid along with their cloning sites, as shown in Figure 8.
C. Clone Data
From the various restriction enzymes available, the software automatically selects the restriction enzymes that will be used to insert the data. The plasmid is then cut at the selected sites and the encoded data is inserted into it, as shown in Figure 11. In the figure, the upper right portion shows the chosen cloning sites along with their restriction enzymes, while the lower right corner shows the corresponding restriction enzyme sequences.
D. Declone data
The software looks for restriction enzymes with blunt ends to retrieve the data back from the plasmid, after which the data is ready to be decoded. The decoding algorithm discussed in Section II-C is applied, which gives back the original data. The user can also see the "Gel Electrophoresis" simulation of the experiment, as shown in Figure 12.
V. EXAMPLES
Using BacSoft, one can import any text file and obtain the corresponding encoded sequences, select restriction enzymes and MCS, clone the data into a plasmid, and finally visualize the Gel Electrophoresis simulation of the inserted data. If the user instead imports an already encoded sequence, the same choices of restriction enzymes and MCS apply before cloning the data into the plasmid. The details of the examples are given below.
A. Importing a text file
• For example, let us import the text file containing the text data "Start-up India.Stand-up India." (see Figure 13 under the section "Imported data from file").
• Once the file is imported, the text data is encoded into DNA sequences using the encoding strategy discussed in Section II-A. The encoded data is 320 bps long and can be seen in Figure 13 under the section "Encoded data".
• Select the plasmid "pBR322" and the "Unique Cutters" restriction enzyme category for cloning the encoded data. This plasmid contains 4361 bps and 52 unique restriction enzymes, which can be seen along with their cloning sites in Figure 8. The various elements present in the plasmid (antibiotic resistance markers, genes, proteins, etc.) are shown in Figure 10.
• The data is then cloned into the plasmid. The encoded data is highlighted in pink in Figure 9, and Figure 13 shows the cloned data under the section "Cloned data". From Figure 11, it can be seen that for this example the HindIII (A AGCTT) and BamHI (G GATCC) restriction enzymes are used, with cloning sites at positions 29 and 375 bps, respectively.
• The data is then decloned from the plasmid and decoded back to the original data using the decoding strategy discussed in Section II-C, as can be seen in Figure 13 under the section "Decloned data". The Gel Electrophoresis simulation of this example is shown in Figure 12.
B. Importing DNA Sequences
• For this example, we import a text file containing the encoded data "AATTTTTTAAGGCC". The total length of the data-encoded DNA is 14 bps.
• Select the plasmid "pBR322" and the "Unique Cutters" restriction enzyme category for cloning the encoded data. This plasmid contains 4361 bps and 52 unique restriction enzymes, which can be seen along with their cloning sites in Figure 8; the various elements present in the plasmid (antibiotic resistance markers, genes, proteins, etc.) are shown in Figure 10.
• The data is then cloned into the plasmid, with the encoded data highlighted in pink as shown in Figure 9. For this example, the HindIII (A AGCTT) and BsrFI (G CCGGT) restriction enzymes are used, with cloning sites at positions 29 and 160 bps, respectively.
VII. CONCLUSION
The software BacSoft gives a simple demonstration of encoding data in a bacterial plasmid. The software includes a data encoding method that preserves the GC-content and no-homopolymer DNA constraints for bacterial data storage. It enables the selection of a bacterial plasmid for cloning and facilitates a gel electrophoresis simulation for the encoded data. An example describing the encoding and decoding of data in the bacterial plasmid is illustrated. However, there remain challenges in developing robust encoding schemes for bacterial data storage that achieve the maximum capacity. As future work, we anticipate proposing an improved data encoding scheme using better error-correcting codes. | 2019-03-05T15:34:28.000Z | 2019-03-05T00:00:00.000 | {
"year": 2019,
"sha1": "0602c279c0df47a09c5290c7d8f0acfc5e4d1c0a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0602c279c0df47a09c5290c7d8f0acfc5e4d1c0a",
"s2fieldsofstudy": [
"Computer Science",
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Biology"
]
} |
9242957 | pes2o/s2orc | v3-fos-license | Theoretical and experimental investigations of macro-bend losses for standard single mode fibers
Modeling of macro-bend losses for single mode fibers with multiple cladding or coating layers is presented. Macro-bend losses for standard single mode fibers (SMF28) are investigated theoretically and experimentally, showing that the inner primary coating layer of SMF28 has a significant impact on the bend losses and that most of the radiated field is absorbed in the inner primary coating layer. The agreement between theoretical calculations and experimental measurements suggests that the so-called elastooptical correction is not required in modeling SMF28. © 2005 Optical Society of America
OCIS codes: (060.2310) Fiber optics; (060.2430) Fibers, single-mode
References
1. R. C. Gauthier and C. Ross, "Theoretical and experimental considerations for a single-mode fiber-optic bend-type sensor," Appl. Opt. 36, 6264-6273 (1997).
2. D. Marcuse, "Curvature loss formula for optical fibers," J. Opt. Soc. Am. 66, 216-220 (1976).
3. D. Marcuse, "Bend loss of slab and fiber modes computed with diffraction theory," IEEE J. Quantum Electron. 29, 2957-2961 (1993).
4. C. Vassallo, "Perturbation of an LP mode of an optical fiber by a quasi-degenerate field: a simple formula," Opt. & Quantum Electron. 17, 201-205 (1985).
5. I. Valiente and C. Vassallo, "New formalism for bending losses in coated single-mode optical fibers," Electron. Lett. 25, 1544-1545 (1989).
6. H. Renner, "Bending losses of coated single-mode fibers: a simple approach," J. Lightwave Technol. 10, 544-551 (1992).
7. L. Faustini and G. Martini, "Bend loss in single-mode fibers," J. Lightwave Technol. 15, 671-679 (1997).
8. A. J. Harris and P. F. Castle, "Bend loss measurements on high numerical aperture single-mode fibers as a function of wavelength and bend radius," J. Lightwave Technol. 4, 34-40 (1986).
9. R. Morgan, J. S. Barton, P. G. Harper and J. D. C. Jones, "Wavelength dependence of bending loss in monomode optical fibers: effect of the fiber buffer coating," Opt. Lett. 15, 947-949 (1990).
10. A. B. Shama, A. H. Al-Ani and S. J. Halme, "Constant-curvature loss in monomode fibers: an experimental investigation," Appl. Opt. 23, 3297-3301 (1984).
11. K. Nagano, S. Kawakami and S. Nishida, "Change of the refractive index in an optical fiber due to external forces," Appl. Opt. 17, 2080-2085 (1978).
Introduction
It is well known that radiation loss occurs when a single mode fiber is bent. Accurate modeling of this bend loss is essential for the design of fibers employed in optical communications or of optical devices based on a bent fiber, such as some forms of optical sensor [1]. The simplest model treats a bent fiber as a core-infinite cladding structure [2,3]. In fact, a practical fiber with coating layer(s) offering mechanical protection shows quite different bend loss characteristics to those predicted by the simplest model. Existing theoretical calculations of fiber bend losses treat a fiber as a core-cladding-infinite coating structure when considering the impact of the coating layer [4][5][6][7]. However, most fibers have double coating layers, and some fibers themselves have more than one cladding layer, such as depressed-cladding fibers, yet no existing formulas have been presented for modeling the bend losses of these fibers beyond predicting the conditions for maximum or minimum bend loss in relation to fiber parameters and input wavelength [8,9]. Therefore, a calculation of bend losses for a single mode fiber with multiple cladding layers or coating layers based on perturbation theory is first presented in Section 2; it can be used for the simulation and design of fiber devices with multiple cladding or coating layers when macro-bend losses are involved.
Previously published investigations of fiber bend loss have focused on special fibers (particularly fibers with small numerical apertures) rather than on standard single mode fibers (such as SMF28), which are widely used in optical communications. In the present paper, the bend loss characteristics of SMF28 are investigated theoretically and experimentally. The impact of three interfaces on bend losses is analyzed: 1) between the cladding layer and the inner primary coating layer; 2) between the inner primary coating layer and the outer primary coating layer; and 3) between the outer primary coating layer and air. Detailed comparisons between the experimentally measured results and the theoretically calculated bend losses based on different models are carried out in Section 4. These indicate that most of the radiated field is absorbed in the inner coating layer and that a theoretical model with only the inner coating layer predicts bend losses in good agreement with the experimental results. An elastooptical correction, or so-called effective bending radius, was required in previously published investigations in order to make the calculated bend losses agree with experimental results, due to the refractive index change caused by the bending stress. However, the agreement between the theoretical and measured results in the present paper suggests that this elastooptical correction is not required for SMF28.
Theoretical calculations of fiber bend loss
The total loss of a bent fiber includes the pure bend loss in the bent section and the transition loss caused by the mismatch of the propagation mode between the bent and the straight sections. For a single mode bent fiber of length L, the pure bend loss follows from the exponential attenuation of the guided power over the bent length [8], governed by the so-called bend loss coefficient α, which is determined by the fiber structure, the bending radius and the wavelength of the light. Most theoretical investigations of fiber bend loss focus on calculating this bend loss coefficient.
The simplest model treats the fiber as a core-infinite cladding structure, and a simple formula was developed to calculate the bend loss coefficient α [2]. A practical fiber contains one or two coating layer(s) outside to offer mechanical protection. The existence of the coating layer(s) produces so-called whispering-gallery modes in a bent fiber due to the reflection of the radiated field at the interface between the cladding layer and the coating layer. In order to account for the effect of this reflection on the bend loss, more complicated formulas for the bend loss coefficient α have been developed. A calculation of bend loss considering the coating layer was presented in Ref. [4] using perturbation theory, and subsequently two straightforward formulas were presented in Refs. [6] and [7], respectively. However, all these calculations are based on a fiber containing only one coating layer. Many fibers have more than one coating layer, or the fiber itself may have a multiple-cladding-layer structure, for which the bend loss cannot be calculated with these formulas. Therefore, a calculation of bend losses for a fiber with multiple cladding or coating layers using perturbation theory is presented below, generalizing the approach employed in Refs.
[4] and [7]; it can be used for the modeling and design of fiber devices with multiple cladding layers involving or utilizing bend loss. In common with previous models, the outermost layer is considered infinite in the present calculation. Figure 1 gives a schematic cross-section view of a bent fiber with multiple cladding or coating layers. The bending radius is denoted by R, and for the q-th cladding layer the refractive index is $n_q$ (the layer geometry is defined in Fig. 1).
Fig. 1. Cross-section of a bent fiber with multiple cladding layers.
Based on the approximations made in Refs. [4][5][6][7], the field in each cladding layer of the bent fiber is written as a combination of the Airy functions $\mathrm{Ai}$ and $\mathrm{Bi}$, following the notation of Refs. [6] and [7].
For the outermost infinite layer, the physical boundary condition at infinity fixes the relationship between its two expansion coefficients. For any two adjacent layers, the field-continuity boundary conditions (continuity of the field and of its derivative at the interface) relate the expansion coefficients of layer q to those of layer q+1 through a 2x2 transfer matrix. Considering all the cladding layers, the product of these transfer matrices links, in short form, the coefficients of the innermost cladding layer to those of the outermost layer. Combining this with the boundary condition between the first cladding layer and the core layer [7], and applying perturbation theory, the bend loss coefficient can be calculated. The advantage of this model is that it can not only be used to calculate the bend losses of fibers containing only one coating layer, as presented in Refs. [5][6][7], but is also suitable for fibers with multiple cladding layers (depressed-cladding fibers) or coating layers. The fiber used in Ref. [9] has two coating layers, and the experimental results and theoretical investigations presented there (predicting the conditions for maximum or minimum bend loss in relation to fiber parameters and input wavelength) show that, for that fiber, the radiated field penetrates through both the inner and the outer primary coating layers. With the above formulas, theoretical modeling that includes the outer primary coating shows better agreement between the experimental and theoretical results than the case where only the inner layer is considered.
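The transfer-matrix bookkeeping behind this chain of boundary conditions can be sketched as follows. In each layer the field is a combination $a_q\,\mathrm{Ai}(z) + b_q\,\mathrm{Bi}(z)$, and matching the field and its derivative at every interface links the coefficients of adjacent layers. The mapping from position to the Airy argument z (which encodes $n_q$, R and the propagation constant) is left abstract, and the Jacobian factors from the change of variables are omitted, so this illustrates only the matrix structure, not the paper's full formulas.

```python
import numpy as np
from scipy.special import airy

def layer_matrix(z):
    """2x2 matrix of Airy function values and derivatives at z."""
    ai, aip, bi, bip = airy(z)
    return np.array([[ai, bi], [aip, bip]])

def chain_coefficients(interfaces, a_out, b_out):
    """interfaces: list of (z_in, z_next) pairs, outermost interface
    first, where z_in is the Airy argument of the inner layer at the
    interface and z_next that of the layer just outside it. Propagates
    the outermost expansion coefficients (a_out, b_out) inward."""
    coeff = np.array([a_out, b_out])
    for z_in, z_next in interfaces:
        # continuity of field and derivative across the interface
        coeff = np.linalg.solve(layer_matrix(z_in),
                                layer_matrix(z_next) @ coeff)
    return coeff
```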
Experimental investigations of bend losses for SMF28
Figure 2 gives the experimental setup used for our measurement of fiber bend losses. An optical spectrum analyzer is used instead of an optical power meter because it can measure the bend loss at the peak output wavelength of the tunable laser rather than over a range of wavelengths. The fiber used in the experiment is SMF28, a very common fiber widely used in optical communication systems. It has core, cladding, and inner and outer primary coating layers; the corresponding parameters are presented in Table 1. Using a bent fiber of length 1-2 m, we measured the bend loss for bend radii from 8.5 mm to 12 mm inclusive, in increments of 0.5 mm, and in the wavelength range from 1500 nm to 1600 nm. For bend radii smaller than 8.5 mm the bent fiber breaks easily, while for bend radii larger than 12 mm the bend loss is too low for reliable and repeatable measurement. Figure 3 presents typical measured bend losses for SMF28 with and without an absorbing layer applied to the outside of the fiber. The curves of bend loss for a fiber without an absorbing layer show random variations that are small relative to the absolute bend loss at a given wavelength; the maximal variation in Fig. 3 is 3 dB when the bend loss is 18 dB. Furthermore, the measured results are not exactly repeatable and differ from run to run (Fig. 3 gives bend losses for ten measurements). After we coated the fiber with an absorbing layer, these random variations disappeared and the measured bend losses became invariant. This indicates that the random variations are caused by the reflection that occurs at the interface between the outer primary coating layer and air. It also shows that most of the radiated field is absorbed in the coating layers: only a small amount of the radiated field reaches the fiber surface and is reflected back, interfering with the propagation mode. Otherwise, according to the measured results and the analysis method presented in Ref. [9], the curve of bend loss as a function of wavelength should show a periodic oscillation comparable in amplitude to the bend losses themselves, due to the reflection at the interface between the outer primary coating layer and air, rather than the small random variations seen in our experiments.
Comparisons between theoretical and experimental results
Detailed comparisons between theoretical and experimental results are carried out in this section to investigate the accuracy of the different models, the impact of the coating layers on bend losses, and the so-called elastooptical corrections used in previously published modeling.
Initially, for the purpose of comparison with previous work, we treat the inner primary coating layer as infinite, which is also equivalent to the case where the inner primary coating layer absorbs most of the radiated field. Typical measured bend losses for different bending radii at two wavelengths, 1500 nm and 1600 nm, are given in Figs. 4(a) and 4(b) with squares and circles, respectively. Theoretical results based on the simplest model (core-infinite cladding structure), on the formulas proposed in Refs. [6] and [7], and on the formula presented in the above section (all treating the fiber as a core-cladding-infinite coating structure in this calculation) are also presented in Figs. 4(a) and 4(b).
From Figs. 4(a) and 4(b), one can first see that the coherent coupling between the fundamental propagating field and the radiated field reflected by the coating layer, i.e., the so-called whispering-gallery mode, has an apparent effect on the bend loss characteristics, so that the results calculated with the simplest model, i.e., treating the fiber as a core and infinite cladding structure, differ markedly from the measured bend losses. Simulation with the formula proposed in Ref. [6] predicts the impact of the whispering-gallery mode, as compared to the simplest model, but still does not agree well with the measured results, due to the approximations made in deriving that formula. Results calculated with the formula developed in Ref. [7] and with the formulas presented in Section 2 agree well with the experimental results; therefore, results based on the formulas of Section 2 are used in the following comparisons. The agreement between the experimentally measured bend losses and the calculated results in Fig. 4 also suggests that the inner primary coating layer absorbs most of the radiated field and that the outer primary coating layer has little impact on the bend loss characteristics. To verify this, we calculated bend losses including the outer primary coating layer with the formulas presented in Section 2, which have the advantage of handling more than one coating layer. The calculation treats the inner primary coating layer as ideally transparent and the outer primary coating layer as infinite. Figures 5(a) and 5(b) present the corresponding bend losses in the wavelength range from 1500 nm to 1600 nm for bending radii R = 9 mm and R = 10 mm, respectively. The measured bend losses and the results calculated considering only the inner coating layer are also presented in Fig. 5 for comparison. From Figs. 5(a) and 5(b), one can see that the bend losses calculated considering only the inner coating layer are much closer to the experimentally measured results. For a two-coating-layer structure, the wave-like variation of the bend loss with wavelength is mainly caused by reflections at the interface between the inner and outer coating layers; the fact that the measured results do not display this wave-like variation supports the conclusion that the inner coating layer absorbs most of the radiated field from the cladding. In previously published investigations, an elastooptical correction, or so-called effective bending radius, was required in order to make the calculated bend losses agree with experimental results [5][6][7][10][11], due to the refractive index change caused by the bending stress. Generally, the relationship between the effective bend radius $R_{\rm eff}$ used in modeling and the actual bend radius R in experiments is $R_{\rm eff} = 1.27\,R$, and this correction is applied in modeling bend losses for all bending radii and wavelengths. We recalculated the examples presented in previously published investigations with the formula developed in Ref.
[7] and the formulas presented in Section 2, respectively. Those calculations show that numerical results using the effective bending radius agree better with the measured bend losses than those obtained using the actual bending radius directly. However, in the above calculations the agreement between theoretical and measured results for SMF28 suggests that this elastooptical correction is not required, i.e., $R_{\rm eff} = R$. Theoretical bend losses for SMF28 with the so-called effective bending radius $R_{\rm eff} = 1.27\,R$ are presented in Figs. 6(a) and 6(b) with solid lines for the wavelength range from 1500 nm to 1600 nm and bending radii of 9 mm and 10 mm, respectively. The corresponding measured results, and the results calculated with the same bending radii as in the experiments, are also shown in Figs. 6(a) and 6(b). From this comparison one can see that using the effective bending radius in modeling the bend loss of SMF28 leads to incorrect results compared with the measured ones. This suggests that in the present experiment the bending stress has little effect on the refractive index, so that the so-called effective bending radius for SMF28 is almost equal to the actual bending radius. This could have two causes: one could be the fiber material itself, and the other the fiber parameter V. Compared to the fibers used in previously published investigations of macro-bend losses, a significant difference is that the fiber parameter V of SMF28 is larger than those of the fibers used in previous papers. For example, the fiber parameter V of the fiber LB1000 employed in Ref. [7] is 1.68 (for bending radii from 13.5 mm to 24 mm), whereas for the SMF28 used in the present paper the corresponding fiber parameter V is about 2.123 (for bending radii from 8.5 mm to 12 mm). Further experimental and theoretical investigations of the elastooptical correction for fiber bend loss are ongoing.
Conclusion
Modeling of bend losses for a single mode fiber with multiple cladding layers has been presented, and the bend loss characteristics of SMF28 have been investigated theoretically and experimentally. Both experimental and theoretical results show that reflection at the interface between the cladding layer and the coating layer has an apparent effect on the bend loss characteristics. Comparisons between the experimental and theoretical results indicate that: 1) most of the radiated field is absorbed in the inner coating layer, and a theoretical model with only a one-coating-layer structure agrees well with the experimental results;
2) the agreement between theoretical and measured results suggests that the so-called elastooptical correction used in previously published investigations is not required for SMF28.
Fig. 3. Measured bend losses for bending radius R=10.5 mm and bent length of 0.66 m.
Fig. 4. Measured and calculated bend loss for different bending radii at wavelength a) 1500 nm and b) 1600 nm.
Fig. 5. Calculated (with one coating layer and with two coating layers, respectively) and measured bend losses in the wavelength range from 1500 nm to 1600 nm for a) R = 10 mm and b) R = 9 mm.
Fig. 6. Calculated (with and without elastooptical correction) and measured bend losses in the wavelength range from 1500 nm to 1600 nm for a) R = 10 mm and b) R = 9 mm. | 2017-04-26T05:53:51.446Z | 2005-06-13T00:00:00.000 | {
"year": 2005,
"sha1": "aa45949a42009f9c1041d393084df8fae29ec631",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/opex.13.004476",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "08f5c0385a98d90a87f90ea8d817eb239083c141",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
15301583 | pes2o/s2orc | v3-fos-license | Stretching fibronectin fibres disrupts binding of bacterial adhesins by physically destroying an epitope
Although soluble inhibitors are frequently used to block cell binding to the extracellular matrix (ECM), mechanical stretching of a protein fibre alone can physically destroy a cell-binding site. Here, we show using binding assays and steered molecular dynamics that mechanical tension along fibronectin (Fn) fibres causes a structural mismatch between Fn-binding proteins from Streptococcus dysgalactiae and Staphylococcus aureus. Both adhesins target a multimodular site on Fn that is switched to low affinity by stretching the intermodular distances on Fn. Heparin reduces binding but does not eliminate mechanosensitivity. These adhesins might thus preferentially bind to sites at which ECM fibres are cleaved, such as wounds or inflamed tissues. The mechanical switch described here operates differently from the catch bond mechanism that Escherichia coli uses to adhere to surfaces under fluid flow. Demonstrating the existence of a mechanosensitive cell-binding site provides a new perspective on how the mechanobiology of ECM might regulate bacterial and cell-binding events, virulence and the course of infection.
Despite the intensive research on bacterial adhesins, so far there is no evidence that mechanical factors might regulate the binding efficiency of bacterial adhesins to extracellular matrix (ECM). Pathogens often begin host invasion by binding to ECM proteins, which display a variety of specific adhesion sites for bacteria and eukaryotic cells. Numerous microbes have evolved cell-surface proteins that expose recognition sequences for a majority of host ECM proteins, including fibronectin (Fn) 1 and serum proteins 2,3 , leading to a wide variety of diseases, including wound infections. Fn is a glycoprotein that circulates in body fluids (300 µg ml−1) as a soluble compact dimer and is assembled by cells into insoluble ECM fibres 4,5 . It has a critical role in wound healing, in which its expression and fibrillogenesis are known to be upregulated to assist tissue regeneration 6,7 , thus making it a well-suited target for bacterial binding 8 . Many bacteria, including Staphylococcal and Streptococcal strains and Spirochetes, express cell-wall-anchored Fn-binding proteins (FnBP) that recognize the same binding site on Fn 9,10 .
The multimodularity of Fn (Fig. 1a) allows spatial distribution of distinct recognition sites along the molecule, enabling diverse interactions such as with other ECM proteins, growth factors, and prokaryotic and eukaryotic cells 11 . Dimeric Fn contains more than 50 modules that belong to one of the three structural β-sheet motifs, FnI, FnII and FnIII 12 . To enhance specificity, the bacterial FnBP exploits the modular structure of Fn by simultaneously binding to several FnI modules that define the amino (N)-terminal 29 kDa region 13,14 (Fig. 1b). The bacterial FnBP aligns antiparallel with up to five FnI modules and undergoes a disordered-ordered transition upon binding to form a tandem β-zipper 15 . A comparison of FnBP across different classes of bacteria shows that the bacterial Fn-binding repeats (FnBR) are each made up of 35-40 residues that form the primary binding site to Fn, but the number of FnBR varies considerably across species, containing just one for Borrelia burgdorferi 16 and 11 for Staphylococcus aureus 17 (Fig. 1b). The significance of this variation in relation to the specific modes of host adhesion and invasion is not known.
In wounds, bacteria encounter both soluble plasma Fn and fibrillar matrix Fn 18,19 . Although the interaction of bacterial FnBP with either surface-adsorbed Fn or Fn in solution is well characterized 17,20 , it is not known whether tensile mechanical forces might have a role in regulating the interaction of bacterial adhesins with ECM molecules. Fn in fibrillar matrix is known to have distinctly different conformations compared with soluble Fn or surface-adsorbed dimeric Fn (as reviewed in ref. 11). Cells assemble Fn into matrix fibres 21,22 , and cell-generated forces are sufficient to stretch and partially unfold fibrillar Fn 23,24 , thus stabilizing a broad range of new conformations that are not found under equilibrium conditions. Significant in the context of understanding Fn as a mechanoregulated protein were the findings that the stretching of Fn fibres can disrupt an antibody epitope located between FnI 9 and FnIII 1 (ref. 25) and expose cryptic binding sites in FnIII modules that are buried in relaxed Fn fibres 26 .
How does mechanoregulation of fibrillar Fn impact specific Fn-mediated bacterial adhesion? The heterogeneous distribution of Fn molecular conformations in the ECM of living cells 23,24 makes it difficult to directly show bacterial binding being affected by ECM fibre tension, as the size of bacteria is at the typical length scale at which stretch-induced structural heterogeneities exist in native ECM. To circumvent this problem, here we use a cell-free binding assay that allows the stretching of single Fn fibres through the full range of physiologically relevant conformations that are present in cell culture 25,27 . To explore whether tensile forces acting on protein fibres can regulate the binding of an adhesin, a bacterial peptide that is part of an FnBR from Streptococcus dysgalactiae was used, as a high-resolution structure in complex with the fragment FnI 1-2 is available 15 . S. dysgalactiae, the causative agent of bovine mastitis 28 , has four FnBRs that can potentially interact with not just one but multiple Fn molecules within a fibril. The FnBP-containing adhesins expressed on the bacterial surface are up to 1,000 amino acids long and contain potentially other specific and nonspecific binding sites that could interact with Fn or other ECM components 10 .
Here, we explore how the specific binding of two short bacterial peptides, B3 and STAFF5, derived from S. dysgalactiae and S. aureus, respectively, is modulated by stretching Fn fibres (Fig. 1b,c). To understand the underlying structural aspects of mechanoregulation, we used steered molecular dynamics (SMD) simulations to determine how tensile force applied to the B3T-FnI 1-2 complex could alter the hydrogen-bond network defining this receptor-ligand interaction. Together with the experimental results derived from the Fn-binding assay, we show that both bacterial peptides bind significantly less to stretched than to relaxed Fn fibres, thus demonstrating the mechanoregulation of a cell-binding site on the N-terminus of Fn.
Results
Stretching Fn fibres destroys the bacterial binding site. To determine whether mechanical strain alters the binding of a bacterial adhesin, a fragment B3 of FnBR-4 from S. dysgalactiae (Fig. 1b) was used. The B3 peptide (Fig. 1c) was synthesized with an additional N-terminal cysteine residue (B3C) in order to label it with Alexa Fluor 488 dye (B3C-488). A binding assay that allows the stretching of single Fn fibres through the full range of physiologically relevant conformations (from fully relaxed to breakage) 27 was used in combination with optical colocalization studies to experimentally verify strain-dependent binding 25 . Fn fibres were manually deposited on stretchable silicone sheets (Fig. 2a,b). To quantify the binding, ratiometric measurements of labelled B3C-488 bound to Fn fibres that contained Cy5-labelled Fn were observed as a function of fibre extension (mechanical strain; Fig. 2c). Depositing fibres in different orientations on the same sheet allowed an overview of differential binding as a function of strain. To increase statistical significance, we deposited fibres parallel to each other that are under the same mechanical strain (Fig. 3a-f). The intensity ratio of the labelled B3C-488 (I B3C-488 ) per labelled Fn (I Fn-Cy5 ) versus the mechanical strain shows that the binding of the bacterial peptide B3C-488 decreased significantly when fibres were stretched (Fig. 4a).
To demonstrate that mechanosensitivity is a more common feature of bacterial FnBPs, the experiment was repeated using a different peptide, STAFF5, which is part of the fifth FnBR in the Fn-binding protein A (FnBPA) of S. aureus (Fig. 1b,c). The peptide STAFF5 binds to Fn using the same mechanism as B3C binding to Fn, but recognizes FnI 4-5 (ref. 29) close to the N-terminus of Fn. This peptide was also synthesized with an additional N-terminal cysteine residue that was used to conjugate Alexa Fluor 488 dye (STAFF5C-488). As the mechanical strain increased, we observed a decrease in binding of STAFF5C-488 to Fn fibres, similar to our findings with B3C-488 (Fig. 4b).
Mechanosensitive binding unaltered by heparin and soluble Fn.
The same N-terminal region of Fn to which most Gram-positive bacteria adhere also functions as a binding site for soluble Fn 30 and heparin 31 (Fig. 1a). High concentrations of soluble Fn are present in serum 32 , and heparin is frequently administered as a preoperative anticoagulant 33 . As both of these molecules might coregulate the binding of bacterial adhesins to Fn, we asked whether the presence of either one of the molecules might impact the force-regulated binding of bacterial FnBRs to fibrillar Fn. In the presence of physiological concentrations of soluble Fn (300 µg ml−1), the binding of B3C to relaxed and stretched Fn fibres decreases by 40 or 60%, respectively, but is still strain dependent (Fig. 4c). Confirming the trend seen in Figure 4c, more bacterial peptide binds to relaxed than to stretched Fn fibres, which indicates that the peptide is sensitive to Fn fibre tension even in the presence of soluble Fn. The decrease in B3C binding to Fn fibres in the presence of soluble Fn could be attributed to either B3C binding to soluble Fn in solution without further interaction of this complex with fibrillar Fn, and/or soluble Fn binding to fibrillar Fn and partially blocking B3C binding to the fibres.
Similar results were obtained for heparin, in which we observe a reduction in the binding of B3C to relaxed and stretched Fn fibres of 25 or 40%, respectively (Fig. 4d). This is in good agreement with an earlier study in which 100 µg ml−1 of heparin reduced bacterial adhesion to implant surfaces coated with Fn 34 . Our data indicate that both soluble Fn and heparin decrease the binding of bacterial adhesin to Fn fibres, but importantly the presence of either does not alter the mechanosensitivity of the binding of bacterial peptide to Fn fibres.

[Figure 1 legend, fragment] Grey regions indicate conserved residues; letters in green are the residues mainly involved in β-sheet interactions with the corresponding Fn modules; and the numbers correspond to the first and last residues. For site-specific photolabelling, the B3 moiety was synthesized with an additional N-terminal cysteine (B3C). B3T, the truncated form of the B3 peptide, is part of the nuclear magnetic resonance structure that has been used for SMD simulations (PDB code 1O9A). The sequence of peptide STAFF5C is given in the last line.
Fn stretching causes structural mismatch of binding epitope.
To explore the underpinning mechanism by which tensile force acting on Fn fibres can disrupt bacterial-binding sites, we used SMD to simulate the stretching of FnI modules in complex with the bacterial peptide B3T (B3 truncated; Fig. 1c). The nuclear magnetic resonance structure of the FnI 1-2 -B3T complex 15 (PDB 1O9A) was used as a starting structure for all simulations and hydrated in a box filled with explicit water molecules. The bacterial B3T peptide binds to both FnI modules by the antiparallel alignment of the two binding motifs on B3T with two distinct β-sheets formed by FnI 1 and FnI 2 . For SMD simulations, the solvated system was equilibrated for 2 ns before applying constant tensile force (defined as t = 0 ns; Fig. 5a,b). Three independent simulations were performed, each of which lasted for 7 ns.
When stretching the FnI 1-2 modules with an external mechanical force of 400 pN applied to its terminal ends, the β-sheet formed between FnI 1 and B3T is destroyed after 2 ns of pulling (Fig. 5c,d and Supplementary Movie 1). In addition, we observe that the distance between FnI 1 and FnI 2 increases (green curve in Fig. 6a). This coincides with a decrease in the number of backbone hydrogen bonds formed between Fn and B3T (green curve in Fig. 6b). While stretching the FnI 1-2 modules, the β-zipper motif formed with module FnI 1 was disrupted, whereas the backbone hydrogen bonds formed between FnI 2 and the bacterial peptide remained intact. In the second simulation, B3T detaches from FnI 2 but remains bound to FnI 1 (blue curves in Fig. 6a-c). In the third simulation, the number of backbone hydrogen bonds decreases only slightly (red curve in Fig. 6b). This is because the corresponding starting structure did not show a pronounced β-interaction between the carboxy (C)-terminus of B3T and FnI 1 and thus fewer bonds were broken when compared with the other two trajectories. We also observe a small increase in side-chain hydrogen bonds formed between Fn 1-2 and B3T (red curve in Fig. 6c), which are able to form near the C-terminus of B3T. However, similar to the observations in the other two simulations, the distance between FnI 1 and FnI 2 increases (red curve in Fig. 6a).
In all three simulations, stretching Fn causes a structural mismatch leading to partial detachment of B3T from FnI 1-2 , whereby the formation of the tandem β-zipper is partially destroyed and reduced to a monomodular interaction between the bacterial peptide and one of the FnI modules. The number of side-chain hydrogen bonds fluctuates significantly and differs among simulations because of the large mobility of the bacterial peptide once it partially disconnects from the Fn fragment (Fig. 6c). However, in all our previous simulations of β-sheet motifs, we found that the major force-bearing interactions were defined by backbone and not by side-chain interactions 35,36 . A detailed overview of both backbone and side-chain intermolecular hydrogen bonds observed in the first simulation can be found in Figure 7. Taken together, the insights gained from SMD simulations provide a high-resolution structural mechanism that shows how mechanical force pulling on Fn can affect the interaction between B3 and FnI 1-2 , thereby offering an explanation for the experimentally observed decrease in binding of the bacterial peptide to stretched Fn fibres.
Discussion
The finding that the specific binding of bacterial FnBR can be mechanically regulated (Fig. 4) is to our knowledge the first experimental demonstration that mechanical forces acting on ECM fibres can disrupt a cell-binding site. Various FnBPs that are intrinsically disordered in solution 38 engage up to five FnI modules (Fig. 1) to form the tandem β-zipper that ensures specific binding. Insights into the structural mechanism that mediates the experimentally observed force-induced disruption of the bacterial adhesin interacting with FnI modules (Figs 2-4) were obtained by SMD simulations (Figs 5-7). Stretching of consecutive FnI modules bound to bacterial peptides increases the distance between the bacterial binding sites of Fn. This structural mismatch finally leads to the partial dissociation of the bacterial peptide from FnI modules (Fig. 5c), thus disrupting the multimodular interaction.
Our results with the peptide STAFF5 derived from S. aureus (Fig. 4b) indicate that the structural unbinding mechanism described for the bacterial peptide B3 interacting with FnI 1-2 (Fig. 5) might be more general. Experimental results from STAFF5 illustrate that mechanical forces acting on Fn can also cause a strain-induced structural mismatch at FnI 4-5 , in which the two FnI modules are separated by a shorter linker chain than in FnI 1-2 . The FnI modules contain many other physiologically significant binding sites, such as those for heparin, collagen, tenascin and fibrin (Fig. 1a), which could also potentially be mechanoregulated.
Soluble Fn and heparin are known to recognize the same N-terminal domain of Fn as the bacterial adhesins 11 ; therefore, we tested the binding of B3C to Fn fibres in the presence of soluble Fn and heparin. The binding of bacterial peptide to Fn fibres is significantly reduced in both instances (Fig. 4c,d), which correlates well with previous studies that show inhibition of FnBP-mediated bacterial adhesion to Fn-coated surfaces 34,39 . The question arises whether the observed inhibitory effects would also be relevant in vivo, as the K d of B3C (a part of an FnBR) to Fn fragments is about 1 µM 15 , whereas complete FnBR can bind to Fn with nanomolar affinity 10 because of the multimodularity of the interaction. However, it is important to note that the affinities were measured to Fn fragments in solution, and not to (full-length) fibrillar Fn. Furthermore, in the case of heparin, evidence suggests that the binding site is located on loop regions and not on the β-strands of the FnI modules 31 , meaning that heparin and bacterial FnBRs do not necessarily compete for the same epitope on Fn. Hence, it is possible that heparin sterically hinders the binding of bacterial peptides to Fn. Regardless of the degree of the observed inhibition, our results indicate that the mechanosensitivity of the FnBP-Fn interaction remains unaltered even in the presence of soluble Fn and heparin at physiological concentrations.

[Figure 4 legend] The mean of the intensity ratios (I B3C-488 /I Fn-Cy5 ) of relaxed fibres was set to 1 and the other values were scaled accordingly. All mean values (shown by black bar) are significantly different from each other with P < 0.001 (unpaired two-tailed Student's t-test), except where noted. (a) Intensity ratio plot of Alexa488-labelled B3C to Cy5-labelled Fn fibres versus Fn fibre strain (see Fig. 3). Each experiment (shown as differently shaped and coloured data points) consisted of 30 fibres: 10 were relaxed (~7% strain), 10 deposited only in a prestrained state (no mechanical manipulation of the silicone sheet, ~140% strain 26 ) and 10 stretched (~300 or ~380% strain). (b) Intensity ratio plot of Alexa488-labelled STAFF5C to Cy5-labelled Fn fibres versus Fn fibre strain. Each experiment (shown as differently shaped and coloured data points) consisted of 40 fibres, including 20 relaxed (~0 and ~100% strain) and 20 stretched fibres (~250 and ~380% strain). Inhibition of strain-dependent binding of the bacterial peptide B3C-Alexa488 to Cy5-labelled Fn fibres in the presence (red bars) of (c) soluble Fn (300 µg ml−1) and (d) heparin (100 µg ml−1). The mean of the intensity ratio of relaxed fibres in the absence (blue bars) of soluble Fn or heparin was set to 1 and the remaining three values were scaled accordingly. All values are significantly different from each other (unpaired two-tailed Student's t-test) with P < 0.0001, except where indicated. Values are means of intensity ratios of 20 or 10 fibres in the presence of Fn or heparin, respectively, and error bars indicate s.d.
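To make the quantification scheme in the legend above concrete, the following minimal Python sketch (not the authors' analysis code, which used MATLAB/ImageJ) normalizes hypothetical per-fibre intensity ratios to the relaxed-fibre mean and compares the two strain conditions with an unpaired two-tailed Student's t-test; all numbers are illustrative placeholders.

# Minimal sketch (not the authors' code): normalize per-fibre intensity
# ratios to the relaxed-fibre mean and compare conditions with an unpaired
# two-tailed Student's t-test, as described in the Figure 4 legend.
import numpy as np
from scipy import stats

# Hypothetical per-fibre intensity ratios I_B3C-488 / I_Fn-Cy5:
relaxed = np.array([1.10, 0.95, 1.02, 1.08, 0.99, 1.01, 0.97, 1.05, 0.93, 1.04])
stretched = np.array([0.55, 0.61, 0.48, 0.59, 0.52, 0.57, 0.50, 0.63, 0.49, 0.56])

# Set the mean of the relaxed fibres to 1 and scale all values accordingly.
scale = relaxed.mean()
relaxed_norm = relaxed / scale
stretched_norm = stretched / scale

t_stat, p_value = stats.ttest_ind(relaxed_norm, stretched_norm)  # unpaired, two-tailed
print(f"relaxed mean = {relaxed_norm.mean():.2f}, "
      f"stretched mean = {stretched_norm.mean():.2f}, p = {p_value:.2g}")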
Notably, the force-regulated mechanism discovered here is distinctly different from the catch-bond mechanism that some bacteria have evolved to adhere to surfaces under fluid flow 40 . In the case of E. coli adhesion (type 1 fimbriae adhere to mannose), force regulation is achieved by fluid shear stress pulling on a ligand that sits in a binding pocket, thereby activating the long-lived catch-bond state. In contrast, the binding of bacterial FnBR is weakened by stretching Fn, for example, by cell-generated forces. In this case, the force acts along the Fn fibre axis and destroys the structural match between the receptor and ligand, thereby inhibiting binding even in situations where no force directly pulls on the bacterial adhesin.
Can cells sufficiently stretch ECM fibres to activate the FnI mechanical adhesion switch and thus downregulate the binding of bacterial adhesins? Quantifying this in cell culture is difficult because of the high density of spatially colocalized conformations of differently stretched Fn, as well as the temporal variations in fibre tension 24,27 . It is important to recognize that the Fn strains in our binding assays are within the regime of strains displayed by Fn matrix in cell culture, as shown by fluorescence resonance energy transfer measurements: a considerable fraction of Fn fibres are known to be stretched more than threefold by cells in two-dimensional and three-dimensional cell cultures 27,41 , depending on the contractility of the cells and the physiological state of ECM. The observed decrease in binding occurs at Fn fibril strains of 300% or more, whereas in SMD simulations the unbinding of B3T occurs at FnI 1-2 strains of about 50%. This is due to the different mechanical stabilities of the more than 50 different modules per dimeric Fn (types I, II and III). Besides increasing the intermodular distances of FnI modules, the mechanical force pulling on an Fn fibre also stretches out the other modules and triggers the unfolding of FnIII modules 24 . As the force needed to activate the increase in the intermodular distance between FnI modules is roughly comparable to that of unfolding the first FnIII modules 35,42 , the increase in the intermodular distances between FnI modules only partially contributes to the total strain of a Fn fibre.

[Figure 7 legend] The figure tabulates the donor-acceptor residue pairs of the intermolecular hydrogen bonds observed in the first simulation (for example, PHE54-PHE28, CYS56-ILE26, ILE102-GLU18 and CYS104-GLU16). Constant external force of 400 pN was applied starting at 0 ns. Backbone hydrogen bonds (orange) between FnI 1 and B3T are disrupted at ~1 ns (PHE54-PHE28 and CYS56-ILE26), whereas FnI 2 stays bound to B3T (ILE102-GLU18 and CYS104-GLU16). The total numbers of intermolecular hydrogen bonds for this simulation are plotted in the green curve in Figure 6, and the corresponding structures are displayed in Figure 5. Side-chain hydrogen bonds are displayed in black.
Because of the intrinsic limits on how long all-atom simulations can be run, forces used in SMD simulations are typically higher than those physiologically observed 43 . It is important to note, though, that it is not the direct correlation with force but the force-induced mechanical strain that defines the switch in the structure-function relationship of stretched proteins. The key motivation for conducting SMD simulations is to identify the force-stabilized structural intermediate states. Several structural mechanisms initially deduced from SMD for other protein systems have been experimentally validated. These include the derivation of a first structural model of how catch-bond-forming bacterial adhesins work 40,44 , structural predictions proposing a mechanism for how stretching talin causes the exposure of vinculin-binding sites 45,46 , elucidation of the structural mechanism by which titin kinase is force activated 47 and recent efforts to engineer proteins with enhanced mechanical properties 48 .
Taken together, all these observations indicate that our assay using manually deposited Fn fibres provides physiologically relevant insights, and that cell-generated forces are sufficiently high to deactivate the specific binding of bacterial adhesins. However, it should be noted that mechanoregulation of the specific interactions between bacterial FnBP and Fn fibres might be masked since the long protrusions that define bacterial adhesins comprise many other specific and nonspecific binding sites. Each of these epitopes could have a different or non-existent mechanosensitivity. Furthermore, the stretching of Fn fibres gradually exposes hydrophobic amino acids because of the loss of secondary structure, thus additionally promoting nonspecific adhesion 25 .
Many prokaryotic and eukaryotic cell adhesins have evolved polyvalent structural motifs in order to interact with their target host proteins (Fig. 1a). For instance, integrins α5β1 and αIIbβ3 both recognize not just the RGD (arginine-glycine-aspartic acid) loop on FnIII 10 but also the adjacent synergy site on FnIII 9 49 , thus exploiting a bivalent binding strategy to Fn tandem modules to enhance interaction. Interestingly, phylogenetically distinct bacteria use similar motifs to bind to Fn but differ in the number of binding repeats. Multivalent interactions in general extend the lifetime of molecular interactions 50 . How pathogens exploit multivalency in order to colonize highly specialized niches by optimizing their adhesins is poorly understood. Our data provide the first hints that this divergence might equip bacteria with a sensory tool to differentially probe ECM tension.
It is of paramount importance to understand how bacteria evolved their adhesins to optimize their strategies for host invasion and infection. This first demonstration of a mechanoregulated binding site raises the intriguing question of whether bacteria can distinguish healthy tissue from wound sites by sensing matrix tension acting on Fn fibres. At wound sites and areas of inflammation, the ECM fibres are physically or proteolytically cleaved, which should lead to their relaxation. Injured or diseased tissues might thus present Fn in different physical states, and we speculate that this could regulate early adhesion events. The finding that the specific binding of adhesins might be regulated by the tension of ECM fibres provides a unique and new perspective on how the mechanobiology of ECM might regulate early bacterial adhesion and the subsequent course of infection.
Methods
Isolation of Fn and protein labelling. Fn was isolated from human plasma (Zürcher Blutspendedienst SRK) using gelatin-sepharose chromatography based on established methods 51 . Experiments were approved and authorized by the Swiss Federal Office for the Environment and the Swiss Federal Coordination Center for Biotechnology (notification number A080170). Briefly, 2 mM phenylmethylsulphonyl fluoride and 10 mM EDTA were added to human plasma and spun at 15,000 g for 40 min. Plasma was first passed over the gelatin Sepharose 4B column (Pharmacia) and subsequently the flow-through was passed over the Sepharose 4B column (Sigma-Aldrich). The gelatin column was washed with 2 mM phenylmethylsulphonyl fluoride and 10 mM EDTA in PBS. Wash completion was verified when the 280 nm absorbance of the flow-through was <0.05. The gelatin column was washed with 1 M NaCl and 1 M urea, and finally Fn was eluted from the column with 6 M urea. Purity was approximated by silver stain and western blot. Isolated Fn was stored at −80 °C in 6 M urea until usage.
The buried cysteines within modules FnIII 7 and FnIII 15 of each Fn dimer were site-specifically labelled with Cy5-maleamide (647 nm; Molecular Probes, Invitrogen) following established protocols 24,27 . Briefly, isolated plasma Fn at about 5 g l − 1 in PBS was denatured in an equal volume of 8 M GdnHCl and incubated for 1 h with a 20-fold molar excess of Cy5-maleimide at room temperature. The labelled Fn was then separated from the free dye by size exclusion chromatography (PD-10 Sephadex, Amersham) into PBS. The labelling ratio of Cy5 per Fn dimer was determined by measuring the absorbance of Fn-Cy5 at 280 and 647 nm and using published extinction coefficients for the fluorophore and Fn. The labelled Fn was stored at -20°C until needed and used within 2 days of thawing. Before use, labelled and unlabelled Fn aliquots were centrifuged at 10,000 g for 10 min to remove aggregates.
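The labelling-ratio calculation described above amounts to a Beer-Lambert estimate, sketched below in Python. The extinction coefficients and the 280 nm correction factor are assumed literature-typical values for Cy5 and the Fn dimer, not numbers reported in this paper, and the absorbance readings are hypothetical.

# Sketch of a degree-of-labelling (Cy5 per Fn dimer) estimate from
# absorbance at 280 and 647 nm. Extinction coefficients and the Cy5
# correction factor at 280 nm are assumed typical values, not taken
# from this study.
EPS_CY5 = 250_000   # M^-1 cm^-1 at 647 nm (assumed)
EPS_FN = 563_000    # M^-1 cm^-1 at 280 nm for the ~440 kDa Fn dimer (assumed)
CF_280 = 0.05       # fraction of Cy5 absorbance contributing at 280 nm (assumed)

def degree_of_labelling(a280: float, a647: float, path_cm: float = 1.0) -> float:
    """Return the average number of Cy5 dyes per Fn dimer."""
    dye_conc = a647 / (EPS_CY5 * path_cm)
    protein_conc = (a280 - CF_280 * a647) / (EPS_FN * path_cm)
    return dye_conc / protein_conc

# Example with hypothetical absorbance readings:
print(f"{degree_of_labelling(a280=0.60, a647=0.50):.1f} Cy5 per Fn dimer")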
The bacterial B3C moiety from S. dysgalactiae and STAFF5C from S. aureus were synthesized (Genscript Corporation) with a terminal cysteine for photolabelling (peptide sequences are given in Fig. 1c). A 200 µg ml−1 solution of each peptide was labelled with Alexa Fluor 488 (Molecular Probes, Invitrogen) as described above and stored at −20 °C until needed.
Fn fibre assembly and deposition on stretchable silicone sheets. Fn fibres were pulled from a concentrated droplet of Fn solution and deposited on silicone sheets (Speciality Manufacturing). The silicone sheets were mounted and stretched in a one-dimensional strain device as previously described (Fig. 2b) 27 . Briefly, 0.25-mm-thick silicone sheets were cut into 5×1.7 cm rectangles and a 300 µg ml−1 Fn solution (5% Fn-Cy5, 95% unlabelled) was deposited as a small drop on the sheet. With the aid of a pipette tip, Fn fibres were drawn by hand out of the droplet and deposited (Fig. 2a). The samples were rinsed and kept hydrated with PBS. The fibres were deposited either parallel or perpendicular to the strain axis. The fibres are deposited on a prestrained sheet (150% strain), which is relaxed in the x axis; this results in stretching of the sheet in the transverse direction when using a one-dimensional straining device. Fibre strain was calculated from the externally adjusted strain of the silicone sheet as previously described 24 . The intensity ratios of the samples were measured with confocal microscopy as described below.
Assay to probe the binding of bacterial adhesins to fibrillar Fn. To quantify bacterial peptide binding to fibrillar Fn, samples of manually deposited fibres were prepared and incubated with 0.1 M iodoacetamide to alkylate any free cysteine that might get exposed by fibre stretching 25-27 and could potentially react with free Alexa-488 maleimide dye. This was followed by incubation with 4% bovine serum albumin for 30 min to block nonspecific binding. After rinsing with PBS, the samples were incubated with B3C-Alexa488 or STAFF5C-Alexa488 for 30 min. Finally, the samples were rinsed with PBS (3×) and imaged under hydrated conditions (that is, immersed in PBS) at room temperature.
Confocal microscopy. All confocal images were acquired with an Olympus FV1000 confocal microscope (Olympus) with a water immersion 0.9 NA ×40 objective. Emitted light from the sample, as well as differential interference contrast images, were detected with two photomultiplier tubes. Images were acquired at 512×512 pixel resolution for a 318 by 318 µm field of view. Acquisition parameters including laser transmissivity, pixel dwell time and pinhole size were adjusted to prevent photobleaching while maximizing detection sensitivity. Photomultiplier tube gains were kept constant during measurements within an experiment.

Image analysis. Confocal images were analysed using MATLAB (MathWorks) and ImageJ. The dark current values were subtracted, the images were scanned and all pixels above a certain threshold were considered as being part of the fibre (to avoid inclusion of intensity peaks of the background). The ratios presented in Figure 3 were taken by dividing the mean of the intensity at 488 nm (that is, the intensity stemming from the Alexa488-labelled bacterial peptide) by the mean of the intensity at 633 nm (that is, the intensity stemming from the Cy5-labelled Fn fibre). All analysed images have a resolution of 512 by 512 pixels.
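A minimal Python equivalent of this pipeline is sketched below (the original analysis used MATLAB and ImageJ): dark-current subtraction, thresholding and a per-fibre mean-intensity ratio. The dark-current values, threshold and file names are placeholders, masking on the Cy5 channel is an assumption (the text does not specify the channel used for thresholding), and the tifffile package is assumed available.

# Minimal sketch of the described ratiometric analysis (not the authors'
# code): subtract dark current, keep pixels above a threshold as "fibre",
# and divide mean peptide intensity by mean Fn-Cy5 intensity.
import numpy as np
from tifffile import imread  # assumed dependency for reading TIFF stacks

def fibre_intensity_ratio(path_488: str, path_633: str,
                          dark_488: float = 100.0, dark_633: float = 100.0,
                          threshold: float = 200.0) -> float:
    ch488 = imread(path_488).astype(float) - dark_488   # Alexa488-labelled peptide
    ch633 = imread(path_633).astype(float) - dark_633   # Cy5-labelled Fn
    fibre_mask = ch633 > threshold                      # fibre = bright Fn-Cy5 pixels
    return ch488[fibre_mask].mean() / ch633[fibre_mask].mean()

# Hypothetical file names:
ratio = fibre_intensity_ratio("fibre_B3C488.tif", "fibre_FnCy5.tif")
print(f"I_B3C-488 / I_Fn-Cy5 = {ratio:.2f}")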
Computer simulations. Simulations were performed using the open-source molecular dynamics software NAMD and the CHARMM27 force field 52 . The program VMD was used as a visualization tool 53 . Long-range electrostatic forces were calculated with the particle mesh Ewald summation with a grid size of <1 Å. Van der Waals interactions were simulated using a switching function starting at 10 Å and a cutoff of 12 Å. We used three nuclear magnetic resonance structures of the FnI 1,2 -B3 complex from S. dysgalactiae, which were obtained from PDB (accession code 1O9A) 15 . For every simulation, the structure was placed in an explicit TIP3P water box 54 and ions were added to obtain an electrically neutral system with a physiological salt concentration of 0.15 M. The water box was designed to have 15 Å padding along two axes and 40 Å along the third direction, so as to give the molecule enough space while elongating. The system was then minimized for 2,000 steps while keeping all atoms of the protein fixed. Another 2,000 steps were performed keeping only the backbone atoms of the protein fixed. The next 2,000 steps were performed without fixation of any atoms. This was followed by thermalizing the system, wherein the temperature was raised by 1 K every 100 steps up to 310 K. Thereafter, the system was equilibrated for 2 ns (with an integration time step of 1 fs) using the Berendsen method for keeping both temperature and pressure (P = 1 atm) constant 55 . A constant force of 400 pN was then applied to the two terminal C-α atoms of the protein backbone. The force vectors were pointing in opposite directions along the elongation of the water box. During the first ~100 ps of pulling, the protein aligned to the force direction. Hydrogen bonds were analysed using a distance cutoff of 3.51 Å and an angle cutoff of 30.1°. All simulations were performed on 128 nodes of a Cray XT-3 cluster located at the Swiss National Supercomputing Centre (CSCS).
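The quoted hydrogen-bond criterion can be expressed compactly; the Python sketch below applies the 3.51 Å distance and 30.1° angle cutoffs to a single donor-hydrogen-acceptor triple. Interpreting the angle cutoff as the deviation of the donor-H-acceptor angle from linearity is an assumption, as is the toy geometry in the example.

# Sketch of the geometric hydrogen-bond criterion quoted above
# (donor-acceptor distance < 3.51 Å, angle cutoff 30.1°). The angle is
# interpreted here as the deviation of the donor-H-acceptor angle from
# linearity; that interpretation and the toy coordinates are assumptions.
import numpy as np

D_CUTOFF = 3.51      # Å, donor-acceptor distance
ANGLE_CUTOFF = 30.1  # degrees, deviation from a linear bond

def is_hydrogen_bond(donor, hydrogen, acceptor) -> bool:
    donor, hydrogen, acceptor = map(np.asarray, (donor, hydrogen, acceptor))
    if np.linalg.norm(acceptor - donor) >= D_CUTOFF:
        return False
    v1 = donor - hydrogen
    v2 = acceptor - hydrogen
    cos_dha = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    dha_deg = np.degrees(np.arccos(np.clip(cos_dha, -1.0, 1.0)))
    return (180.0 - dha_deg) < ANGLE_CUTOFF

# Toy geometry: a near-linear N-H...O arrangement ~2.9 Å apart -> True
print(is_hydrogen_bond([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.1, 0.0, 2.9]))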
"year": 2010,
"sha1": "8232e7a3b90adaca6d78ad6fef830e9a9abde6db",
"oa_license": "CCBYNCND",
"oa_url": "https://www.nature.com/articles/ncomms1135.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8232e7a3b90adaca6d78ad6fef830e9a9abde6db",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Understanding personality pathology in a clinical sample of youth: study protocol for the longitudinal research project 'APOLO'
Introduction We propose that a dimensional, multilayered perspective is well suited to study maladaptive personality development in youth. Such a perspective can help understand pathways to personality pathology and contribute to its early detection. The research project 'APOLO' (a Dutch language acronym for Adolescents and their Personality Development: a Longitudinal Study) is designed based on McAdams' integrative three-layered model of personality development and assesses the interaction between dispositional traits, characteristic adaptations, the narrative identity and functioning. Methods and analysis APOLO is a longitudinal research project that takes place in two outpatient mental healthcare centres. Participants are youth between 12 and 23 years of age and their parents. Data collection is set up to build a data set for scientific research, as well as to use the data for diagnostic assessment and systematic treatment evaluation of individual patients. Measurements are conducted half-yearly for a period of 3 years and consist of self-report and informant-report questionnaires and a semistructured interview. The included constructs fit the dimensional model of personality development: maladaptive personality traits (dispositional traits), social relations, stressful life events (characteristic adaptations), a turning point (narrative identity) and functioning (eg, achievement of youth-specific milestones). Primary research questions will be analysed using structural equation modelling. Ethics and dissemination The results will contribute to our understanding of (the development of) personality pathology as a complex phenomenon in which both structural personality characteristics as well as unique individual adaptations and experiences play a role. Furthermore, results will give directions for early detection and timely interventions. This study has been approved by the ethical review committee of the Utrecht University Faculty for Social and Behavioural Sciences (FETC17-092). Data distribution will be anonymous and results will be disseminated via communication channels appropriate for diverse audiences. This includes both clinical and scientific conferences, papers published in national and international peer-reviewed journals and (social) media platforms.
INTRODUCTION
Recent developments in the field of personality psychology (ie, scientific research on personality structure) and clinical personality psychology (ie, assessment and treatment of personality disorders) show a gradual shift towards a dimensional and personalised understanding of personality pathology. Among others, this has resulted in a proposal for the Alternative Model of Personality Disorders (AMPD) in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). 1 Furthermore, it has led to an increased focus on developmental trajectories and precursors of personality pathology, and to the recognition of an individual's wishes, motivations, social roles and life story as central to understanding and treating personality pathology, as opposed to solely deviating patterns in cognition, affect, interpersonal functioning and impulse control. 1-3 This is a promising perspective in the search for a valid way to understand pathways of (mal-)adaptive personality development and to recognise personality pathology early in its development. 4 Based on these recent developments, we designed and set up 'APOLO' (a Dutch language acronym for Adolescents and their Personality Development: a Longitudinal Study), a longitudinal two-site research project, along a three-layered integrative model of personality development.
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ This project has a large clinical sample of youth and their parents.
⇒ APOLO (Adolescents and their Personality Development: a Longitudinal Study) has a longitudinal multi-informant, multiconcept and multimethod design.
⇒ Psychometrically sound and age-appropriate measures are used.
⇒ The design allows for between-subject and within-subject comparisons but has no non-clinical control group.
⇒ Attrition is a major challenge that is handled via clinical embedment.
In this study protocol, we use the term personality pathology when referring to pervasive, persistent and pathological personality functioning and high levels of maladaptive personality traits, whereas the term personality disorder refers to a categorical DSM-5-II classification. 1

Personality pathology as a developmental, dimensional and multifaceted construct
Personality as a construct can be described both with respect to how it varies between individuals and with respect to how it is unique for one person. 5 A strong body of research has studied personality development, with pivotal contributions that point to general and specific person and environmental factors, and their continuous interaction, that play a role. 4 6-8 Personality pathology therefore does not appear overnight but can be thought of as the result of a pathway of maladaptive personality development, 9 best described as a process of person-environment transactions in which precursors may be defined. 10 Specifically, person characteristics that make one vulnerable, such as maladaptive personality trait levels (eg, negative affectivity and antagonism), 11 regulation problems (eg, emotion regulation) 12 and/or pathology (eg, internalising and/or externalising symptoms), 4 may interact with environmental characteristics that make one vulnerable, such as negative parent-child relations (eg, insecure attachment and harsh parenting), 13 negative peer relations (eg, bullying) 14 and/or experiencing childhood trauma (eg, neglect and sexual abuse). 15 In early adolescence, these transactions may lead to the onset of more severe problems in self and interpersonal functioning, which generally intensify in mid-adolescence and decline in late adolescence. 4 These functioning problems may fluctuate strongly over time and within individuals; however, the individual stylistic features of these problems are much more stable. 6 As such, maladaptive personality development is a unique, complex and multidimensional process for every person that may lead to one outcome for the individual: pervasive, persistent and pathological problems, or personality pathology. 16 With regard to personality pathology, this means that the classification of personality disorders as distinct categories can essentially be thought of as a simplified reflection of reality. Personality pathology can be described by a combination of maladaptive personality traits and strengths or difficulties in one's functioning. 1 17 18 Accordingly, the AMPD conceptualises personality pathology as one's unique combination of maladaptive traits and facets (criterion B) and one's functioning in the self and interpersonal domain (criterion A 1 ). This gradual shift towards a dimensional perspective ensures an increasingly better understanding of personality pathology as a complex and multidimensional phenomenon, the development of which can be understood through continuous person-environment transactions. 19

Personality pathology as a combination of multiple layers
An integrative theoretical framework that is well suited to study (mal)adaptive personality development is proposed by Dan McAdams. 20 21 This framework has development at its core and conceptualises personality as a multidimensional construct by differentiating three interacting layers.
The first layer, dispositional traits, represents broad dimensions of individual differences, accounting for interindividual consistency and continuity in behaviour, thought and feeling across situations and over time. This layer is conceived of in terms of personality traits, such as those of the five-factor model, which are thought of as heritable and relatively stable. 22 23 The second layer, characteristic adaptations, represents those aspects of human individuality that concern motivational, social-cognitive and developmental adaptations, contextualised in time, place and/or social role. In other words, it captures the way an individual adapts in a unique way in response to the environment he or she lives in. These adaptations are thought of as less stable. 21 23 24 The third layer, narrative identity, constitutes a personal story about one's life that helps shape behaviour and establish identity. Through autobiographical reasoning, a person creates a narrative of how different parts of, and changes in, one's past, present and future are related. 25
APOLO's objectives and relevance
Recently, this model has been used to study personality pathology. 26-29 However, studies are limited in both number and quality, especially in clinical groups, and mainly concern adult participants. The complete model has not been tested in longitudinal studies with (clinical samples of) youth, while this could greatly increase our understanding of pathways of maladaptive personality development and how it relates to current functioning. In addition, longitudinal studies in particular could contribute to early detection of personality pathology, which is essential for improving the prognosis for these vulnerable youths. 30-32 This research project builds on existing research providing first evidence for precursors of personality pathology and extends it by studying maladaptive personality development with this integrative model. This provides the possibility to fill important gaps in the literature by integrating and broadening our understanding of maladaptive personality development and personality pathology, specifically, by adding narratives and by conceptualising functioning as both criterion A and achievement of developmental milestones. We hereby hope to contribute to a valid, personal and nuanced perspective on (the development of) personality pathology in youth. This is a perspective that has great clinical utility for both diagnostic assessment and timely treatment interventions. With the APOLO project, we aim to enhance our knowledge on personality pathology and its development by examining the interplay between the three layers of personality over time.
We do this by taking a multimethod, multi-informant, multiconcept and longitudinal approach in a sample that ranges from early adolescents to early adults to capture the most vulnerable period for the onset of personality pathology. 33 We use the term youth to refer to this sample of both adolescents and early adults.
METHODS AND ANALYSIS
Patient and public involvement
The design of the APOLO research project was co-created by clinicians, experts by experience and researchers. The dimensional and developmentally sensitive design was based on the need for a personal and nuanced approach to personality pathology, a construct that is often clouded by stigma and controversies, especially in youth. The design was discussed with adolescent experts by experience, who were especially positive about this dimensional and personal perspective. This could help reduce the stigma of personality pathology and lay the focus on strengths, vulnerabilities and identity development, while at the same time contributing to young people getting the help they need in time. For this reason, the APOLO project was designed with an explicit dual purpose: (1) to be used to conduct scientific research and (2) to inform the patients' individual clinical trajectory. This study is part of the 'Youthlab' programme in which researchers, clinicians and both clinical and non-clinical youth work together to innovate healthcare processes as well as disseminate results in order to reach the appropriate audience (ie, symposia, infographics, vlogs and website).
Setting
APOLO is a longitudinal two-site research project; its design started in 2017 and data collection started mid-2018. APOLO is planned to run for at least 5 years. The research project is conducted in two mental healthcare institutes in the Netherlands: Reinier van Arkel and Vincent van Gogh. These outpatient facilities provide diagnosis and treatment to individuals with psychological, self-functioning or social functioning problems and specialise in early detection and treatment of severe psychopathology, including personality disorders. The data collection of APOLO is an integral part of the clinical process of diagnostic assessment and systematic treatment evaluation. The project is completely funded by the collaborating institutes, Reinier van Arkel, Vincent van Gogh and Utrecht University.
Participants
The research population of APOLO consists of youths between ages 12 and 24, and their parents, referred for treatment to the participating institutions with varying levels of severity and/or complexity of psychological problems. APOLO is an ongoing research project. Currently (October 2021), our sample (n=431) consists of youths (29% self-identified male) with ages ranging between 12 and 24 (M=19.3, SD=2.3). APOLO does not have strict exclusion criteria; however, inclusion is limited to the specific treatment programmes in which data collection for APOLO is embedded. In these treatment programmes, adolescents and young adults with diverse types of severe psychopathology, including personality pathology, are included and treated. Patients with other primary DSM-5 diagnoses, such as intellectual disability, acute psychotic disorder, severe eating disorder or severe substance dependence, are referred to other treatment programmes.
All adolescents and young adults that are at the start of their treatment are asked to participate. In the rare case that an adolescent is included but does not fit the research population due to a wrong referral, he or she will be excluded from follow-up assessments and reallocated to another team or institute for suitable treatment.
Procedure
After youth are referred to one of the two specialised mental healthcare institutes and invited for an intake in a team in which data collection for APOLO takes place, they, as well as their parents, receive an email with a link to fill out questionnaires online at home. This assessment is used for treatment indication as part of the diagnostic process at intake and is therefore 'care as usual'. The assessment at intake consists of a total of 11 self-report questionnaires for youths (duration 45-60 min) and a total of six questionnaires for one of the parents (duration 15 min). Youths and parents have access to the questionnaires 3 weeks prior to and after their intake appointment. Failing to fill out the questionnaires within this period results in the data for that wave being registered as missing.
Along with the invitation for their intake appointments (consisting of one appointment for intake and one for feedback and consultation, with usually 3 weeks in between), youths and their parents receive an invitation to participate in APOLO. The invitation letter contains an information folder, directions to the website 34 and an informed consent form. Youths and parents are asked to give their written informed consent for using their data anonymously for scientific research. They are also informed that they can revoke their participation at any time without any consequences and will continue to receive treatment as usual. They are asked to bring the signed consent form to the intake. All therapists conducting intakes are informed of the background and practicalities of APOLO and are trained in conducting the semistructured interview that is part of the assessment. During the intake, participants are again informed of the research project and given the opportunity to ask questions; informed consent is (signed and) handed in, and a Turning Point Interview (TPI) (approximately 5 min) is conducted and recorded on a tablet. Participants who have not yet filled out the questionnaires are given the opportunity to do so in a computer room at the institute.
Follow-up assessments are conducted every 6 months (counted from the date of intake) over a course of 3 years, resulting in a maximum of six waves. Participants receive the same measures (or a shortened test battery; see online supplemental appendix 1): the questionnaires online and the semistructured interview via a face-to-face or telephone appointment. Participants have access to these questionnaires 2 weeks prior to and after the intended assessment date. Since dropout is a known issue in longitudinal research, and even more so in a clinical setting, the research team makes a great effort in monitoring follow-up assessments and notifying participants (first by e-mail, then if needed by phone) when their next assessment is approaching. Furthermore, to ensure participation and prevent dropout, the assessments are consistently used in the clinical process: for treatment indication at intake, as a screening tool for diagnostic assessments and for systematic treatment evaluation. Additionally, after each wave, whether or not they are still in treatment, participants are invited for a free appointment with a therapist involved with the research project in which extensive individual feedback is provided about the outcomes.
Measures
The measured variables are based on the theoretical model of personality development by McAdams and Pals 20 (see figure 1). Assessment differs slightly between settings (see online supplemental appendix 1). Cronbach's alphas were calculated for each measure with data from our current sample, except where this was not applicable (Relationship Questionnaire (RQ), Turning Point Questionnaire (TPQ)/TPI and Life Events Questionnaire (LEQ)) or where data were insufficient (Confusion, Hubbub and Order Scale (CHAOS) and Strengths and Difficulties Questionnaire (SDQ)). In the latter case, Cronbach's alphas from studies with a similar sample are reported. Sample sizes that could be used to calculate Cronbach's alpha differed for each measure due to missing data, differences in the test battery between waves and attrition.
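For reference, the Cronbach's alpha values reported throughout this section follow the standard formula alpha = k/(k-1) x (1 - sum of item variances / variance of the summed scale). A minimal Python sketch with simulated item responses (purely illustrative, not study data):

# Minimal sketch of the Cronbach's alpha computation used throughout this
# section. The simulated response matrix below is illustrative only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=200)                                   # shared latent trait
items = trait[:, None] + rng.normal(scale=1.0, size=(200, 5))  # 5 noisy indicators
print(f"alpha = {cronbach_alpha(items):.2f}")                  # ~0.8 for this setup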
Dispositional traits: Personality Inventory for DSM-5 (PID-5)
The Personality Inventory for DSM-5-Short Form (PID-5-SF) 35 is a shortened version of the original 220-item PID-5. 36 The PID-5 is a self-report questionnaire that measures five higher order maladaptive trait domains: Negative Affectivity, Detachment, Antagonism, Disinhibition and Psychoticism, along 25 trait facets. 36 The PID-5 has been translated into Dutch according to international standards under supervision of the Dutch association for psychiatry, with backward translation by the original authors to maintain equivalence. 37 The PID-5-SF (of which all the items are contained in the original form) measures the same five trait domains and 25 facets with 100 items on a 5-point Likert scale ranging from 'completely not true' to 'completely true'. This version was validated for use with adults 35 38 and adolescents. An overview of its psychometric properties with adolescents can be found in Koster and colleagues. 39 Every trait domain consists of the three most distinctive facets with 12 items in total, and in our sample (n=416), Cronbach's alphas ranged from 0.82 to 0.90. The 25-item Personality Inventory for DSM-5-Brief Form (PID-5-BF), 40 also used in this study (see online supplemental appendix 1), is again a shortened version of the original questionnaire that measures the five trait domains with 25 items. The PID-5-BF has been shown to reliably and validly assess the DSM-5 traits in European adolescents and adults. 38 41 Every trait domain consists of five items, and in our sample (n=101), Cronbach's alphas ranged from 0.68 to 0.81. Due to differences between the items included in the PID-5-SF and PID-5-BF, participants in some cases (see online supplemental appendix 1) receive the PID-5-SF and an additional nine items of the PID-5-BF (items 1, 4, 5, 6, 7, 8, 16, 18 and 23) in order to cover all items. This allows the PID-5-BF items to be derived from the PID-5-SF administration afterwards (see the sketch below). Parents receive the informant version, the PID-5-IBF. Every trait domain consists of five items, and in our sample (n=187), Cronbach's alphas ranged from 0.65 to 0.82.
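A minimal sketch of how this reassembly might look in practice is given below; the column names and the SF-to-BF item mapping are hypothetical placeholders, not the actual scoring key.

# Hypothetical sketch: reassemble a complete PID-5-BF item set for
# participants who completed the PID-5-SF plus the nine extra BF items.
# Column names and the SF_TO_BF mapping are placeholders only.
import pandas as pd

EXTRA_BF_ITEMS = [1, 4, 5, 6, 7, 8, 16, 18, 23]  # administered in addition to the SF

# Placeholder mapping from SF columns to the remaining BF items (truncated):
SF_TO_BF = {"sf_item_10": "bf_item_2", "sf_item_22": "bf_item_3"}

def assemble_bf(sf: pd.DataFrame, extra: pd.DataFrame) -> pd.DataFrame:
    """Combine BF items embedded in the SF with the separately administered ones."""
    from_sf = sf[list(SF_TO_BF)].rename(columns=SF_TO_BF)
    from_extra = extra[[f"bf_item_{i}" for i in EXTRA_BF_ITEMS]]
    return pd.concat([from_sf, from_extra], axis=1)

# Tiny demo with fabricated responses:
sf = pd.DataFrame({"sf_item_10": [1, 2], "sf_item_22": [0, 3]})
extra = pd.DataFrame({f"bf_item_{i}": [1, 2] for i in EXTRA_BF_ITEMS})
print(assemble_bf(sf, extra))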
Characteristic adaptations: RQ
The RQ 42 is a five-item self-report measure that consists of four paragraphs describing Secure, Preoccupied, Fearful and Dismissing attachment styles. Respondents are asked to first indicate which attachment style best describes them and second to rate the degree to which the four descriptions characterise them using a 7-point Likert scale, ranging from 'not at all like me' to 'very much like me'. The RQ has been shown to have reasonable validity and stability in use with young adults and undergraduates. 43 44 Results correlate moderately with attachment styles determined by interview. 42 The RQ provides a rapid assessment of attachment quality and has been used with adolescents. 45
Open access
Characteristic adaptations: Inventory of Interpersonal Problems-32 (IIP-32)
The IIP-32 48 is a 32-item self-report questionnaire measuring interpersonal difficulties. All items are rated on a 5-point Likert scale ranging from 'not at all' to 'extremely'. The measure yields a score on two underlying dimensions, Affiliation and Dominance, as well as scores on eight subscales: Domineering/controlling, Vindictive/self-centred, Cold/distant, Socially inhibited, Nonassertive, Overly accommodating, Self-sacrificing and Intrusive/needy. As found in previous research, the IIP-32 has satisfactory reliability and validity 49 and has been reliably administered to adolescent populations. 50 51 In this research project, we use the Dutch language version. 52 The subscales each consist of four items, and in our sample (n=426), Cronbach's alphas ranged from 0.63 to 0.81; Cronbach's alpha for the total scale was 0.87.
Characteristic adaptations: Network of Relationships Inventory-Behavioural Systems Version (NRI-BSV)
The NRI-BSV 53 is a 24-item self-report questionnaire that measures how frequently different relationships are used to fulfil the functions of three behavioural systems: attachment, caregiving and affiliation. Items are answered on a 5-point Likert scale ranging from '(almost) never' to '(almost) always'. In previous research, the NRI-BSV has been found to have adequate psychometric properties 53 and excellent reliability. 54 We use an 11-item version of the NRI-BSV with which the two broad domains Support and Negative Interactions can be constructed, in which participants rate their relationship with one parent of choice and a relationship with one other important person. 53 The NRI-BSV was translated into Dutch by Van Aken and Hessels. 55 The Support subscale consists of five items (n=432, α=0.79, for both parent relationship and other relationship), and the Negative Interactions subscale consists of six items (n=432, α=0.93, for parent relationship and α=0.88 for other relationship). Parents receive the informant version, in which they rate the relationship with their child. The support subscale consists of five items (n=176, α=0.61), and the negative interaction subscale consists of six items (n=176, α=0.91).
Narrative identity: TPQ and TPI
The TPQ is a qualitative measure designed as an infographic (see online supplemental appendix 2 for the infographic). The TPQ is constructed as part of the theoretical framework of McAdams' 56 life story model of identity, which posits that one's identity is demonstrated through the construction of a life story. Facets of one's identity may be identified by analysing how individuals narrate significant life experiences like turning points. 57 58 Turning points are specific events that are perceived to alter the normal flow and direction of one's life. 59 The TPQ asks participants if they have ever experienced a life event that they might call a turning point or, if not, to pick an event that resembles a turning point. They are asked to briefly describe this event, to indicate whether they derived a lesson from this event (on a 7-point Likert scale ranging from 'not at all' to 'very much') and to indicate whether they have discussed this event with a parent/caretaker. Parents receive an informant version of the TPQ at the first wave, along with the same infographic describing what a turning point is. In this informant version, they are asked if they think their child has experienced a turning point and to briefly describe this event.
Subsequently, the TPQ is expanded with a short, semistructured interview that is conducted by trained clinicians and recorded: the TPI. Participants are asked to narrate about this turning point and, with three follow-up questions, are asked specific details about how this event has influenced them. These questions are: 'What did you feel, think and want during this event?', 'Why is this an important event in your life story?' and 'Does this event say something about who you are now or how you see yourself in the future?' The narratives are transcribed and coded for theme, valence, meaning making, agency, communion and coherence. 58 60-62

Stressful life events: CHAOS
CHAOS 63 is a questionnaire that measures the quality of the youths' home environment. The questionnaire is built on the premise that youth function and develop more adaptively in home environments with more order and less confusion and hubbub. In previous research, the CHAOS has been found to have satisfactory internal consistency (α=0.79), test-retest stability, as well as validity. 63 The Dutch adaptation of the CHAOS 64 used in the current research project consists of 17 items that are rated on a 5-point Likert scale ranging from 'not at all true' to 'completely true'. Only participants' parents receive this measure.
Stressful life events: LEQ
The LEQ is a self-report measure constructed out of three existing questionnaires, which were combined to fit the purpose of this research project. The Life Experiences Survey 65 was used for its structure, in which both the occurrence and the impact of specific life events are assessed. Within this structure, questions of the Childhood Trauma Questionnaire 66 67 and the Levensgebeurtenissen Vragenlijst (a Dutch life events survey) 68 were combined. The LEQ used in this research project consists of 12 items that cover stressful life events in the family, personal experiences and bullying, and one open item that asks the participant for any stressful event not covered by the preceding items. The 12 questions consist of two parts: first, the adolescent is asked to indicate whether (yes or no) he/she has experienced the event during his/her lifetime and, second, to indicate how much (on a 4-point Likert scale ranging from +1, 'positively', to −3, 'very negatively') this event impacted his/her life. In all follow-up waves, participants are asked whether they have experienced the events since the last wave.
Functioning and symptoms: Symptom Questionnaire-48 (SQ-48) and SDQ
Within the domain of functioning, two questionnaires are used to assess symptoms (see online supplemental appendix 1 for details). The SQ-48 69 is a self-report questionnaire measuring psychological distress with nine subdomains: depression (six items), anxiety (six items), somatisation (seven items), agoraphobia (four items), aggression (four items), cognitive problems (five items), social phobia (five items), work functioning (five items) and vitality (six items). All items are rated on a 5-point Likert scale ranging from 'never' to 'very often'. The SQ-48 has good internal consistency as well as good convergent and divergent validity. 69 An additional study showed that the SQ-48 has excellent test-retest reliability and good responsiveness to therapeutic change. 70 In our sample (n=389), Cronbach's alphas ranged from 0.74 to 0.92 for the subscales; alpha for the total scale was 0.94.
The SDQ 71 72 is a 25-item questionnaire that measures psychopathological symptoms in children and adolescents with five subdomains, containing five items each: emotional symptoms, conduct problems, hyperactivity-inattention, peer relationship problems and prosocial behaviours. All items are rated on a 3-point Likert scale ranging from 'not true' to 'certainly true'. In APOLO, the Dutch translation of the SDQ is used, which has been found to have good concurrent validity. 73 74 For the self-report version, Cronbach's alphas in a study using a similar sample ranged from 0.45 to 0.72 for the subscales, and the alpha was 0.78 for the total scale. For the parent version, Cronbach's alphas ranged from 0.55 to 0.78 for the subscales, and the alpha was 0.80 for the total scale. 73

Functioning: Developmental Milestones List (DML)
Achievement of youth-specific milestones was assessed using a newly developed measure: the DML. 75 The DML is a 28-item questionnaire including tasks and activities reflective of youth-specific developmental milestones. The first 21 items of this list ask, on a 7-point Likert scale, to what extent the participant experiences trouble in the achievement of youth-specific milestones. These items combine into a total scale. The specific milestones may be divided into three broader domains based on previous work on youth-specific milestones 76 : social (eg, relationships with peers), personal (eg, autonomy) and professional (eg, school/work). The last seven items of this list were included specifically for (our) clinical populations, providing an indication, on a 4-point Likert scale, of clinical severity that may hamper the achievement of milestones (eg, problems in accepting help, automutilation and drug abuse). In our sample (n=426), Cronbach's alpha for the total scale was 0.78. Parents receive an informant version of the DML. In our sample (n=179), Cronbach's alpha for all items was 0.88.
Functioning: Level of Personality Functioning Scale-Brief Form (LPFS-BF)
The LPFS-BF 77 was developed as an easy-to-use tool to self-assess whether particular problems were likely related to personality dysfunction. It is a measure of self-functioning and interpersonal functioning, as an operationalisation of global personality functioning. 78 The LPFS-BF consists of 12 questions which are clustered into four subscales (identity, self-direction, empathy and intimacy). These subscales are clustered into two higher domains, self-functioning and interpersonal functioning. Participants respond to these questions on a 4-point Likert scale ranging from 'not at all true or often untrue' to 'often true or completely true'. In our sample (n=421), Cronbach's alpha was 0.74 for the self-functioning subscale, 0.71 for the interpersonal functioning subscale and 0.79 for the total scale.
Functioning: Satisfaction With Life Scale (SWLS)
The SWLS 79 contains five items to measure global judgments of satisfaction with one's life. We use the Dutch translation of the SWLS. 80 Items are scored on a 7-point Likert scale (1=strongly disagree, 7=strongly agree). The five items are summed. In our sample (n=424), Cronbach's alpha for the total scale was 0.80.
Research questions, power calculation and data handling
This project has the overarching aim to examine the interplay between the three layers of personality development, as proposed by McAdams and colleagues, in a clinical sample of youth and how this interplay is related to (personality) functioning. Specifically, the two primary research questions are as follows: (1) is there evidence for unique or distinctive (group) patterns in which characteristics from McAdams' layered model of personality development are related in a clinical sample of youth? and (2) how are distinctive patterns related to trajectories of change in functioning? Characteristics of McAdams' model are operationalised as maladaptive personality traits (dispositional traits, layer 1), attachment, interpersonal style, social network, experienced life events (characteristic adaptations, layer 2) and turning point narratives (narrative identity, layer 3). Functioning is operationalised as the achievement of developmental milestones, self- and interpersonal functioning, satisfaction with life and psychopathological symptoms. Characteristics in the first two layers of McAdams' model have often been identified as precursors of personality pathology in previous studies. Distinctive group patterns in how these characteristics transact as a symphonic structure will be explored cross-sectionally using Latent Class Modelling in Latent Gold. 81 Testing cross-level and longitudinal associations in the three layers and functioning will be done using structural equation modelling (SEM) in MPlus. Due to the large number of constructs in the complete model, specific associations between different layers will be tested separately to ensure adequate power and avoid the problem of multiple testing. 82 For example, one study will focus on whether and how the predictive association between maladaptive personality traits (layer 1) and agency and communion in narratives (layer 3) is moderated or mediated by interpersonal style (layer 2). Power was considered for these primary research questions, and based on both simulations and rules of thumb of the power needed to analyse complex SEM models with multiple variables and missing data, a sample size of >300 complete cases should be adequate. 83 84 To analyse latent classes, considering the assumed class separation, effect size and complexity of the data, a sample size of >500 is suggested. 85 86 In the case of data difficulties like measurement non-invariance or differential item functioning, which may be likely in a clinical data set with multiple variables, this technique is also suitable. 87 For our primary research questions, we hypothesise that there will be distinctive group patterns that may point to individuals with more or less pronounced vulnerability profiles. We expect that a more vulnerable profile will be associated with a less adaptive developmental course in terms of personality functioning. However, meaning making (reflected by narrative identity, layer 3) may play a moderating or mediating role.
Secondary research questions will address concurrent and longitudinal associations in McAdams' model piece by piece: between precursors, the social network, the narrative identity and, specifically, criteria A and B of the AMPD. For example, one study will focus on the association between self-event connections (layer 3) and personality functioning over time, controlling for negative affectivity (layer 1) in a regression model. Another study will focus on transactions between maladaptive personality traits (layer 1) and the social network (layer 2) using a random-intercept cross-lagged panel model. A collaboration was set up with the data laboratory of Utrecht University to store the data that are collected at all locations. 88 This ensures reliable and secure data management while data collection is ongoing.
ETHICS AND DISSEMINATION
APOLO combines a longitudinal scientific study and clinical implementation of a multilayered dimensional model of maladaptive personality development in an outpatient clinical adolescent sample. APOLO measures several constructs according to a three-layered model of personality development, taking a multimethod, multiconcept and multi-informant approach. The data collection and handling are set up in such a way that they (1) provide the opportunity to study important scientific questions concerning pathways of maladaptive personality development and (2) inform the individual clinical process, providing patients with a direct benefit of completing the measures. As such, this project is inevitably faced with challenges, of which attrition and the balance between ensuring an anonymous and scientifically sound longitudinal data set while also making appropriate use of the data for individual clinical trajectories are the most prominent. The embedding of this project in the clinical structure is therefore an essential but also unique feature on which a lot of effort and time are spent. Cooperation between the different clinical sites is a challenge that is approached flexibly to ensure clinical embedment and to prevent attrition, resulting in slight differences in the number and type of instruments included across sites.
Furthermore, recruitment of all youths referred to the involved institutes reduces the occurrence of selection bias of participants as well as increases the generalisability of findings to the clinical adolescent population. In addition, the inclusion of narrative identity allows for a unique and in-depth understanding of how (mal)adaptive personality development 'colours' one's subjective experience and meaning making.
The planned dissemination is twofold: first, for the scientific field, the output of this research project will enhance our understanding of maladaptive personality development as a complex phenomenon in which both structural personal characteristics as well as unique individual experiences play an important role. These results will be presented at congresses and published in international peer-reviewed journals, along with proposed directions for future studies. Second, for the clinical field, the results will be made available to clinicians in newsletters and national journals, used to inform workshops and trainings and, for clinicians, other professionals and youth alike, integrated in infographics, fact sheets and social media posts to provide information about maladaptive personality development and inform early detection and timely interventions.
"year": 2022,
"sha1": "3d0f0e80e30875d559f0babc9d016484358c295e",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/12/6/e054485.full.pdf",
"oa_status": "GOLD",
"pdf_src": "BMJ",
"pdf_hash": "594fadea71fefd826ff328f845354f24c7c4555d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Beyond Fighting Fires and Chasing Tails? Chronic Illness Care Plans in Ontario, Canada
PURPOSE Recent work has conceptualized new models for the primary care management of patients with chronic illness. This study investigated the experience of family physicians and patients with a chronic illness management initiative that involved the joint formulation of comprehensive individual patient care plans. METHODS A qualitative evaluation, framed by phenomenology, immediately followed a randomized controlled trial examining the effect of external facilitators in enhancing the delivery of chronic condition care planning in primary care. The study, set in Ontario family practices, used semistructured in-depth interviews with a purposive sample of 13 family physicians, 20 patients, and all 3 study facilitators. Analysis used independent transcript review and constant comparative methods. RESULTS Despite the intervention being grounded in patient-centered principles, family physicians generally viewed chronic illness management from a predominantly biomedical perspective. Only a few enthusiasts viewed systematic care planning as a new approach to managing patients with chronic illness. Most family physicians found the strategy to be difficult to implement within existing organizational and financial constraints. For these participants, care planning conflicted with preexisting concepts of their role and of their patients' abilities to become partners in care. The few patients who noticed the process spoke favorably about their experience. CONCLUSIONS Although the experiences of the enthusiastic family physicians were encouraging, we found important individual-level barriers to chronic illness management in primary care. These issues seemed to transcend existing organizational and resource constraints.
INTRODUCTION
International health care planners are becoming increasingly concerned with the morbidity and mortality associated with chronic illnesses. 1,2 Although the burden of chronic illness is borne throughout health care systems, it has particular relevance for primary care. 1,3 Features inherent to primary care-continuity, coordination, and comprehensiveness-are well suited to care of chronic illness, 4 but many primary care organizations struggle to deliver high-quality care to patients with complex chronic disease(s). 5,6 Recent research suggests that outcomes may improve if primary care for the chronically ill incorporates enhanced systems for clinical information, evidence-based practice, health system integration, and patient self-management. [7][8][9] Many of these dimensions have been incorporated into Wagner and colleagues' Chronic Care Model. 10 The World Health Organization has extended the model to a national health policy framework. 1 Mindful of the challenges of implementing such changes across a health care system, some countries, in particular Australia, have oriented primary care chronic disease interventions toward individual practitioners and their patients. 11
Despite the wide promotion of these models, surprisingly little is known about the impact of these strategies on primary care providers or their practices.
This study was designed to evaluate the impact of a holistic, patient-centered, and pragmatic approach to improve the management of chronic disease in the setting of Canadian family practice. The model, called Chronic Illness Care Management (CICM), had 5 essential components (Table 1). Using a structured, written care plan, the CICM was to be delivered to competent patients older than 50 years who had multiple chronic diseases.
The CICM was introduced to family physicians during a series of visits with an experienced outreach facilitator as part of a randomized controlled trial (RCT). 12 Facilitators were registered nurses with master's degrees in administration or adult education. They visited practices approximately once a month during the course of the study to facilitate practice-organization-level change, first for changes related to prevention and screening practices, then for the CICM phase. The time lines of the study are displayed in Figure 1.
In this article we report findings from a qualitative evaluation nested within the RCT. Our aim was to understand the experience of family physicians and patients regarding the care-planning intervention.
METHODS
This qualitative study involved in-depth interviews with family doctors, patients, and facilitators in the Ottawa and Hamilton/Wentworth areas of Ontario, Canada. The study used a phenomenological approach to data collection and analysis in an effort to understand the lived experience of study participants. 13
Physician Participants
All participating family physicians worked in either Primary Care Networks or Family Health Networks. Both practice types are part of the province's reorganization of primary health care, characterized by patient rosters and blended payment mechanisms with incentives for providing preventive services. 14 Participating doctors were chosen from family physicians participating in the RCT investigating whether the CICM care plans, implemented with tailored outreach facilitation, improved the quality of life and quality of care of the most complex, chronically ill patients. All physician participants had already participated in a before-and-after study of a facilitator-led intervention designed to enhance the delivery of preventive care. 15
Recruitment for Interviews
We sought to gather a purposeful sample of physician participants with varying facilitator-perceived engagement with CICM. In the closing stages of the RCT, the 24 participating physicians were invited by facilitators to participate in a qualitative evaluation of the study. Of the 21 who accepted the invitation, a process of purposive sampling was used to identify potential participants. Sampling was primarily based on facilitator perceptions of physician satisfaction and engagement with CICM. Facilitators first identified those physicians who were thought to have embraced the care-planning process. Six participating physicians were perceived to have met this description, 2 with each facilitator. The balance of the sample comprised physicians who were perceived as having neutral or negative views about the process. The 15 physicians approached to participate in the interviews showed maximum variation in terms of sex, practice location (rural or urban), and practice size (solo or group). Thirteen of the 15 physicians who were approached agreed to be interviewed. One refused, citing lack of time. A second consented but was unable to arrange an interview time.

Table 1. Chronic Illness Care Management (CICM): Model and Care Plan Components
The CICM was framed as a patient-centered model for primary care management of persons with multiple chronic illnesses. This was to be accomplished through an evaluation of a patient's care requirements via a written care plan prepared collaboratively between a patient and the patient's family physician. Patient health goals and concerns were to be elicited, and 5 components reviewed:
1. Medication review
2. Education and self-care
3. Psychological and social assessment
4. Community integration and social support
5. Prevention
Through this process, patients and physicians could then set mutual goals, with plans for follow-up in planned, scheduled visits. Physicians were compensated $300 for the completion of a care plan.
Through prior consent, the research team had access to the names and contact information of patients participating in the RCT. To recruit patients for interviews, consenting physicians were provided a list of their CICM intervention group patients. They identified patients who had completed CICM care planning, whom they thought were able to provide useful insights, and who were capable of participating in an interview in the near future (eg, not currently hospitalized). Identified patients were then telephoned to discern their availability and willingness to participate in an interview. All but 1 of the 21 patients approached agreed. Three chose to be interviewed with a spouse or child present.
All 3 study facilitators were also invited to participate in 2 interviews: 1 at the onset of data collection, the second after early analysis of physician and patient interviews. Physicians and patients were compensated for any participation costs.
Data Collection
Data collection involved in-depth, face-to-face individual interviews. Physicians were interviewed in their offices, and patients were interviewed in their homes or their physicians' office. Interviews were designed to explore the participant's experience with CICM. Initial interviews followed a written interview guide (available in the online-only Supplemental Appendix at http://www.annfammed.org/cgi/content/full/6/2/146/DC1) based on themes identified from a literature review and interviews with the RCT project team. Question sequencing was flexible to allow participants' responses to guide the discussion. The guide was modified progressively in keeping with iterative processes of data collection and analysis, allowing insights from early interviews to inform topics discussed in subsequent interviews.
Interviews with physicians and patients were conducted after the facilitation intervention phase of the RCT, between December 2005 and April 2006. A research associate (P.T.) with a clinical background in physical therapy conducted all but 2 physician and 2 patient interviews, which were conducted by an academic family physician (G.R.). Both interviewed 1 patient together. Patient, physician, and facilitator interviews ranged between 30 and 60 minutes. Interviews continued until the interviewer obtained a clear picture of the participant's experience. Interviews were recorded, transcribed verbatim, and reviewed for accuracy before data analysis. Participants received a short summary of the interview for their review, as well as an accompanying invitation to make comments, corrections, or clarifi cations.
Textual data provided additional information. A number of documents used as part of the study procedures were reviewed. These documents included the facilitator training manual, minutes of meetings of the study team, and the interim and final reports to the funder. These data provided background and context to understanding the design and implementation experiences of the larger study. Facilitator narratives (written by facilitators following each practice visit) and field notes of the interviewers (written immediately following an interview) were also reviewed in depth (see below).
Data Organization and Analysis
Immersion-crystallization framed the analysis. 16 Data organization began with incorporation of the transcripts, field notes, and facilitator narrative data into textual files within NVivo 2.0 (QSR International, Australia). Study documents and facilitator narratives provided context for the interviews and allowed for examination of consistency of physicians' perspectives, whereas the field notes written by interviewers captured immediate impressions of the tone of the interview and provided some insight into interpretation.
Coding was completed by 2 authors (P.T. and G.R.) in 2 stages. First, they read interview transcripts independently to identify key concepts and themes. They then coded transcripts, field notes, and narrative data by a series of major headings (eg, experiences with care planning). These headings had been previously determined from the research questions, interview guides, and existing literature. In the second stage, after a closer review of written node reports on key emergent areas, they identified secondary-level codes from the data. Weekly meetings were held to discuss emergent themes, patterns, and connections within and across transcripts.
The process of analysis was designed to allow us to refute or clarify interpretations through consensus and ongoing reference to the data. Two potential frameworks for understanding the data were discussed at length. The first framework considered the physician participants in terms of whether they had understood the principles of care planning and whether they seemed to have implemented practice change. Reflection and further analyses suggested that the framework described below was best able to represent the data faithfully. Further iterative reflections on coding summaries and on facilitator narratives confirmed these emergent typologies. Theme saturation was reached after the 11th physician and the 14th patient interview. The remaining interviews allowed for identifying cases that confirmed and disconfirmed the themes which emerged. The study was approved by the Ottawa Hospital Research Ethics Board.
RESULTS
The 13 family physician participants varied by sex (11 male, 2 female), clinical experience (from 9 to 35 years in clinical practice), and facilitator-perceived success with implementing CICM (6 were characterized as very successful with CICM). Six worked in rural or semi-urban areas, and 9 worked in solo sites. Seven used electronic medical records, and 8 had nurses or nurse-practitioners working in their clinic. Six patients interviewed were male, 14 were female, and they ranged in age from 50 to 90 years. Duration of patient-physician relationships ranged between 1 and 30 years.
The study revealed multiple layers of experiences with care planning among the physician participants. Few participating physicians could articulate with ease the underlying concepts of chronic illness care planning. Individual care planning seemed time-consuming and conflicted with many practitioners' perceptions of their role and of their patients' capacities to be partners in care. The patient-centered principles of the intervention seemed inconsistent with many physicians' biomedical models of chronic disease management. The few patients who noticed the process spoke favorably about their experience.
Conceptualizing Care Planning
Physician participants viewed the CICM approach to planning care as having up to 3 components: systematic chronic illness management; involvement of patients in planning their care; and a broader, more holistic approach to care.
It was clear that most physicians conceptualized the CICM care plan as a systematic, rather than a piecemeal, approach to planning patient care: I've never done … care plans for patients. Like you just really do little bits and pieces of it, and it's useful to sort of think in terms of the whole thing ... it's good for the patient because it sort of puts everything together in one place and sort of allows you to see if you're doing what you said you're going to do (Doctor [D] 4).* Although several welcomed the fact that the care plan "… helped organize my thinking ... it makes you look further and dig a little deeper" (D11), most viewed the process as a framework to ensure the completion of a schedule or series of biomedical clinical tasks. One physician said explicitly: "I think it was more, either myself or the nurse, covering the preventive issues and encouraging them to participate in whatever the preventive measures would be" (D7). Many found it difficult to conceptualize patient problems beyond biomedical disease terms (eg, hypertension, congestive heart failure).
* All quotations have been edited slightly to improve readability.
The explicit incorporation of patient needs was not apparent to all physician participants, but several valued the opportunity of being able to integrate the worlds of the patient and the provider.
Probably the biggest difference for me was actually paying a little bit more attention to, in a formal way, to the patient's desires and requests. I think I am, have been, fairly open to what the needs were, but this particular study made me … stop and look at more than just the medical things, which is what I focused on, mostly. I mean, I'm a rural or country practice, where I know these people and their families quite, quite well; but some of their social needs and other things were in the questionnaire that we had to go over with them, which I probably hadn't addressed as much before, at least not formally (D1).
Several physicians viewed the care-planning process as opening the door to more comprehensive and holistic care: Even though it's not a pill I'm giving her, it's something that's making her healthier because, you know, it's a, it's a change in her social condition, so … she got subsidized better accommodation, she got more money to buy food and will get some subsidies for her orthotics and shoes" (D4).
The Enthusiasts
It was clear that some of the participating physicians embraced CICM. Facilitators described these participants as being open to the model from the outset, positive about its implementation, and able to provide constructive solutions for the future. They came from varied practices and both rural and urban communities. These enthusiasts described core concepts of CICM with ease and provided rich descriptions of their process and experiences of using the care plan tool as an adjunct to enhancing chronic illness management.
Enthusiasts acknowledged the contribution of the facilitators in highlighting a need for a change. Speaking of the facilitator's impact, one older physician suggested that "what she did is actually turn my whole thinking around … (from) acute episodic to preventative" (D11). Enthusiasts' views of their roles changed in a positive way. One male physician who worked in a solo site of a group practice remarked, "In some ways (it was) more satisfying to me anyway, like I'm more of a manager than sort of just putting out fires. I mean I keep putting out fires and they keep starting" (D4).
The process caused them to reflect upon the deficiencies of their usual model of care: I'm not actually doing anything proactive to try and help them out, but you're doing reactive, reactive, reactive, and … always chasing your tail. Um, the thing I liked about this was it was more proactive. (I was able to ask) "What can we do to keep you out of trouble?" (D12).
They spoke of surprising new insights into patients with whom they had long-standing relationships: I tend to assume things about patients … [that] I know everything about them that I need to know medically, and that's not true. You've got to be careful about that and that's very humbling (D13).
One rural family physician related a patient's response when he asked her to describe her biggest health challenge: I thought, her biggest challenge was all sorts of pain issues … she's diabetic, has terrible rheumatoid arthritis,… (but) she says, "No it's being able to get my groceries and cook my dinners and stay in my house. That's my biggest challenge … I don't care about the pain, all those other issues, I mean, they're only concerning as far as they affect my function." … She was basically able to say, "all those other things only matter in so far as helping me stay in my house … that's my only goal." And so for me that was … an eye opener. It was, Okay it's not about me dealing with your symptoms, it's me dealing with your end goal, and so it became less focused on medications, more focused on getting her a scooter, so she could scooter up to the grocery store-for 4 or 5 months of the year she is totally independent.… I hadn't thought of that. My approach was no longer, you know, symptom, treatment; symptom, treatment; symptom, treatment. When she starting complaining about these things,… (I now think) "Okay, what do I do for this so that it doesn't get out of control and stop her from being at home?" (D12).
The Unenthused
In contrast, many physician participants were either dismissive of the CICM approach to care planning or overwhelmed by its demands. They spoke at length about the challenges and barriers to planning care in general and about using the CICM care plan format more specifically. Their descriptions of the process of care planning with patients were cursory and lacking in color.
'This is Not How Doctors and Patients Work Together'
Several physicians told us unequivocally that what CICM asked of them was "not my role": "I think somebody else could have done it just as well.… It's not my training … it was interesting, but I think obviously somebody else could have done better" (D6); and "… typically a nurse would be able to go through it. I don't think that a family physician would be able to fill one of those [care plans] out" (D5).
Some noted that the patient's role of involvement in the care plan was unrealistic, believing that their patients were not capable of engaging in the sorts of decisions essential to collaborative care planning: "I don't think those are the highly sophisticated bunch of folk. ... None of them … really could participate in that shared model of care" (D2).
'This is Not Different From What I Already Do'
Planning and scheduling care were pivotal to the care plan. Although the principle of scheduling care was not challenged, some participants remarked they already provided planned follow-up with these patients, sometimes obliquely through the expiration of prescriptions for long-term medications. When asked about his perception of CICM's proactive model of care, a more openly resistant physician told us: Family physicians like to think that we do this all the time. We don't, perhaps, sit down in an hour, and decide, you know, "We're going to look at your chest pain, we could look at psychosocial issues." Sometimes we do it if we've got time … (D2).
Many of the resistant physicians seemed more comfortable with the explicitly biomedical components of the care plan (medication reviews and prevention activities), and gave less priority to psychological and social issues.
Even though several enthusiasts modified the tool and discovered new insights into their long-term patients, the unenthusiastic physicians found the tool to be inflexible and, at times, superfluous to their needs: It was kind of unusual because a lot of these people I've known for a long, long time … a lot of it's written down on in the charts, so it was sort of, it felt a little redundant to go through this with them. Kind of awkward, actually … (D9).
'Our Biggest Problem Was Resource Constraints'
For many physician participants the CICM was either financially or organizationally impractical. Notwithstanding the reimbursement associated with completion of care plans, it was difficult for many to schedule 30 minutes for the first CICM planning visit. Many physicians saw allied health professionals, both within and outside family practices, as necessary to making CICM work in the future: "Solo GP doctors, I don't think, can manage this without those supports" (D7). A number remarked on what they believed to be a lack of community-based resources to complement patient care.
Patients' Perspectives
Whereas most patients barely noticed any change with CICM, some recalled experiencing a different approach to care with their physician, especially during the initial planning visits. Of those noting a difference in their care, all but 1 was paired with an 'enthusiastic' physician.
One gentleman with osteoporosis described his first structured visit: It was really impressive. ... It wasn't just a quick little visit amidst a busy day. It was something different. It was something very dedicated, very planned. And it was nice to be able to talk on something which, perhaps, I was on the other side looking in. You know, he wasn't looking at my leg or my arm, or an illness" (Patient [P] 18).
Those reporting a positive impact believed the care-planning process gave an opportunity for the physician to know them more as a person: I remember her asking, um, you know, what do I do if I get upset? You know, do I have someone to talk to? … just things like that. … I think that's a good idea, because a lot of people won't approach their doctors about personal things (P8). Some believed that their care coordination had improved as a result of care planning and reported perceived personal benefits, such as taking fewer medications, reduced anxiety about their conditions, having new strategies to manage their health problems, or functional improvements. Very few patients, however, articulated improved community linkages or enhanced self-management skills. Even though all patients welcomed the additional time available through CICM care planning, a number speculated as to whether it was cost- or time-effective for physicians.
DISCUSSION
Initiatives to improve the care of chronically ill patients have varied in philosophy and design. 17 Recent US models have emphasized broad systematic change, often delivered within the relatively controlled environment of managed care organizations. 18 By contrast, Australia has implemented a pragmatic strategy, part of which involves financial incentives for family doctors completing written patient care plans. 19 Our study provided an opportunity to understand the experiences of patients and physicians with an intervention similar to the Australian model. The findings provide insight into some unique challenges associated with primary care management of chronic illness.
At the heart of the practitioner experiences was a contrast between the enthusiasts and those practitioners who seemed unmoved by the initiative. As described, the enthusiasts were evenly spread among facilitators, community locations, and practice characteristics. Their willingness to embrace ongoing learning and quality improvement seems consistent with early adopters of change. 20 For the enthusiasts, CICM-style care planning moved traditional family practice beyond reactive care ("chasing tails" and "fighting fires") to a more proactive and comprehensive model. Important lessons follow from the experiences of the unenthusiastic, however.
Barriers to Change
The successful implementation of change in clinical care depends on multiple factors, including features of the change itself, the level and nature of the evidence, the context or environment into which the research is to be placed, and the method of facilitation. 20,21 Participating physicians generally viewed the CICM initiative as time intensive and unrealistic for widespread implementation. All physician participants had an enduring relationship with an experienced facilitator, and they had experience implementing practice change with regard to preventive care that led to statistically significant improvements. 15 They worked in practices with similar supports (human resource and technology) and in the same communities. Even so, there were distinct differences among physicians in levels of enthusiasm for a comprehensive, patient-centered, planned care approach. The enthusiastic physicians spoke of the promise of patient-centered care planning and of small ways in which they have already integrated components of the care plan. By contrast, the skepticism of the unenthusiastic was reinforced by difficulties in accessing allied health professionals, limited availability of community resources, and lack of a supportive fee schedule. Although other studies have listed or identified some of the same barriers, 5,19,22 this study illuminates what we believe to be fundamental barriers at the physician level that may need to be considered in future attempts at improving chronic illness management in primary care.
Most physicians welcomed the concept of more patient involvement in care, but the idea of collaborating with patients to develop a comprehensive plan focusing on shared goals conflicted with some of the unenthusiastic physicians' self-perceived job responsibilities. The concept that shared and proactive care is "not my job" would seem to be an important barrier to the success of collaborative models emphasizing the value of patient-physician partnership. 8,23 Similarly, a number of the unenthusiastic participants doubted that their chronically ill patients could manage to fulfill a role as their own principal caregiver. 8 Not surprisingly, few acknowledged their own role in preparing patients for such a change.
Others have reported how primary care providers are often convinced that they provide optimal chronic illness care. 24 As in Australia, our lack of a tool for auditing patient-centered, collaborative chronic illness care made it difficult for the physicians to measure the quality of their chronic illness care. Heightening awareness through the use of either administrative data or validated tools, such as the Assessment of Chronic Illness Care 25 or the Patient Assessment of Chronic Illness Care, 26 could highlight gaps in care and potentially increase physicians' motivation to change.
Future Implications
Although the facilitators seemed to work well with the more reluctant family physicians, the lack of organizational support and the complexities of unenthusiastic physician perspectives stood in the way of meaningful change. Despite the evidence that the initiative had tapped a need in the enthusiasts, pervasive individual barriers combined with a lack of system-based support suggest that this approach is unlikely to have a major impact in the Ontario health care system at this time. After more than 3 years of experience with a similar care-planning strategy in Australia, Wilkinson et al found that 10% of the general practice workforce was responsible for completing 80% of all care plans. 19 Our findings point to several practical strategies to consider in chronic illness management in primary care.
The partnership role and responsibility changes in collaborative chronic illness care models may require a range of strategies at an individual level to help the different groups implement effective chronic care management. Some argue medical education does not prepare physicians for the demands of a complex, collaborative health care environment. 8 With wide policy interest in primary care delivery of chronic illness care, it may be time for professional organizations to reconsider whether undergraduate or postgraduate training programs should be reorientated toward the demands of a collaborative health care environment where patients are understood as their own primary caregiver.
Other studies have spoken of the importance of group culture 27 and colleague support 22 in addressing some of the individual barriers to chronic care management. Opinion leaders and cross-practice mentorship or collaboration may be helpful to foster the shift to patient-centered and proactive chronic illness management. Our findings suggest that facilitation needs to take account of the physicians' approach to care and at the very least assist with practice audit techniques to help practices identify gaps in the quality of clinical care, particularly those around patient-centered, collaboratively oriented care.
Transferability of the study findings is limited in that physician participants all practiced in relatively small capitated practices in 2 regions of Ontario. Each had already participated in 2 distinct intervention studies designed to investigate new methods of delivering primary care. All had at least 1 decade of postgraduation experience. Although the sampling technique included an awareness of the need to search for alternative and disconfirming cases, we may have been unable to capture different perspectives shared by other practitioners. Specifically, more recent medical graduates or physicians working in interdisciplinary primary care settings may have been more supportive of the principles of collaborative chronic condition care. Our methodology asked physicians to nominate patients for interview, thereby possibly excluding patients with negative experiences of their care.
Our use of phenomenology was well suited for capturing patient and practitioner experience. Although we gained a good understanding of practitioner orientations to chronic condition management, other methods, particularly those using ethnographic techniques of direct observation, are better suited for understanding behavior in the clinical setting. Epidemiologic methods would be required to examine the influence of practitioner orientation on adherence to clinical guidelines or effectiveness of care.
Implementing comprehensive, patient-centered chronic illness care management involves more than organizational change. Our study highlights the importance of the personal attributes and perspectives of individuals in addition to larger system issues. More complex barriers to change, including attitudes and professional culture, should be considered in future attempts to improve the delivery of chronic illness care in primary care practices. Our findings illuminate the need for additional methods of support for both family physicians and patients while they transition to the adjusted roles and responsibilities of collaborative and proactive management of chronic illness.
To read or post commentaries in response to this article, see it online at http://www.annfammed.org/cgi/content/full/6/2/146.
"year": 2008,
"sha1": "df47509a662ae9e127416e30b2a9ddbbe3bb02fb",
"oa_license": null,
"oa_url": "http://www.annfammed.org/content/6/2/146.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a71dd255ee8327bbec0d8afab0d2fe08a46e8f7a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Examining the role of blockchain technology against fraud in SMEs
The study examined the use of blockchain technology for the prevention of fraud in small and medium enterprises. Fraud in businesses is a significant problem that stifles their growth. The methodology used is a review of relevant and extant literature, from which conclusions were drawn. The transparency, security, and traceability of blockchain make it one of the most valuable technologies for stemming fraud in business organizations. The study contributes to knowledge by evaluating how blockchain can be adopted to stem the tide of fraud in businesses in Nigeria.
Introduction
Most businesses in Nigeria suffer greatly or close shop as a result of fraudulent activities by members of staff whose interests conflict with those of the organization. Fraud stifles growth and liquidates many businesses in Nigeria through loss of profits and depletion of shareholders' funds. Unethical practices perpetrated by staff include manipulation and misrepresentation in the form of falsification, alteration, concealment and misappropriation of funds, fraudulent manipulation of accounting information, forging of cheques and documents, diversion of funds, secret commissions, bribery, false invoicing, and theft of inventory assets. Fraudulent activity therefore involves the use of deceit and tricks to distort the truth so as to deprive another person of his right. Despite the effort of most organizations to put mechanisms in place to check the excesses of their staff, these controls are often circumvented. In a guide to reducing employee fraud, CPA, 2009 (as in Olanrewaju and Johnson-Rokosu, 2019) emphasizes that misplaced trust, inadequate hiring and supervision policies, and failure to implement strong internal controls create an environment that is ripe for employees to commit fraud.
It is in light of the above that this study reviews relevant literature on blockchain as well as its potential business applications as an effective tool for checking fraud in organizations. The paucity of academic journals discussing the functional technology underlying blockchain creates a research gap that this study seeks to address. Such exploration will give business organizations in Nigeria insights into its workability and its possible use to stem fraud. The research question is how blockchain can assist in checking fraud in business organizations in Nigeria. To achieve this goal, the work is structured into an introduction, a literature review that provides a detailed understanding of the concept, and a conclusion.
Literature Review
Conceptual Background
A blockchain is essentially a digital ledger of transactions that is duplicated and distributed across the entire network of computer systems on the blockchain. Each block in the chain contains a number of transactions that are linked together using cryptography, and every time a new transaction occurs on the blockchain, a record of that transaction is added to every participant's ledger. For clarity, each block contains a cryptographic hash of the previous block, a time stamp and transaction data. Because the time stamp is incorporated into the block's hash, it proves that the transaction data existed when the block was published. As each block contains information about the block before it, the blocks form a chain, with each additional block reinforcing the ones before it. The decentralized database managed by multiple participants is known as Distributed Ledger Technology (DLT). Such decentralization makes blockchains resistant to modification of their data because, once recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks. Blockchain is a type of DLT in which transactions are recorded with an immutable cryptographic signature called a hash.
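To make the chaining described above concrete, the sketch below shows, in Python, how each block's hash covers the hash of its parent, so that altering an earlier record invalidates every later block. This is a minimal illustration under our own simplifying assumptions: the field names (such as parent_hash) and the JSON serialization are hypothetical and do not correspond to any particular blockchain implementation.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's full contents (including its parent hash) with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions: list, parent_hash: str) -> dict:
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "parent_hash": parent_hash,  # links this block to the one before it
    }

# The genesis block has no parent, so a sentinel value is used.
chain = [make_block(["initial allocation"], parent_hash="0" * 64)]

# Each appended block commits to the hash of the block before it.
chain.append(make_block(["A pays B 100"], parent_hash=block_hash(chain[-1])))
chain.append(make_block(["B pays C 40"], parent_hash=block_hash(chain[-1])))

def verify(chain: list) -> bool:
    """Recompute every link; any retroactive edit breaks a later link."""
    return all(
        chain[i]["parent_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

print(verify(chain))                          # True
chain[1]["transactions"] = ["A pays B 1000"]  # attempted retroactive fraud
print(verify(chain))                          # False: the alteration is detected
```

Note the asymmetry the sketch demonstrates: rewriting one historical block would require recomputing every subsequent block, which is exactly what makes retroactive manipulation detectable.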
Blockchain technology can help fraud detection because it enables the sharing of information in real time, and all participants in a blockchain have visibility over transactions. Errors and complexity are therefore reduced: fake data, errors in approval and double purchases are prevented within the linked blockchain process, and fraudulent data cannot be inserted into the blockchain. Blockchain helps to foster trust, accountability and transparency in business relationships, and can help to reduce and prevent fraud through greater transparency and improved traceability of products. It is very difficult to manipulate a blockchain, which is an immutable record that can only be updated and validated through consensus among network participants. If a product is digitized on a blockchain, it can easily be traced to its origin because the information is on a shared distributed ledger, and changes are only possible when a majority of participants agree to them.
Node application
Every computer connected to the internet needs a node application specific to the blockchain ecosystem that it wants to participate in.
Distributed ledger (database)
The distributed ledger refers to the shared contents and databases available to the participants of a particular blockchain ecosystem.
Consensus Algorithm
The consensus algorithm provides permanence and security to the data in the blockchain.
Distributed ledger technology
All network participants have access to the distributed ledger and its immutable record of transactions. With this shared ledger, transactions are recorded only once.
Immutable records
No participant can change or tamper with a transaction after it's been recorded to the shared ledger. If a transaction record includes an error, a new transaction must be added to reverse the error and both transactions are then visible.
Smart contract
To speed transactions, a set of rules called a smart contract is stored on the blockchain and executed automatically. A smart contract can, for example, define conditions for a corporate bond transfer.
Public Blockchain
A platform that every participant is able to read from and write to.
Private Blockchain
A platform where only the owner of the blockchain has the right to make changes to rules or other terms.
Consortium Blockchain
This platform is partially like the private blockchain. Instead of allowing any one person or company to have full control and participate in the verification of transactions, a specific number of nodes are selected in a predetermined manner and control is vested in them.
Importance of Blockchain
Nakamoto, 2008 (as in Oyebanjo et al., 2021) posits that blockchain is a technology used as a distributed ledger that rides on a point-to-point network, enabling trust between unknown parties within the system and enabling seamless payment without human intervention. It helps keep a history of transactions because records are not subject to alteration or manipulation once verified through mutual agreement by the various peers (nodes) involved in the transaction. Security of data and transactions is achieved through cryptography, thus enabling secure transactions, integrity and privacy (Baranwal, 2020 as in Oyebanjo et al., 2021). Unlike traditional systems of managing funds and contracts, blockchain can make fund management and the contracting process more transparent and secure and ensure accountability and efficiency of process, in a way not possible through the conventional system (Kumar, 2017 as in Oyebanjo et al., 2021). Thus the main stakeholders can verify transactions within the system to confirm their validity and legitimacy. Zheng et al. (2017) posit that a blockchain is a sequence of blocks, which holds a complete list of transaction records like a conventional public ledger. Figure 1 illustrates an example of a blockchain; each block points to its immediately previous block via a hash reference and has only one parent block. The first block of a blockchain is called the genesis block, which has no parent block.
The Internals of Blockchain
Block
A block consists of the block header and the block body, as shown in Figure 2. In particular, the block header includes the following fields (a minimal sketch of serializing and hashing such a header follows this list):
i. Block version: indicates which set of block validation rules to follow.
ii. Merkle tree root hash: the hash value of all the transactions in the block.
iii. Time stamp: current time, in seconds, in universal time.
iv. nBits: target threshold of a valid block hash.
v. Nonce: a 4-byte field, which usually starts with 0 and increases with every hash calculation.
vi. Parent block hash: a 256-bit hash value that points to the previous block.
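As an illustration of these header fields, the sketch below packs a simplified 80-byte header and hashes it with double SHA-256, the convention Bitcoin uses for block hashes. The toy Merkle-root routine (pairing hashes level by level and duplicating the last hash on odd counts) and the placeholder transaction strings are our own simplifications, not production code.

```python
import hashlib
import struct
import time

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as used for Bitcoin block hashes."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(tx_hashes: list) -> bytes:
    """Toy Merkle root: hash pairs level by level, duplicating the last
    hash whenever a level has an odd number of entries."""
    level = tx_hashes
    while len(level) > 1:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Placeholder transactions; real systems hash fully serialized transactions.
txs = [sha256d(t.encode()) for t in ["tx-a", "tx-b", "tx-c"]]

# Simplified header: version, parent hash, Merkle root, time stamp, nBits, nonce.
header = struct.pack(
    "<I32s32sIII",
    1,                  # block version
    b"\x00" * 32,       # parent block hash (all zeros as a genesis sentinel)
    merkle_root(txs),   # Merkle tree root hash over the block's transactions
    int(time.time()),   # time stamp in seconds
    0x1D00FFFF,         # nBits: compactly encoded target threshold
    0,                  # nonce, varied during mining (see the POW section)
)
print(sha256d(header).hex())
```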
The block body is composed of a transaction counter and transactions. The maximum number of transactions that a block can contain depends on the block size and the size of each transaction. Blockchain uses an asymmetric cryptography mechanism to validate the authentication of transactions: a digital signature based on asymmetric cryptography is used in an untrustworthy environment.
Digital signature
Each user owns a pair of keys: a private key and a public key. The private key, which must be kept confidential, is used to sign transactions. The digitally signed transactions are broadcast throughout the whole network. A typical digital signature involves two phases: a signing phase and a verification phase. The typical digital signature algorithm used in blockchain is the Elliptic Curve Digital Signature Algorithm (ECDSA).
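A minimal sketch of the two phases is shown below, using the third-party Python ecdsa package (installable with pip install ecdsa) and the SECP256k1 curve that Bitcoin uses; the transaction strings are hypothetical.

```python
from ecdsa import SigningKey, SECP256k1, BadSignatureError  # pip install ecdsa

# Signing phase: the sender keeps the private key secret and signs the transaction.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()  # shared openly with the network

transaction = b"A pays B 100"
signature = private_key.sign(transaction)

# Verification phase: any node can check the signature with the public key alone.
try:
    public_key.verify(signature, transaction)
    print("signature valid")
except BadSignatureError:
    print("signature invalid")

# A tampered transaction fails verification, so forged or altered
# transactions are rejected by honest nodes.
try:
    public_key.verify(signature, b"A pays B 1000")
except BadSignatureError:
    print("tampering detected")
```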
Key Characteristics of Blockchain
i. Decentralization: in conventional centralized transaction systems, each transaction needs to be validated through a central trusted agency. In contrast to the centralized mode, a third party is no longer needed in blockchain; consensus algorithms are used to maintain data consistency in the distributed network.
ii. Persistency: transactions can be validated quickly, and invalid transactions will not be admitted by honest miners. It is nearly impossible to delete or roll back transactions once they are included in the blockchain, and blocks that contain invalid transactions can be discovered immediately.
iii. Anonymity: each user can interact with the blockchain through a generated address, which does not reveal the real identity of the user. Note that blockchain cannot guarantee perfect privacy preservation due to this intrinsic constraint.
iv. Auditability: the Bitcoin blockchain stores data about user balances based on the Unspent Transaction Output (UTXO) model. Any transaction has to refer to some previous unspent transaction outputs; once the current transaction is recorded into the blockchain, the state of those referred outputs switches from unspent to spent. Transactions can therefore be easily verified and tracked (a toy sketch of this bookkeeping follows this list).
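The auditability point can be made concrete with a toy bookkeeping sketch. The transaction identifiers, owners and amounts below are hypothetical, and real systems additionally validate signatures and index outputs by transaction id and output position.

```python
# Toy UTXO set: output reference -> output data. "tx0:0" means output 0 of tx0.
utxo_set = {
    "tx0:0": {"owner": "A", "amount": 100},  # an as-yet-unspent output
}

def apply_transaction(tx_id: str, inputs: list, outputs: list) -> None:
    """Consume referenced unspent outputs and record the new ones."""
    total_in = 0
    for ref in inputs:
        if ref not in utxo_set:
            raise ValueError(f"{ref} is already spent or unknown")
        total_in += utxo_set.pop(ref)["amount"]  # flips unspent -> spent
    if total_in < sum(out["amount"] for out in outputs):
        raise ValueError("outputs exceed inputs")
    for i, out in enumerate(outputs):
        utxo_set[f"{tx_id}:{i}"] = out

# A pays B 60 and sends 40 back to itself as change.
apply_transaction("tx1", ["tx0:0"],
                  [{"owner": "B", "amount": 60}, {"owner": "A", "amount": 40}])

# A double-spend attempt is rejected because tx0:0 was consumed above.
try:
    apply_transaction("tx2", ["tx0:0"], [{"owner": "C", "amount": 100}])
except ValueError as err:
    print("rejected:", err)
```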
Taxonomy of blockchain systems
Current blockchain systems are categorized roughly into three types: public blockchain, private blockchain and consortium blockchain. In a public blockchain, all records are visible to the public and everyone can take part in the consensus process. In contrast, only a group of pre-selected nodes participate in the consensus process of a consortium blockchain. As for a private blockchain, only those nodes that come from one specific organization are allowed to join the consensus process.
A private blockchain is regarded as a centralized network since it is fully controlled by one organization. The consortium blockchain, constructed by several organizations, is partially decentralized, since only a small portion of nodes are selected to determine the consensus. The three types of blockchain differ along the following dimensions:
i. Consensus determination: in a public blockchain, each node can take part in the consensus process, while only a selected set of nodes are responsible for validating blocks in a consortium blockchain. As for a private chain, it is fully controlled by one organization, and that organization determines the final consensus.
ii. Read permission: transactions in a public blockchain are visible to the public, while read permission depends on the controlling organization(s) in a private blockchain or a consortium blockchain.
iii. Immutability: since records are stored across a large number of participants, it is nearly impossible to tamper with transactions in a public chain. By contrast, transactions in a private blockchain or a consortium blockchain could be tampered with more easily, as there are only a limited number of participants.
iv. Efficiency: it takes plenty of time to propagate transactions and blocks, as there are a large number of nodes in a public chain network. As a result, transaction throughput is limited and latency is high. With fewer validators, a consortium blockchain or a private blockchain can be more efficient.
v. Centralization: the main difference among the three types of blockchains is that the public blockchain is decentralized, the consortium blockchain is partially centralized and the private blockchain is fully centralized, as it is controlled by a single group.
vi. Consensus process: everyone in the world can join the consensus process of a public blockchain. Unlike the public blockchain, both consortium and private blockchains are permissioned. Since the public blockchain is open to the world, it can attract many users, and its communities are active; many public blockchains emerge day by day. As for the consortium blockchain, it can be applied to many business applications.
Approaches to consensus
POW (proof of work) is a consensus strategy used in the Bitcoin network. In a decentralized network, someone has to be selected to record the transactions. The easiest way is random selection; however, random selection is vulnerable to attacks. So if a node wants to publish a block of transactions, a lot of work has to be done to prove that the node is not likely to attack the network. Generally, the work means computer calculations. In POW, each node of the network calculates a hash value of the block header. The block header contains a nonce, and miners change the nonce frequently to get different hash values. The consensus requires that the calculated value be equal to or smaller than a certain given target. When one node reaches the target value, it broadcasts the block to all other nodes, which must mutually confirm the correctness of the hash value. If the block is validated, other miners append this new block to their own blockchains. Nodes that calculate the hash values are called miners, and the POW procedure is called mining in Bitcoin.
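The nonce search described above can be sketched in a few lines of Python; the block-header string, the difficulty parameter, and the function name here are toy assumptions that deliberately ignore Bitcoin's real block format and difficulty-adjustment rules.

import hashlib

def mine(block_header: str, difficulty_bits: int) -> int:
    """Search for a nonce whose SHA-256 hash falls below the target (toy POW)."""
    target = 2 ** (256 - difficulty_bits)  # smaller target = more work required
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce  # any node can re-hash once to confirm correctness
        nonce += 1

nonce = mine("prev_hash|merkle_root|timestamp", difficulty_bits=18)
print("valid nonce found:", nonce)

Verifying a claimed nonce costs a single hash, while finding one costs on the order of 2^difficulty_bits hashes; this asymmetry is what makes the proof hard to forge but cheap to check.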
In a decentralized network, valid blocks might be generated simultaneously when multiple nodes find a suitable nonce at nearly the same time, so branches may be created. However, it is unlikely that two competing forks will generate the next block simultaneously. In the POW protocol, the chain that becomes longer is judged to be the authentic one. Consider two forks created by simultaneously validated blocks U4 and B4: miners keep mining on their own branch until a longer one emerges. If B4 and B5 form the longer chain, the miners on U4 switch to that branch. Miners have to perform a large amount of computation in POW, yet this work otherwise wastes resources; to mitigate the loss, some POW protocols have been designed in which the work has useful side applications.
POS (proof of stake) is an energy-saving alternative to POW. Miners in POS have to prove ownership of an amount of currency, on the belief that people with more currency are less likely to attack the network. Selection based on account balance alone is quite unfair, because the single richest person would be bound to dominate the network. As a result, many solutions combine randomization with the stake size to decide who forges the next block; one approach uses a formula that looks for the lowest hash value in combination with the size of the stake. Peercoin favours coin age-based selection, in which older and larger sets of coins have a greater probability of mining the next block. Compared to POW, POS saves more energy and is more efficient. Unfortunately, as the mining cost is nearly zero, attacks might come as a consequence, so many blockchains adopt POW at the beginning and transform to POS gradually.
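A minimal sketch of the stake-weighted randomized selection mentioned above; the account names, balances, and selection routine are purely illustrative and do not model coin age or any specific chain's rules.

import random

stakes = {"alice": 120, "bob": 60, "carol": 20}  # hypothetical coin balances

def pick_forger(stakes: dict) -> str:
    """Select the next block forger with probability proportional to stake."""
    holders = list(stakes)
    weights = [stakes[h] for h in holders]
    return random.choices(holders, weights=weights, k=1)[0]

# Over many rounds, selection frequency tracks stake size rather than hash power.
counts = {h: 0 for h in stakes}
for _ in range(10_000):
    counts[pick_forger(stakes)] += 1
print(counts)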
Challenges of Blockchain
Scalability: with the number of transactions increasing day by day, the blockchain becomes bulky. Each node has to store all transactions in order to validate them, because it must check whether the source of the current transaction is unspent or not. Besides, due to the original restriction on block size and the time interval used to generate a new block, the Bitcoin blockchain can only process roughly 7 transactions per second, which cannot fulfill the requirement of processing millions of transactions in real time. Meanwhile, as the capacity of a block is very small, many small transactions might be delayed, since miners prefer transactions with high transaction fees.
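The roughly 7 transactions-per-second figure follows from Bitcoin's original 1 MB block size limit and ~10 minute block interval; the average transaction size of 250 bytes used in this back-of-envelope check is an assumption.

block_size_bytes = 1_000_000   # original Bitcoin block size limit (~1 MB)
avg_tx_bytes = 250             # assumed typical transaction size
block_interval_s = 600         # one block every ~10 minutes on average

tx_per_block = block_size_bytes // avg_tx_bytes   # ~4000 transactions per block
throughput = tx_per_block / block_interval_s      # ~6.7 transactions per second
print(f"{throughput:.1f} transactions per second")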
Privacy leakage: blockchain can preserve a certain amount of privacy through the public key and private key: users transact with their key pairs without exposing their real identity. However, blockchain cannot guarantee transactional privacy, since the values of all transactions and the balances of each public key are publicly visible.
Selfish mining: blockchain is susceptible to attacks by colluding selfish miners. In the selfish mining strategy, selfish miners keep their mined blocks without broadcasting them, and the private branch is revealed to the public only when certain conditions are satisfied. As the private branch is longer than the current public chain, it is then admitted by all miners. Before the private branch is published, honest miners waste their resources on a useless branch while selfish miners mine their private chain without competitors, so selfish miners tend to earn more revenue.
Blockchain in business application
Sarmah (2018) posits that banks and payment systems have started using blockchain to make their operations smoother, more efficient and secure. Funds can be efficiently and safely transferred with this decentralized technology. Blockchain has become increasingly popular in healthcare industries, as it is able to restore the lost trust between customers and healthcare providers. With the help of blockchain, authorization and identification of people have become easier, and fraud and loss of records can be avoided. Due to blockchain's ability to store and verify documents efficiently, businesses have started using it to verify records and documents securely. Blockchain can significantly reduce court cases and battles by providing an authentic medium to verify and confirm the truthfulness of legal documents.
Rigging of election results can be avoided with effective use of blockchain. Voter registration and validation can be done using blockchain, which can ensure the legitimacy of votes by creating a publicly available ledger of recorded votes. Industries such as insurance, education, private transport and ride sharing, government and public benefits, retail, real estate, etc., have started implementing blockchain to reduce cost, increase transparency and build trust.
Benefits of Blockchain
ii. Blockchains are expensive and resource-intensive, as every node in the blockchain repeats a task to reach consensus.
iii. Blockchain is characterized by complexity and can be complicated to understand.
Empirical Review
Risius and Spohrer (2017) studied the applicability of blockchain technology and where it has had noteworthy practical effects. The study adapts an established research framework to structure the insights of the current body of research on blockchain technology. The framework differentiates three groups of activities (design and features, measurement and value, management and organization) at four levels of analysis (users and society, intermediaries, platforms, firms and industry). The review shows that research has predominantly focused on technological questions of design and features while neglecting application, value creation and governance. Olaniyi (2018) investigated the relationship between blockchain technology and the financial market. The US and China are used as case studies for the 2008-2016 period, using fully modified least squares and the Toda-Yamamoto causality technique. The estimates show that blockchain technology has a positive and significant relationship with the financial market in the US and China; in other words, the higher the level of blockchain innovation in these countries, the more developed the financial market. Liu and Ye (2021) studied the relationship between trust and user acceptance. 254 questionnaires about blockchain applications were collected and analyzed with SmartPLS 3.0. The results show that trust and information quality have positive effects on users' behavioural intentions, except in terms of output quality.
Oseiweh (2018) studied how frauds have affected the financial performance of the banking sector in Nigeria. Data spanning 1993 to 2016 were used, and the method of data analysis was co-integration and an error correction mechanism. The findings from the estimation revealed that a three-period lag of the number of fraud cases had a negative and statistically significant effect on the banking sector's financial performance. Ijeoma and Aronu (2013) studied the impact of fraud management on organizational survival. The objective of the study was to determine whether business organizations or companies adopt a holistic approach to fraud management. A sample size of forty-four (44) staff was used to evaluate the chi-square test statistic. It was observed that adoption of a holistic approach to fraud management does not help companies in preventing fraud in Nigeria. Omokaro and Ikpere (2019) did a study on the impact of fraud management activities on organizational survival in Nigeria. The study used a structured questionnaire administered to 270 respondents, and the Wilcoxon test was used to analyze the data obtained. The study revealed that the major measures of fraud management in Nigeria are deterrence measures, analysis measures, investigation measures and prosecution measures. The result implies that effective implementation of fraud management activities does not significantly impact fraud management in organizations. Surjandari and Martaningtyas (2015) studied the effect of performance incentives, the internal control system, and organizational culture on fraud by Indonesian government officers. They made use of questionnaires, stratified random sampling and structural equation modeling. Two results were the opposite of previous studies: (a) the performance incentive did not affect fraud, because the incentive was not based on performance; most of the fraud was committed by those with less than five years of working experience; (b) the internal control system did not affect fraud, because the application of internal control did not conform to PP Number 60 of 2008 as good guidance, and most fraud occurred because of the presence of opportunity. Organizational culture, the only variable in line with previous studies, did affect fraud, because of successful punishment socialization, officer training, transparency and accountability. Archambeault and Webber (2018) examined the survival of nonprofit organizations after the discovery of fraud. An analysis of 115 nonprofit organizations experiencing a fraud shows that over one fourth of these organizations did not survive at least three years beyond the publication of the fraud, a rate considerably higher than the typical nonprofit failure rate. Frazer (2012) determined whether internal control systems influenced restaurant managers' perceptions of undesirable behaviours, also known as deviations, in restaurants. Deviation in this study was defined as fraud, waste and errors. A random sample of restaurants doing business in Nassau County in the state of New York was selected, and the data were analyzed using multiple regression and descriptive statistics. The results indicated that there was a statistically significant relationship between internal control and deviation (that is, errors, fraud and waste). Enofe et al. (2017) did a study on bank fraud and preventive measures in Nigeria. Primary data were used for this study, which was carried out by collecting data from 15 quoted commercial banks in Nigeria as at 31st December 2015.
The study utilized an ordinary least squares regression model. It was observed that a strong internal control system, good corporate governance and compliance with banking ethics have a positive and significant influence on fraud prevention in the banking industry. Micheni (2016) did a study on the effectiveness of internal control on the detection and prevention of fraud in commercial banks listed in Nairobi.
Questionnaires were administered to solicit information from respondents. The data collected were analyzed descriptively using figures and tables, and inferentially using a Microsoft Office spreadsheet program (Excel). The findings of the study revealed a strong positive association between the internal controls instituted by the organizations and the detection and prevention of fraud.
Conclusion
Blockchain, with its characteristics of decentralization, immutable records, persistency, anonymity, security, auditability, transparency, accuracy, verifiability and sharing of information, has the potential to transform business organizations in Nigeria.
Blockchain's peer-to-peer connections and distributed consensus help to identify fraudulent activities in the network. No doubt blockchain can aid fraud detection, because it enables the sharing of information in real time, and all participants in a blockchain have visibility over transactions. Its traceability can help keep staff in check and enhance the profitability of the firm. Blockchain's record-keeping attribute helps the various stakeholders to confirm validity and legitimacy. Since blockchain has strong connections to business survival, businesses in Nigeria must move from conventional methods of fraud control to a more digitized method, which is more efficient and effective and could be a more durable way to curb the fraudulent activities of staff that undermine the growth of business organizations and deplete shareholders' funds. An adequate understanding of how and why people commit fraud will help in deploying the necessary technological tools to stem the tide. Payment processing, contract management, supply processing, money transfer and account recording can be better protected with blockchain. It is almost impossible to invade the network, as attackers can impair it only when they have majority control of the nodes.
"year": 2021,
"sha1": "91c1ca53541ac364def9c1786c9a0b291341af84",
"oa_license": "CCBYNC",
"oa_url": "https://www.ssbfnet.com/ojs/index.php/ijrbs/article/download/1311/970",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7e535d11edb61da1d2bb6506e6c1f3064fdb4f49",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"extfieldsofstudy": []
} |
Study of the Dynamics of Oil Destruction with the Use of Bacterial Biological Preparations in the Conditions of the Far North
A study of the dynamics of crude oil decomposition was carried out using the biological preparations "Glaukoil" and "Lenoil" and various methods (aeration, application of a sorbent and a biological product at different times, the use of a polymer shelter, the addition of a moisture-retaining component, and splitting the dose of mineral fertilizers) in order to optimize the oil destruction technology for emergency oil spills. It is shown that the most effective methods are the use of a polyethylene film to maintain a higher temperature during oil decomposition, additional aeration, and the use of a water-retaining component. Other methods did not show a significant acceleration of oil degradation.
Introduction
Petroleum hydrocarbons were formed in the earth's crust in a natural way, and in nature there are many microorganisms capable of assimilating these compounds. But in reality, rapid biodegradation to eliminate oil spills is difficult for a number of reasons:
- a specific type (strain) of bacteria can break down only individual components of oil, which is a complex mixture of hydrocarbons whose qualitative and quantitative composition is individual for each deposit [1];
- individual components of oil are not only difficult for bacteria to break down, but also have a bactericidal and bacteriostatic effect, which hinders the breakdown of even the most easily assimilated hydrocarbons;
- in case of spills of oil and oil products on soils, part of the hydrocarbons can become firmly bound to soil components, which reduces their availability to oil-destructing microbes [2].
To accelerate the decomposition of oil that has entered natural ecosystems, it is necessary to provide certain conditions, namely, a suitable temperature regime, the availability of minerals, an optimal level of humidity, oxygen access, and the use of highly efficient bacterial cultures of natural and artificial origin.
A crucial role in the process of oil biodegradation is played by abiotic environmental factors, such as temperature, humidity, and the availability of oxygen and minerals.
Most strains are active in the range from 10 to 40 °C. The turning point is a temperature of +5 °C, below which biodegradation slows down sharply. At temperatures below 0 °C, bacterial reproduction and biodegradation stop completely, and after thawing, all the bacteria used resume their activity [3].
To maintain the activity of bacteria at a sufficient level, it is necessary to maintain a soil moisture content of at least 40%. In the southern regions, where the soil is sufficiently water-retentive or the moisture capacity of the soil is provided by the remnants of the grass sod, the required water content is maintained by periodic sprinkling and turnover of the soil layer. For coastal gravelly, sandy or rocky soils, maintaining the required level of moisture can be very difficult.
The bacteria used are aerobes, so providing air access becomes one of the decisive factors. As a rule, plowing is enough for this, but in the conditions of the coasts of the northern seas, the use of such a technique is very difficult.
The composition of nutrient media for the cultivation of oil-degrading microorganisms, as a rule, includes nitrogen, potassium, phosphorus, sulfur, magnesium, sodium, and chlorine [3].
Bacteria that biodegrade oil and oil products were first isolated from canal water, and later from soil. Some preparations use bacteria that live in the intestines of farm animals; in particular, one biological preparation contains bacteria isolated from pig farm waste (Bacillus pumilus, Bacillus sphaericus, Micrococcus hylae, Arthrobacter viscosus, Bacillus licheniformis) [4].
Under conditions of fertile soils with rich microflora, small spills, where the degree of pollution with oil products does not exceed 10%, can be eliminated without the use of biological products and the introduction of mineral growth factors. In the conditions of the North, where soils are poor in hydrocarbon-decomposing bacteria and nutrients, it is necessary to apply both biopreparations and fertilizers even with slight pollution [3].
Natural strains isolated from the soil are used in isolation or form artificial associations, for example, Acinetobacter, Bacillus and Pseudomonas at a ratio of 1:1:1, and it has been shown that during cultivation on a nutrient medium containing glucose, there is no change in the ratio of the numbers of bacteria of different species. In addition to natural strains, genetically modified microorganisms can be used in the biodegradation of oil and oil products. In particular, the introduction of certain plasmids into bacterial cells of the genus Pseudomonas makes it possible to split individual oil fractions: the OCT plasmid provides for the decomposition of octane and hexane; the XYL plasmid, of xylene and toluene; the NAH plasmid, of naphthalene; and the CAM plasmid, of camphor.
Isolated OCT and CAM plasmids cannot exist in the same cell, since they have homologous regions; therefore, if it is necessary to introduce two plasmids at once, they are prehybridized to form a larger hybrid CAM/OCT plasmid [6]. As a result of sequential hybridization of plasmids and bacterial strains, pseudomonads were obtained containing all four types of plasmids listed above. Genetic engineering solutions make it possible to outline the prospect of obtaining microorganisms capable of breaking down hydrocarbons at low temperatures. For example, the TOL plasmid (providing toluene assimilation) of a mesophilic strain of Pseudomonas putida was introduced into the cells of a psychrophilic strain (one with a low temperature optimum) capable of cleaving salicylic acid at a temperature of about 0 °C. The results of this experiment, of course, only show the possibility of obtaining hybrid oil-degrading microorganisms capable of functioning at low temperatures; so far such strains have not found practical application, and there are doubts that oil biodegradation using transgenic bacteria will be the main technology in the near future [7]. The most promising preparations are considered to be a complex of several types of natural strains of bacteria with the addition of transgenic forms [8].
One of the approaches to soil restoration is phytoremediation, a technology in which the decisive role is assigned to higher plants, in whose rhizosphere oil-destructing bacteria multiply intensively. In this case, no significant acceleration of the decomposition of oil components is observed, but because a closed plant community is quickly formed at the spill site with this approach, phytoremediation receives the support of environmental NGOs and the public.
The possibility of transformation of natural ecosystems as a result of the use of microbial biological preparations for remediation after accidental spills is not ruled out [7]. The possible consequences of the use of bacterial preparations for soil biota are assessed mainly by the antibiotic activity of the studied bacterial strains. At the same time, the authors note the resistance of the studied species to a number of antibiotics [3].
The vast majority of modern preparations for the remediation of oil-contaminated soils do not require additional measures to deactivate the microorganisms that make up their composition.
The purpose of the study is to adapt the technology of using biotechnological oil destructors to the conditions of the Far North.
Tasks:
- study of the influence of environmental factors on the rate of oil degradation (temperature, aeration, the procedure for introducing mineral growth factors);
- study of the dynamics of oil degradation with the combined introduction of two preparations with different microbiological compositions.
Research Methodology
For the experiment, two certified domestic preparations were used: Lenoil-nord and Glaukoil. These preparations have a similar purpose but differ significantly in their technical regulations. The manufacturers declare the possibility of oil destruction in one season (provided that the degree of pollution does not exceed 10%). The preparation "Lenoil-nord" contains Pseudomonas sp. as the oil-degrading bacteria and does not contain a sorbent component or mineral additives. Before use, in accordance with the technical regulations, preliminary activation is required in most cases: the preparation is diluted with industrial water (5 kg of the preparation per 1000 liters of industrial water), with the addition of 0.5 kg of nitrogen-phosphorus-potassium mineral fertilizer and 1 liter of diesel fuel to provide an initial hydrocarbon nutrition source for the bacteria.
The preparation "Glaukoil" is much richer in the content of hydrocarbon-decomposing bacteria (Bacillus megaterium, Bacillus subtilis, Pseudomonas putida, Pseudomonas sp.), but the technical regulations of this preparation require the introduction of a significant amount of mineral fertilizers, in particular, for the disposal of 1 ton of oil, the introduction of 393 kg of nitrophoska is required.
The use of the drug "Econadin" in the experiment had to be abandoned due to the liquidation of the manufacturer.
Fucus algae were used as a moisture-retaining component. This material can be collected by volunteers in sufficient quantities in the immediate vicinity of spills, whereas other materials would need to be specially transported to the site, since their collection on the coast can be difficult.
For the study, model experiments were carried out in which portions of soil were placed in plastic containers with a capacity of 2 L, and 100 mL of crude oil, mineral fertilizers in accordance with the technological regulations of each preparation, and various additives in accordance with the experimental scheme (Tables 1-6) were added. Optimization of the temperature regime was carried out using a polyethylene film. The work was carried out on the territory of the educational and scientific base of the Federal State Budgetary Educational Institution of Higher Education Moscow State University (Murmansk region, Kola district) in the 2017 field season. The total content of oil products was determined at the FBUZ "Center for Hygiene and Epidemiology in the Murmansk Region" by the fluorimetric method after extraction of the oil with hexane. The initial concentration of oil in the experiment averaged 85,550 mg/kg.
The sum of effective temperatures was determined on the basis that, in this case, temperatures above +5 °C are considered effective. We used data on the average daily temperature in the village of Tuloma from the resource www.gismeteo.ru.
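A minimal sketch of this calculation, assuming the sum is taken over the mean daily temperatures of days warmer than the +5 °C threshold (conventions differ; some sum only the excess above the threshold); the daily values below are placeholders, not the actual Tuloma data.

THRESHOLD_C = 5.0  # temperatures above +5 degrees C are considered effective

def sum_effective_temperatures(daily_mean_temps_c):
    """Accumulate mean daily temperatures on days warmer than the threshold."""
    return sum(t for t in daily_mean_temps_c if t > THRESHOLD_C)

season = [3.8, 6.2, 9.5, 14.1, 12.0, 4.9, 8.3]  # placeholder daily means, degrees C
print(sum_effective_temperatures(season))       # only the days above +5 count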
Results and Discussions
Data on the oil content in the samples during the period of the experiment are shown in Table 7. It follows from Table 7 that the presence of a film cover is crucial for accelerating oil degradation, as it allows heat to accumulate and creates a sufficiently high temperature even under variable cloudiness. This pattern can be traced across all variants of the experiment setup. In the early stages of oil degradation, even at high air temperatures, the decomposition of oil occurs rather slowly, which is apparently associated with a long activation period of the microbial preparations.
Aeration is of great importance for accelerating oil degradation: double mixing of the soil containing oil achieves good results. But the application of this technique in practice is associated with large expenditures of funds and volunteer labor, especially if the spill occurred in a remote area of the coast.
Splitting the dose of applied mineral fertilizers into two or three portions does not lead to an acceleration of oil degradation.
Increasing the dose of the preparation "Lenoil-nord" by one and a half times without the use of a film cover does not give a significant activation of oil degradation; therefore, such a method, despite significant financial costs, will not give good results.
The best results were achieved with a combination of the two preparations; however, in practice, the combined use of two biodestructors is difficult due to the high cost of the Glaukoil preparation and the need to apply high doses of mineral fertilizers. In the conditions of the Far North, where soils are extremely poor in nitrogen and phosphorus and the reproduction of hydrocarbon-decomposing bacteria is slow (hence, the assimilation of inorganic substances is slow), the application of fertilizers in such an amount will lead to new pollution, this time with minerals.
The use of the TSHR sorbent and sphagnum together with microbiological preparations did not cause a significant acceleration of oil degradation, either with sequential introduction of the sorbent and the biological product or with simultaneous application.
More promising, in comparison with other methods, in our opinion, is the addition of thalli of fucus algae, which, firstly, play the role of a moisture-retaining component and allow the water content of the soil to be stabilized; secondly, by gradually decomposing, the algae themselves become a source of minerals for the growth of bacteria; and thirdly, the fucus thalli are quite dense and, when mixed in, create a looser structure of the sample, which improves aeration. Fucus algae grow in abundance in the littoral of the Barents Sea, and it is quite possible for volunteers to collect them in large quantities for spill clean-up already during the initial treatment of the oil-contaminated part of the coast. During the experiment, the appearance of mold fungi on the surface of the contents of the containers was noted. It is evident that the spores of these fungi were introduced into the containers on the fucus thalli. During the next growing season, an analysis will be carried out to determine the species of these fungi and their possible participation in the process of oil degradation.
Measurements of the content of oil products in the control variants of the experiment show that in the conditions of the Far North it is impossible to achieve any significant breakdown of oil through the oil-degrading bacteria that the soil itself contains. With the sum of effective temperatures accumulated during the experiment equal to 1939, about 10% of the oil was broken down in the control variants.
Thus, such manipulations as the creation of a film shelter and the addition of thalli of fucus algae are of the greatest importance for accelerating oil degradation.
In general, in none of the variants of the experiment was the result declared by the manufacturers achieved, namely complete oil destruction within a season; however, the implementation of certain techniques can significantly increase the rate of oil destruction.
Conclusions
The most effective methods for accelerating oil destruction in the conditions of the Far North are:
- creation of a film shelter;
- the introduction of fucus algae as a moisture-retaining component;
- carrying out periodic loosening in order to aerate the oil layer.
Compliance with these measures makes it possible to achieve a three-fold reduction in the oil content in the soil (with an initial degree of pollution of 10%).
Acknowledgements
The studies were carried out within the framework of R&D "Study of biodiversity and bioremediation of natural ecosystems in the Arctic" № 123032400080-7.
"year": 2023,
"sha1": "d4d03cf1645aa1de93b3565c2520e1518a156e4d",
"oa_license": "CCBY",
"oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2023/08/bioconf_ase2023_05011.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b99733a91c371e629a6c8afbc15ca29ca0b3527b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
Harnessing the Power of Light: The Synergistic Effects of Crystalline Carbon Nitride and Ti3C2Tx MXene in Photocatalytic Hydrogen Production
Abstract: Photocatalytic hydrogen evolution is an environmentally friendly means of energy generation. Although g-C3N4 possesses fascinating features, its inherent shortcomings limit its photocatalytic applications. Therefore, modifying the intrinsic properties of g-C3N4 and introducing cocatalysts are essential to ameliorate the photocatalytic efficiency. To achieve this, metal-like Ti3C2Tx is integrated with crystalline g-C3N4 via a combined salt-assisted and freeze-drying approach to form crystalline g-C3N4/Ti3C2Tx (CCN/TCT) hybrids with different Ti3C2Tx loading amounts (0, 0.2, 0.3, 0.4, 0.5, 1, 5, 10 wt.%). Benefiting from the crystallization of CN, as evidenced by the XRD pattern, and the marvelous conductivity of Ti3C2Tx supported by EIS plots, CCN/TCT/Pt loaded with 0.5 wt.% Ti3C2Tx displays an elevated H2 evolution rate of 2651.93 µmol g−1 h−1 and a high apparent quantum efficiency of 7.26% (420 nm), outperforming CN/Pt, CCN/Pt, and other CCN/TCT/Pt hybrids. The enhanced performance is attributed to the synergistic effect of the highly crystalline structure of CCN, which enables rapid charge transport, and the efficient dual cocatalysts, Ti3C2Tx and Pt, which foster charge separation and provide plentiful active sites. This work demonstrates the potential of CCN/TCT as a promising material for hydrogen production, suggesting a significant advancement in the design of CCN heterostructures for effective photocatalytic systems.
Introduction
The rapid population growth and the flourishing development of human society have led to tremendous consumption of fossil energy, which in turn induces energy shortages and environmental issues. [1] In this regard, the exploration of sustainable and renewable energy has attracted the growing attention of society over the past decades. [2] Light-driven H2 production is an attractive and sought-after strategy for the transformation of inexhaustible solar energy into clean and renewable chemical fuels. [3] Polymeric carbon nitride (CN) has emerged as a promising candidate for photocatalytic H2 evolution owing to its merits of a well-matched electronic structure, outstanding stability, nontoxicity, and cost-effective synthesis. [4] The basic building blocks of CN are tri-s-triazine units, in which the nitrogen and carbon atoms are arranged in a hexagonal network similar to graphene. [1] The layers, held together by weak van der Waals forces, stack on top of each other, forming a bulk material. Despite exhibiting fascinating properties, pristine CN exhibits weak photocatalytic activity as a result of the rapid recombination of photoactivated electron-hole pairs, which is caused by the inherent π-conjugated electronic system. [4,5] Typically, CN nanosheets are prepared via thermal polymerization of precursors, such as urea, dicyandiamide, and thiourea, where the resultant CN suffers from low crystallinity due to incomplete condensation. [6] The disordered structure or phase defects serve as electron traps that restrict the intra- and/or interlayer charge transfer to a certain extent, thus leading to elevated charge recombination and limiting the photocatalytic performance. [7] Therefore, many efforts have been dedicated to exploring effective modification strategies, including structural engineering, [8] defect engineering, [9] and heterojunction engineering. [10] Aside from external activity promotion, there is also much room for improvement in the inherent characteristics of CN, where the ameliorated intrinsic activity will benefit the external promotion in return. [11] Thus, in pursuit of enhancing the intrinsic properties and further improving the photoactivity of g-C3N4, researchers have endeavored to mitigate the structural defects of the material by reinforcing its crystallinity. [11] This was achieved through the introduction of metal cations, intercalated within the structure and stabilized by the surrounding nitrogen bridge atoms, thereby forming well-defined structural pores. [12] Crystalline carbon nitride (CCN) displays great superiority as it retains the merits of CN while exerting ameliorated charge segregation efficiency. CCN consists of well-ordered, periodic structures of carbon and nitrogen atoms. Nonetheless, depending on the molten salts used in preparing the CCN, different alkali metals will be incorporated within the structure of CCN, giving rise to different effects on the crystal structure, morphology, and optical characteristics of the CCN. [6,13]
CN is generally crystallized in the presence of molten salts (also known as the ionothermal method) [14] or solid salts, [15] where the salts assist in inducing more complete polymerization of the in-plane heptazine-based melon chains, which subsequently brings about bolstered photocatalytic H2 evolution performance. Molten salts are capable of dissolving monomers and intermediates, making them a suitable liquid reaction medium that can remain stable at high temperatures and operate beyond the upper limits of organic solvents. [16] This property thus enables the polycondensation of carbon materials, which generally necessitates high temperatures. Different combinations of salts (for example, LiCl/KCl, NaCl/KCl) lead to varying melting points of the molten salts, which subsequently affect the polymerization process and the ensuing CCN. [14,16] In addition to the optimization of CN's inherent properties, incorporating another semiconductor or metal into CN to form a heterostructure is also one of the effective methods to facilitate charge segregation for enhanced photocatalysis. [17] In view of this, explosive interest has been devoted to introducing a variety of cocatalysts into CN for accelerating charge transport and separation. [4] In recent years, MXenes, a family of 2D transition metal carbides, nitrides, and carbonitrides, commonly obtained from selective etching and exfoliation of their parent MAX phases, have received growing attention owing to their multifarious intriguing and distinctive characteristics, such as remarkable conductivity, suitable Fermi level, ease of heterojunction construction, and numerous surface terminations. [18] On the one hand, the extraordinary metallic conductivity and appropriate Fermi level bestow on MXenes the ability to function as an electron reservoir, where the formation of a Schottky junction between MXenes and n-type semiconductors is beneficial for efficient charge transport and segregation, and hence boosted photoactivity. [19] On the other hand, the abundant functional groups on the surface of MXenes drastically raise the density of active sites, which is conducive to the adsorption and activation of the desired reactant molecules and the subsequent redox reaction. [20] Meanwhile, the firm interface established between MXenes and semiconductors favors charge migration across the heterojunction interface, which dramatically enhances photocatalytic capacity. [21] Stemming from all these excellent features, MXenes exert striking potential in photocatalytic reactions.
Taking into account the unique characteristics of MXenes, Ti3C2Tx-based MXenes have been extensively used for enhancing photocatalytic activity in recent years. [22] For example, Liu et al. constructed a g-C3N4/Ti3C2Tx/Pt heterojunction for photocatalytic hydrogen evolution and contaminant degradation, where the as-synthesized catalysts achieved an H2 production rate of 1948 μmol g−1 h−1. [23] Although the photocatalytic performance of CN/TCT has been improved due to the formation of the heterojunction and the presence of TCT serving as an effective cocatalyst, the potential of CN/TCT has not been fully exploited because the amorphous structure of CN restricts efficient charge transport and separation, thereby limiting the photoactivity. In this regard, coupling MXene with CN of high crystallinity is a propitious strategy for surmounting this bottleneck. Inspired by the significance of the crystallinity of CN and the astounding attributes of MXenes, we have successfully devised a CCN/TCT MXene heterostructure and investigated its use as a photocatalyst for hydrogen production. The as-prepared CCN/TCT hybrids displayed a robust and significantly enhanced hydrogen production rate in comparison with pristine CN and CCN. In particular, the hydrogen generation rate of the optimal sample, in this case CCN loaded with 0.5 wt.% Ti3C2Tx, was 2651.93 μmol g−1 h−1, which was 43.48- and 1.09-fold higher than that of bare CN and CCN, respectively. The ameliorated charge transfer dynamics and enhanced photoconversion efficiency originated from the synergistic effects of the highly crystalline structure of CCN and the distinguished conductivity as well as abundant active sites offered by Ti3C2Tx and Pt.
Synthesis of Amorphous g-C3N4 Nanosheets (CN)
6 g of urea was placed in a crucible with a lid, heated to 550 °C at a rate of 10 °C min−1 in an air atmosphere, and maintained at 550 °C for 2 h. Upon cooling down, the as-acquired g-C3N4 was ground to obtain it in powder form.
Synthesis of Crystalline g-C3N4 Nanosheets (CCN)
CCN was prepared via the ionothermal method. Initially, 10 g of urea was mixed and ground with 10 g NaCl, 9.5 g KCl, and 0.5 g KOH at a mass ratio of 20:20:19:1 (urea:NaCl:KCl:KOH) for 15 min in a mortar. Subsequently, the mixture was transferred to a covered crucible, heated to 550 °C at a rate of 10 °C min−1 in an air atmosphere, and maintained at 550 °C for 2 h. Upon cooling down to room temperature, the product was washed several times with de-ionized water at a temperature of ≈60 °C to remove the residual salts. Finally, the obtained powder was dried overnight at 60 °C.
Synthesis of Ti3C2Tx
Ti3C2Tx was prepared by first dispersing 2 g LiF in 40 mL of 9 M HCl solution and stirring for 5 min, followed by the slow addition of 2 g Ti3AlC2 powder into the solution over a 5 min period, and stirring at 40 °C for 60 h. The mixture was then washed with distilled water and centrifuged at 3500 rpm for 5 min until the pH reached ≈6. Next, the mixture was dried overnight at 60 °C under vacuum conditions.
Synthesis of Crystalline g-C3N4/Ti3C2Tx
100 mg of CCN was dispersed in 20 mL DI water and ultrasonicated for 1 h. 0.5 mg Ti3C2Tx was added into 20 mL DI water and ultrasonicated for 1 h. The Ti3C2Tx solution was then added dropwise to the CCN suspension over 5 min, and the mixture was stirred for 4 h to achieve a uniform suspension. The mixture was washed once and freeze-dried to obtain the composite, which was denoted as CCN/TCT-0.5. Later, a series of CCN/Ti3C2Tx heterostructures was prepared by varying the mass of Ti3C2Tx, based on the designed Ti3C2Tx to CCN mass ratios of 0.2, 0.3, 0.4, 1, 5, and 10 wt.%, which were denoted as CCN/TCT-0.2, CCN/TCT-0.3, CCN/TCT-0.4, CCN/TCT-1, CCN/TCT-5, and CCN/TCT-10, respectively. Figure 1 displays the synthetic process of CCN/TCT. CCN was prepared by the molten salt-assisted polymerization method with urea acting as the precursor, where NaCl/KCl and KOH were added to assist in the formation of nanosheets and crystallization, respectively. Meanwhile, the multi-layered Ti3C2Tx was obtained through the etching of Ti3AlC2 by LiF and HCl. The bulk Ti3C2Tx was subsequently exfoliated into few-layer ultrathin Ti3C2Tx nanosheets under sonication. CCN/TCT was acquired upon mixing for 4 h and freeze-drying.
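The Ti3C2Tx masses for the remaining loadings follow the same proportion as the 0.5 wt.% example (0.5 mg per 100 mg of CCN); a small sketch of that arithmetic, assuming the batch size stays at 100 mg of CCN for every loading.

ccn_mass_mg = 100.0
loadings_wt_percent = [0.2, 0.3, 0.4, 0.5, 1, 5, 10]

# Mass of Ti3C2Tx required for each designed Ti3C2Tx-to-CCN mass ratio.
for wt in loadings_wt_percent:
    tct_mass_mg = ccn_mass_mg * wt / 100
    print(f"CCN/TCT-{wt}: {tct_mass_mg:.1f} mg Ti3C2Tx per {ccn_mass_mg:.0f} mg CCN")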
Characterization
The crystalline phases, morphologies, and microstructures of the samples were characterized by X-ray measurement using a PANalytical X'Pert Pro with Cu Kα as the radiation source (λ = 1.54 Å), scanning electron microscopy (SEM), and transmission electron microscopy (TEM), respectively. The SEM images were obtained with a field-emission microscope (Carl Zeiss GeminiSEM 500-70-22), whereas the TEM was carried out using an FEI Tecnai G2 20 S-TWIN system. The Fourier-transform infrared (FTIR) spectra were measured on a PerkinElmer FTIR Frontier in the 4000 to 400 cm−1 range. The atomic environment and chemical composition were analyzed using X-ray photoelectron spectroscopy (XPS, ESCALab220i-XL) with a monochromatic Al Kα X-ray source. The C 1s peak at 284.9 eV was used as an internal standard. The optical properties were studied by UV-vis diffuse reflectance spectrophotometry (UV-vis DRS) in a Jasco V-750 spectrophotometer with clean fluorine-doped tin oxide (FTO) glass as the reflectance standard.
Photoelectrochemical (PEC) Measurements
The photo-electrochemical experiments were carried out in a standard three-electrode system at room temperature (27 °C) with an electrochemical workstation equipped with a 300 W Xe lamp. The reference, counter, and working electrodes were Ag/AgCl, a Pt wire, and FTO coated with catalyst, respectively. The working electrode was prepared by dropping 10 μL of suspension onto the conductive side of fluorine-doped tin oxide (FTO) glass over a 1 cm × 1 cm area. The detailed steps were as follows: the FTO glass was rinsed with ethanol several times to clean the surface. The suspension was prepared by dispersing 5 mg of catalyst in a mixture of 30 μL Nafion and 125 μL absolute ethanol and stirring until a homogeneous solution was obtained. Using a micropipette, 10 μL of the sample was loaded onto a clean FTO glass electrode with a 1 cm × 1 cm coated area and dried naturally. All these tests were performed in 0.1 M Na2SO4 aqueous solution (electrolyte). The Mott-Schottky (MS) plots were measured in the voltage range of −1 to 1 V at frequencies of 500, 750, and 1000 Hz. The electrochemical impedance spectroscopy (EIS) measurements were carried out in the same three-electrode configuration.
Photocatalytic Performance Evaluation
The photocatalytic HER was carried out in a quartz reactor with a closed gas circulation system (Figure 2). A water bath was used to maintain the reactor at a fixed temperature (25 °C). A 300 W Xe lamp with an ultraviolet cut-off filter (λ ≥ 420 nm) was used as the light source. 20 mg of the as-prepared samples was dispersed in 60 mL of 10 vol% TEOA aqueous solution with 0.534 mL of 0.995 g L−1 H2PtCl6·6H2O and sonicated for 30 min in the dark to form a uniform dispersion. Next, the whole reaction system was stirred and purged with N2 gas for 30 min to remove air and reach adsorption-desorption equilibrium between the catalysts and the TEOA solution in the closed gas circulation system, and then sealed. The reaction was carried out (300 W Xe lamp, λ ≥ 420 nm) for 4 h. The evolved H2 amount was determined by gas chromatography (Agilent GC-7890B, MolSieve 5A column, Ar as carrier gas) every 30 min. The apparent quantum efficiency (AQE) at 420 nm was calculated under the same experimental conditions using the standard expression reconstructed after this paragraph. The long-term stability test was performed under conditions identical to those of the photocatalytic HER.
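The AQE expression itself did not survive extraction; the standard form for a two-electron hydrogen evolution reaction, consistent with the experimental description above, is:

\[
\mathrm{AQE}\,(\%) = \frac{2 \times N_{\mathrm{H_2}}}{N_{\mathrm{photons}}} \times 100
\]

where N_H2 is the number of evolved H2 molecules and N_photons is the number of incident photons at 420 nm; the factor of 2 accounts for the two electrons consumed per H2 molecule.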
Morphology and Microstructure
The morphology and structure of the Ti3C2Tx were first characterized by FESEM. As indicated in Figure 3a and Figure 3b, the Ti3C2Tx after etching treatment exhibits a multi-layered structure due to the removal of the Al layer, [24] where a sheet-like structure is observed at the edge of the MXene. Upon sonication, an exfoliated ultrathin sheet-like structure is expected, which is beneficial in aiding the efficient mobility of photoexcited charge carriers. [25] During the fabrication process of CCN/TCT, CCN and TCT self-assemble through various forces, such as van der Waals interactions, and form a composite structure in solution during the 4 h stirring process. When the self-assembled structures are frozen and subjected to freeze-drying, the frozen solvent undergoes sublimation without disrupting the ordered arrangement, leaving the final product in a dry state while maintaining the structure and arrangement achieved during self-assembly.
To gain further insights into the structure of Ti3C2Tx, CCN, and the CCN/TCT hybrid, TEM and HRTEM measurements were performed, and the images are presented in Figure 3c-g. As depicted in Figure 3c, Ti3C2Tx exhibits a multi-layered sheet-like structure, which is aligned with the FESEM images of Ti3C2Tx (Figure 3a,b). The lattice fringe of Ti3C2Tx appears clearly in the inset of Figure 3c, where the lattice spacing of 1.25 nm corresponds to the characteristic (002) plane. [26] Meanwhile, the several-layer-thick structures with holey texture defects with pore sizes of 9.2-16.3 nm displayed in Figure 3d and Figure 3e belong to CCN. After introducing 0.5 wt.% Ti3C2Tx into CCN, the morphology of CCN is nearly unchanged (Figure 3f,g), since the loading amount of Ti3C2Tx is considerably low. The TEM images of the CCN/TCT-0.5 composite (Figure 3g,h) show that the porous nanosheets and nonporous nanosheets are stacked closely, with a distinct interface between them. The commendable interaction between CCN and Ti3C2Tx is further ascertained by the inset of Figure 3h, where the clear lattice fringe of Ti3C2Tx is detected in the 2D/2D layered CCN/TCT-0.5 heterostructure, similar to previous works on g-C3N4/Ti3C2Tx. [27]
Structure and Composition
In order to confirm the formation of Ti3C2Tx as well as CCN, the composition and crystallinity of the samples, namely Ti3C2Tx, Ti3AlC2, g-C3N4, CCN, and CCN/TCT-0.5, were characterized by X-ray diffraction (XRD) analysis. From Figure 4a, the sample with intense peaks is indexed to the Ti3AlC2 MAX phase. After the LiF/HCl etching treatment, the diffraction peak of the (002) plane at 9.6° is broadened and shifted to a lower angle of 5.8°, indicating an expansion of the c lattice parameter from 0.921 to 1.523 nm, which is consistent with the HRTEM image of Ti3C2Tx (Figure 3h). [28] The extended lattice spacing is attributed to the replacement of the Al layer with functionalities such as ─OH and ─F. [28] Meanwhile, the disappearance of the most intense peak of the (104) plane at 38.8°, which corresponds to the characteristic peak of Ti3AlC2, implies the complete removal of the Al layer and the successful transformation of Ti3AlC2 to Ti3C2Tx. [29] Moreover, the overall peak intensities of Ti3C2Tx are weaker than those of Ti3AlC2, which originates from the thinner layered structure of Ti3C2Tx. [30] Besides, as implied in Figure 4a, the amorphous sample of g-C3N4 shows two typical diffraction peaks at 12.9° and 27.4°, ascribed to the (100) in-plane distance of the nitrogen-linked heptazine units and the (002) interlayer π-π stacking, respectively. [31] For crystalline carbon nitride, the (100) peak shifts to 7.9°, indicating an extended in-plane arrangement distance from 0.686 to 1.118 nm. The enlarged intralayer packing of polymeric melon units stems from the presence of Na and K atoms, whose atomic radii are larger than those of C and N, within the heptazine structure. [32] This shows that the CCN exhibits enhanced crystallinity with an unfolded in-plane framework associated with sufficient condensation of the conjugated network. [33] Meanwhile, the main peak of the (002) facet moves from 27.4° to 27.6°, corresponding to a narrowed interlayer spacing from 0.325 to 0.323 nm. [32] The reduced interlayer distance is attributed to the introduction of Na and K atoms, where the intercalated metal ions disrupt the ordered periodic stacking of the carbon nitride structure and strengthen the interlayer coupling by coordinating with adjacent N atoms, thereby resulting in a compacted packing and enhanced crystallinity of the melon framework.
[25,34] Overall, the close interlayer packing is advantageous for the transfer of photogenerated charge carriers between the layers, and for the subsequent photocatalytic activity. [34a] In addition, the full width at half-maximum (FWHM) of the (002) peak of the CCN sample (2.66°) is narrowed compared to that of g-C3N4 (2.71°), which again signifies the ameliorated crystallinity of CCN over bulk g-C3N4. Based on the XRD results and literature studies, [13,15] the hypothetical in-plane and interlayer structures of g-C3N4 are shown in Figure 4b,c, respectively, whereas Figure 4d,e displays the postulated structures of CCN. As a whole, the higher crystalline degree of CCN provides a higher separation and transfer efficiency of photoexcited charges, which is propitious to the enhancement of photocatalytic performance, [35] and this statement will be affirmed by the EIS plots in Section 3.4. It should be noted that the CCN/TCT composite presents XRD patterns similar to those of CCN (Figure 4a), manifesting that the condensation of melon units is not influenced by the existence of Ti3C2Tx. Further observation shows that the characteristic peak intensities of CCN at 7.9° and 27.6° in CCN/TCT are slightly lower than those of bare CCN due to the presence of Ti3C2Tx. Notably, no apparent diffraction peaks of Ti3C2Tx are observed, which is attributed to the low content and high dispersity of Ti3C2Tx, [22b] such that the introduction of Ti3C2Tx has a negligible effect on the crystal structure of CCN in the CCN/TCT heterojunction. According to the Scherrer equation, the average crystallite sizes of Ti3AlC2, Ti3C2Tx, g-C3N4, CCN, and CCN/TCT are determined to be 39.08, 25.83, 3.74, 10.63, and 11.05 nm, respectively. The larger crystallite sizes observed in CCN and CCN/TCT compared to g-C3N4 confirm the enhanced crystallinity resulting from the ionothermal treatment and the incorporation of Ti3C2Tx, respectively.
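A minimal sketch of the Scherrer estimate referenced above; the shape factor K = 0.9, λ = 0.154 nm, and the example peak parameters are illustrative assumptions (the authors' exact inputs and any instrumental-broadening correction are not given), so the printed value is not expected to reproduce the reported sizes.

import math

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float,
                     wavelength_nm: float = 0.154, k: float = 0.9) -> float:
    """Crystallite size D = K*lambda / (beta * cos(theta)), with beta in radians."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical example: a peak at 2theta = 27.6 deg with FWHM = 0.8 deg.
print(f"{scherrer_size_nm(0.8, 27.6):.2f} nm")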
FTIR spectra were used to reveal the chemical groups and structures of CCN. As shown in Figure 4f, the FTIR spectra of all the samples are very similar, indicating that the g-C3N4, CCN, and CCN/TCTs share identical chemical groups and structures. The main absorption bands at 3000-3500 cm−1 are caused by the ─OH stretching vibration of surface-adsorbed water molecules as well as the vibration of amino groups (─NH2 or ─NH) in either terminal or residual parts. [36] While the fingerprint signals at 900-1600 cm−1 represent the stretching vibration modes of C─N and C═N of the conjugated triazine heterocyclic ring, [36] the characteristic band within 700-800 cm−1 is assigned to the out-of-plane bending vibrations of sp3 C─N bonds in the heptazine ring. [37] Furthermore, the distinctive peaks of CCN and CCN/TCTs at around 2174 cm−1 are related to the presence of cyano groups (─C≡N) with high electron-withdrawing ability, which originate from the deprotonation of ─C─NH2, [38] in other words, the partial decomposition of heptazine units during the ionothermal process. This peak is an indication of the formation of a huge number of surface defects in the CN networks when CCN is prepared via the molten salt method. [39] This result is further ascertained by the lowered peak intensities within 3000-3500 cm−1 of CCN as compared to g-C3N4, revealing that OH− groups released from KOH during the thermal polymerization process react with ─NH groups to generate cyano groups. [40] Besides, the occurrence of a new peak at 990 cm−1 corresponds to the symmetric and asymmetric vibrations of NC2 bonds in metal-NC2 groups, [41] owing to the incorporation of metal ions between the heptazine-based melon chains. [15,41] Moreover, it is noteworthy that there are no distinct peaks of Ti3C2Tx, and the frameworks of all the CCN/TCTs are similar to that of CCN, implying that the signals in the hybrids come mainly from CCN. The absence of extra peaks upon incorporating Ti3C2Tx further verifies that the combination with Ti3C2Tx has no effect on the structure of CCN, owing to the low loading amount of Ti3C2Tx. Nonetheless, it is observed that the intensities at 3000-3500 cm−1 are lowered after the introduction of Ti3C2Tx into CCN, manifesting that the ─NHx groups on the edges serve as linkers that form bonds between Ti3C2Tx and CCN. [42] Meanwhile, after the hybridization of CCN with Ti3C2Tx, a slight shift toward higher wavenumbers is observed in the characteristic peaks of the CCN/TCT hybrids (Figure 4g). This signifies that new chemical bonding is formed between CCN and Ti3C2Tx, where the chemical interactions favor directional electron migration between CCN and Ti3C2Tx. [10] To further probe the elemental compositions and chemical states of CCN/TCT, XPS analyses were carried out. As depicted in the XPS survey spectrum (Figure 5a), signals corresponding to the elements C, N, O, Na, K, and Ti are detected, confirming the existence of both CCN and Ti3C2Tx in the CCN/TCT hybrids. In the high-resolution C 1s spectra of CCN/TCT (Figure 5b), three distinct peaks located at 284.9, 287.0, and 288.5 eV are observed, which are assigned to C─C/C═C from carbon contaminants, C≡N/C─NHx, and sp2-bonded carbon (N─C═N), respectively. [38] In the N 1s spectrum (Figure 5c), the signals are deconvoluted into three peaks, in which the peaks at 398.8, 400.4, and 401.3 eV originate from C─N═C, N─(C)3, and C─N─H, respectively.
[30] The results of the C 1s and N 1s spectra prove the formation of the heterocyclic heptazine structure of g-C3N4 in the CCN/TCT sample. Meanwhile, three oxygen peaks are observed in the O 1s spectrum of the CCN/TCT-0.5 sample (Figure 5d). While the peak located at 531.9 eV is correlated to C═O, the peaks at ≈533.1 and 535.5 eV represent C─O and chemisorbed water molecules on the surface of the sample, respectively. [43] Apart from that, the XPS investigations verify the presence of Na and K, as displayed in Figure 5e,f, respectively. As indicated in Figure 5e, the binding energy of 1071.4 eV arises from Na+, [33a] revealing the successful incorporation of Na in the CCN/TCT hybrids, which corresponds well with the XRD results (Figure 4a). Two peaks are fitted in the K 2p spectrum (Figure 5f) at around 293.3 (K 2p3/2) and 296.1 eV (K 2p1/2) with a doublet separation energy of 2.8 eV, demonstrating that K+ is introduced into the composite. [36,44] Considering the superior conductivity of the metals, the integration of Na and K in the composite is beneficial to the transfer of charge carriers and the enhancement of the photocatalytic activity. [36] On the other hand, it is noteworthy that the characteristic peaks of TCT are not clearly seen in the XPS survey spectrum (Figure 5a), which is associated with the low loading content of TCT in the hybrid. Nonetheless, the presence of the Ti element in CCN/TCT is further verified by the high-resolution Ti 2p spectrum (Figure 5g). The Ti 2p spectrum is deconvoluted into four peaks, in which 454.0 and 460.9 eV are related to the Ti 2p3/2 and Ti 2p1/2 of the Ti─C bond, respectively, whereas 458.4 and 463.9 eV belong to the Ti 2p3/2 and Ti 2p1/2 of the Ti─O bond, respectively. [45]
Light Absorption Properties
The optical properties of the samples were investigated using UV-vis DRS spectra (Figure 6a). Figure 6a indicates the optical absorption of pristine CN, CCN, and a series of CCN/TCTs. Between wavelengths of 300 and 450 nm, both CN and CCN demonstrate high absorption intensities, with CCN displaying superior light-harvesting capability. It is observed that the absorption edge of CCN experiences an obvious red shift in comparison with CN, which is consistent with the slightly darker yellow appearance of the CCN sample. The redshift is attributable to the increased π-electron delocalization with enhanced structural condensation of CCN. [41] At wavelengths beyond 450 nm, both CN and CCN exert lower light absorption ability. Besides the expanded absorption in the visible region, CCN demonstrates elevated light absorption in the UV region, as evidenced by the higher absorption intensities of CCN than those of CN (Figure 6a). As a rule of thumb, the light-harvesting ability of a conjugated polymer depends on its structural rigidity. [14] In view of this, the bolstered light absorption of CCN originates from the increased chain stiffness of the highly crystalline structure of CCN with enhanced interactions between subunits. [14] Upon coupling CCN with Ti3C2Tx, boosted absorption intensities are observed in the DRS spectra of the CCN/TCT hybrids with respect to those of CN and CCN (Figure 6a). With increasing Ti3C2Tx blending amount, the color of the hybrid powders gradually darkens and the light-harvesting ability of the composites becomes stronger, [26] as evidenced by the higher absorption peak intensities in Figure 6a. Meanwhile, with increasing Ti3C2Tx loading, the CCN/TCT composites exert a slight red shift as compared to that of CN and CCN, implying greater visible-light absorption of the hybrids. This is attributable to the deeper color of the samples, which is beneficial for photocatalytic performance. However, excessive loading of Ti3C2Tx on CCN will block the active sites while exerting a light-shielding effect, which will adversely impact the photocatalytic performance. [10] The band and electronic structures are vital in determining the photocatalytic reaction. As such, the band gap energies (Eg) of CN and CCN are estimated from Tauc plots calculated via the Kubelka-Munk function, where the Eg values of CN and CCN are determined to be 2.78 and 2.71 eV, respectively (Figure 6b). Overall, CCN exhibits a narrower band gap in comparison with CN, indicating the enhanced light absorption ability of CCN, in line with the observation in Figure 6a. The steeper absorption edges (Figure 6a) and narrower band gaps (Figure 6b) again stem from the enhanced conjugated structure with elevated in-plane crystallinity. [41] On the other hand, the band gap energies are calculated to be 2.69 eV for both CCN/TCT-0.5 and CCN/TCT-1, whereas both CCN/TCT-5 and CCN/TCT-10 record the same Eg of 2.70 eV (Figure 6b). The Eg of all CCN/TCT samples is similar to that of CCN, revealing that loading Ti3C2Tx onto CCN has no substantial effect on the band gap energy. Similar results have been demonstrated by Yang's team [46] and Tahir's group, [47] where the introduction of Ti3C2Tx into g-C3N4 did not change the Eg of g-C3N4; instead, the light absorption ability was enhanced due to the dark color of Ti3C2Tx.
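A rough sketch of how Eg is read from a Tauc plot by extrapolating the linear rise of (F(R)hν)^(1/2) to the energy axis for an indirect-gap material such as g-C3N4; the data below are synthetic, and the choice of linear region is a simplistic assumption rather than the authors' procedure.

import numpy as np

# Synthetic (hv, F(R)) pairs standing in for measured diffuse-reflectance data.
hv = np.linspace(2.4, 3.4, 200)                  # photon energy, eV
f_r = np.clip(hv - 2.71, 0, None) ** 2 / hv      # fabricated Kubelka-Munk signal

# For an indirect-gap semiconductor, plot (F(R)*hv)^(1/2) versus hv and
# extrapolate the linear rise to the energy axis; the intercept estimates Eg.
y = (f_r * hv) ** 0.5
linear = (y > 0.2) & (y < 0.6)                   # crude pick of the linear region
slope, intercept = np.polyfit(hv[linear], y[linear], 1)
print(f"estimated Eg = {-intercept / slope:.2f} eV")  # x-intercept of the fit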
To elucidate the energy band structure, Mott-Schottky analysis was employed to examine the conduction band of the samples. [22a,46] From Figure 6c, the flat band potentials of CN and CCN are measured to be −1.10 and −1.02 V (versus Ag/AgCl, pH = 6.7), respectively, which are equivalent to −0.51 and −0.43 V (versus RHE), according to the conversion of the Ag/AgCl electrode potential to the reversible hydrogen electrode potential, E_RHE = E_Ag/AgCl + 0.0591 × pH + E°_Ag/AgCl, where pH = 6.7 and E°_Ag/AgCl = 0.197 V at 25 °C. [48] Generally, the conduction band potential of n-type semiconductors is taken to be 0.1 to 0.3 V more negative than the corresponding flat band potential (E_fb), since the exact doping concentration is unknown. [49] Therefore, the E_CB of CN and CCN are calculated to be −0.71 and −0.63 V (versus RHE), respectively. These conduction band potentials are similar to literature values, where the E_CB of g-C3N4 has been reported to range from −0.56 to −1.29 V (versus RHE). [26,46,49,50] The formula E_VB = E_CB + E_g is used to compute the valence band potentials (E_VB) of the photocatalysts. Figure 6d depicts the schematic band structures of CN, CCN, and all the CCN/TCT samples. As observed, the CCN prepared in the presence of molten salts presents a positive shift in E_CB as compared to pristine CN, which is consistent with the results reported by Zhang's group [51] and implies a slightly reduced, though still sufficient, reductive driving force of the photoexcited electrons of CCN for proton reduction. Meanwhile, the E_VB of CCN shifts to more positive values as compared to CN, signifying the elevated oxidative ability of the photogenerated holes in driving oxidation reactions.
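The band-edge arithmetic above can be checked directly. The following Python sketch reproduces the stated conversions; note that the 0.2 V offset between E_fb and E_CB is our choice of the midpoint of the 0.1-0.3 V range quoted in the text, not a value the authors specify:

```python
# Minimal sketch reproducing the band-position arithmetic in the text.
# Assumption: a 0.2 V offset (the midpoint of the 0.1-0.3 V range the
# authors quote) is used to convert the flat-band potential to E_CB.

E0_AG_AGCL = 0.197  # V, Ag/AgCl reference contribution at 25 C (from text)
PH = 6.7            # electrolyte pH stated in the text

def ag_agcl_to_rhe(e_ag_agcl):
    """Convert a potential measured vs Ag/AgCl to the RHE scale."""
    return e_ag_agcl + 0.0591 * PH + E0_AG_AGCL

samples = {  # name: (flat-band potential vs Ag/AgCl in V, band gap in eV)
    "CN": (-1.10, 2.78),
    "CCN": (-1.02, 2.71),
}

for name, (e_fb, e_g) in samples.items():
    e_fb_rhe = ag_agcl_to_rhe(e_fb)  # -0.51 V (CN), -0.43 V (CCN)
    e_cb = e_fb_rhe - 0.2            # assumed 0.2 V offset (see above)
    e_vb = e_cb + e_g                # E_VB = E_CB + E_g
    print(f"{name}: E_fb = {e_fb_rhe:+.2f} V, E_CB = {e_cb:+.2f} V, "
          f"E_VB = {e_vb:+.2f} V (vs RHE)")
```

Running this reproduces the −0.71/−0.63 V conduction band values quoted in the text and yields valence band potentials of about +2.07/+2.08 V (versus RHE) for CN/CCN.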
The effect of Ti3C2Tx loading on the band structures of CCN was investigated as well. Based on the E_g and E_CB obtained from Figure 6b,c, respectively, the energy band positions of the CCN/TCT composites are illustrated in Figure 6d. Notably, the E_CB of the CCN/TCT hybrids displays a positive shift compared to pristine CCN (Figure 6c) with increasing Ti3C2Tx loading. This shift implies a reduction in the energy barrier for hydrogen reduction, facilitating electron transfer and augmenting the hydrogen evolution reaction. Concurrently, the E_VB of CCN/TCTs also becomes more positive with the addition of Ti3C2Tx, suggesting enhanced suitability for driving oxidative reactions. Overall, these findings suggest a synergistic effect between CCN and Ti3C2Tx, with Ti3C2Tx serving as an electron reservoir and boosting efficient charge transfer and separation. This synergistic effect is further supported by the discussion in Section 3.1 (Figure 3h), where the formation of a commendable interface between CCN and Ti3C2Tx is observed. Specifically, when CCN and Ti3C2Tx come into intimate contact, the photogenerated electrons transfer from CCN to Ti3C2Tx, establishing a favorable interface at the heterojunction. [22b]
Photoelectrochemical Properties
To gain insights into the effect of crystallinity in enhancing the charge separation performance of CN, electrochemical tests were conducted.As shown in Figure 7, CCN presents a smaller arc radius than that of CN, signifying that the in-plane crystallinity of CCN, as verified in the XRD (Figure 4a) and FTIR plots (Figure 4f), is propitious to lowering the charge transfer resistance and bolstering charge mobility within the photocatalysts.Besides, the K + intercalated between the melon chains in CCN functions as an electron bridge that facilitates the carrier transfer between adjacent molecular frameworks, such that the charge transfer efficiency in CCN is superior to that of CN. [14] As such, the smaller arc radius of CCN (Figure 7) suggests its potential for enhanced photocatalytic performance, which will be further affirmed in Section 3.5.
On the other hand, Ti3C2Tx exhibits the smallest arc radius (Figure 7) owing to its remarkable electronic conductivity. [18] Upon coupling with Ti3C2Tx, CCN/TCT-0.5 displays a smaller EIS radius compared to pristine CCN, demonstrating that an appropriate loading of Ti3C2Tx contributes to reduced charge transfer resistance and heightened interfacial charge transport, favoring efficient separation of photogenerated charge carriers. [22a] Although metallic Ti3C2Tx brings high electron mobility and excellent conductivity to CCN/TCT, it should be noted that excessive loading of the Ti3C2Tx cocatalyst creates recombination centers and induces a light-shielding effect, which hampers the charge separation and photocatalytic activity. [26] From Figure 7, it is observed that the interfacial charge transfer efficiency of the composites decreases when the loading amount of Ti3C2Tx exceeds 0.5 wt.%. In particular, CCN/TCT with 0.5 wt.% Ti3C2Tx exhibits the smallest impedance radius among the composites, indicating that this loading amount is optimal for promoting charge separation and migration in the hybrid system. Beyond this value, a further increase in the cocatalyst content leads to weaker charge separation performance, as evidenced by the expanding radius of the other CCN/TCTs with increasing Ti3C2Tx amount in Figure 7. Overloading semiconductor photocatalysts with cocatalysts generates recombination centers and brings about a light-screening effect, ultimately harming the photocatalytic capacity. [10]
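As a concrete illustration of how the "arc radius" comparison maps onto a charge-transfer resistance, the sketch below fits a circle to Nyquist data by algebraic least squares and reports the semicircle diameter as an estimate of R_ct. This is a generic, hypothetical analysis with placeholder data, not the fitting procedure used by the authors:

```python
import numpy as np

def fit_rct(z_re, neg_z_im):
    """Estimate R_ct as the diameter of the Nyquist semicircle.

    Algebraic (Kasa) circle fit of x^2 + y^2 + a*x + b*y + c = 0 to the
    points (Z_re, -Z_im); in a simple Randles picture the fitted
    diameter approximates the charge-transfer resistance R_ct.
    """
    x, y = np.asarray(z_re, float), np.asarray(neg_z_im, float)
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)
    radius = np.sqrt((a / 2) ** 2 + (b / 2) ** 2 - c)
    return 2.0 * radius

# Placeholder data: an ideal semicircle for R_s = 10 ohm, R_ct = 100 ohm,
# plus a little noise; real spectra would come from the potentiostat.
rng = np.random.default_rng(0)
theta = np.linspace(0.05, np.pi - 0.05, 40)
z_re = 10 + 50 * (1 + np.cos(theta)) + rng.normal(0, 0.5, theta.size)
neg_z_im = 50 * np.sin(theta) + rng.normal(0, 0.5, theta.size)
print(f"estimated R_ct ~ {fit_rct(z_re, neg_z_im):.1f} ohm")  # ~100 ohm
```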
Photocatalytic Activity
The photocatalytic H2 evolution reaction was performed in a TEOA aqueous solution composed of 6 mL TEOA, 54 mL DI water, 0.534 mL H2PtCl6·6H2O, and 20 mg of photocatalyst, using a Xe lamp as the light source (λ > 420 nm). Figure 8a compares the photocatalytic H2 formation of CN, CCN, and a series of CCN/TCTs over the course of 4 h of reaction time. Inspiringly, CCN displays impressive activity toward hydrogen production, with a rate of 2434.44 μmol g−1 h−1, around 39.92 times higher than that of pristine CN. The improved photocatalytic performance of CCN compared to CN is assigned to the more complete condensation of its conjugated framework, as the unreacted amino groups that act as recombination sites are significantly reduced. [43] This accords with the narrower (100) and (002) peaks in the XRD curves (Figure 4a) and the lowered peak intensities of ─NH groups in the FTIR plots of CN and CCN (Figure 4f). The improved crystallinity of CCN facilitates charge separation and transfer, [53] such that more highly reactive electron-hole pairs are able to partake in the surface reactions. Meanwhile, the suitable band position of CCN (Figure 6d) enables it to harvest a wide range of visible light to excite more electrons and accelerate charge separation to drive the photocatalytic reduction reaction. On the other hand, the broad band gap (Figure 6d) and low crystallinity of CN restrict the efficient formation and separation of photoexcited charges, [53b] which leads to low hydrogen production. As a result, CCN, with its excellent hydrogen formation efficiency (Figure 8a), performs better in the H2 evolution reaction than CN. This result corresponds well with the smaller arc radius of CCN compared to that of CN in the EIS plots (Figure 7), as discussed in Section 3.4.
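As a quick consistency check on the quoted enhancement factor, the H2 evolution rate of pristine CN implied by these numbers (our back-calculation; the text does not state the CN rate at this point) is:

```latex
r_{\mathrm{CN}} \approx \frac{r_{\mathrm{CCN}}}{39.92}
  = \frac{2434.44\ \mu\mathrm{mol\ g^{-1}\ h^{-1}}}{39.92}
  \approx 61.0\ \mu\mathrm{mol\ g^{-1}\ h^{-1}}.
```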
To further enhance the performance of CCN in light-driven hydrogen generation, Ti3C2Tx is integrated into CCN to form CCN/TCT hybrids. The photocatalytic hydrogen evolution performance of the as-prepared CCN/TCTs has been assessed with different loading amounts of Ti3C2Tx to identify the optimal loading content under visible light illumination, with TEOA as the hole scavenger and Pt as the cocatalyst (Figure 8a). Compared to CCN/Pt, CCN/TCT-0.5/Pt demonstrates an improved H2 generation rate. When pure CCN is used as the photocatalyst, a lower H2 yield of 2434.44 μmol g−1 h−1 is attained due to the low separation efficiency of photoinduced charge carriers. [67] This limited separation results in only a fraction of effective electrons being capable of activating the conversion of protons into H2. When 0.5 wt.% Ti3C2Tx is anchored with CCN/Pt, a magnificent H2 production rate of 2651.93 μmol g−1 h−1 is achieved (Figure 8a), where the augmented performance arises from the introduction of Ti3C2Tx, which aids in charge carrier separation and serves as reactive sites. [68] Meanwhile, a high AQE of 7.26% (420 nm) is achieved by CCN/TCT-0.5/Pt, manifesting its impressive photoactivity. The synergistic effect of the superior conductivity of Ti3C2Tx, the abundant active sites offered by Ti3C2Tx and Pt, and the close interfacial contact developed between CCN and Ti3C2Tx, as evidenced by the tight interface in the TEM images (Figure 3g,h), is conducive to facilitating charge carrier separation and migration within the composite. [69] Due to the high conductivity of Ti3C2Tx, the photogenerated electrons from CCN tend to flow to Ti3C2Tx. In turn, due to the difference in Fermi levels between Ti3C2Tx and Pt, the electrons held by Ti3C2Tx are prone to transferring to Pt, reaching a lower energy state. It has been reported that the presence of Pt facilitates the transfer of electrons accumulated in Ti3C2Tx to Pt, and this additional electron transfer pathway further intensifies the charge separation ability of the composite. [70] Once the electrons reach Pt, they partake in the hydrogen evolution reaction, whereas Ti3C2Tx continues to capture and transfer electrons, ensuring a continuous supply of electrons for the photocatalytic reactions occurring at the Pt sites. Aside from the abovementioned arrangement, Pt and Ti3C2Tx may be distributed separately, in which case the photogenerated electrons transfer directly into the cocatalysts and are stored inside them, also enhancing charge separation. Thus, the presence of Pt and Ti3C2Tx in the composites enables the spatial isolation of photogenerated electron-hole pairs, preventing undesired charge recombination in the photocatalysts and thereby allowing efficient charge separation. In the context of hydrogen production, both Ti3C2Tx and Pt provide adsorption and active sites for hydrogen reduction reactions due to their affinity toward protons. The captured electrons in both cocatalysts reduce protons to hydrogen gas at the active sites. Therefore, while the incorporation of Ti3C2Tx as an electron sink fosters charge separation, the collaborative synergy between the Ti3C2Tx and Pt dual cocatalysts, which provides increased surface reactive sites, further endows CCN/TCT-0.5/Pt with superior photoactivity.
Figure 8. b) Comparison of photocatalytic H2 evolution with literature systems 1)-14), citing 1)-3) [54]-[56], 4) [23], and 5)-14) [57]-[66], respectively. c) Photocatalytic hydrogen production rate as a function of illumination time over CCN/TCT-0.5. d) FTIR spectra of CCN/TCT-0.5 before and after the hydrogen production stability test.
However, increasing the loading amount of Ti3C2Tx beyond 0.5 wt.% adversely impacts the hydrogen generation activity (Figure 8a): the production of H2 decreases with increasing Ti3C2Tx amount. This result aligns with the charge separation performance of the composites shown in Figure 7, where CCN/TCT-0.5 exhibits the second smallest arc radius, signifying superior charge separation efficiency, whilst the other CCN/TCTs (CCN/TCT-y, y = 1, 5, 10) show an expanding arc radius with increasing Ti3C2Tx loading, which brings about reduced charge separation ability and hence a lower H2 yield. This phenomenon is associated with the excessive loading of cocatalysts, leading to the creation of recombination centers and a light-shielding effect. [10] Among all the CCN/TCT/Pt composites, the hybrid integrated with 0.5 wt.% Ti3C2Tx demonstrates the highest H2 production rate, signifying that 0.5 wt.% is the optimal loading amount of the Ti3C2Tx cocatalyst on CCN for the photocatalytic HER. Meanwhile, the decline in efficiency of CCN/TCT/Pt composites with loadings ranging from 0.2 to 0.4 wt.% can be attributed to several factors, such as possible aggregation of Ti3C2Tx particles within the CCN matrix and possible inhibition of the reaction by Ti3C2Tx at lower loadings. These limitations hinder the catalytic activity of CCN/Pt composites loaded with 0.2 to 0.4 wt.% Ti3C2Tx, leading to diminished production rates compared with pure CCN/Pt and CCN/TCT-0.5/Pt. As a whole, the superior performance of CCN/TCT-0.5/Pt underscores the importance of optimizing the Ti3C2Tx loading amount for maximizing the catalytic performance of CCN composites. Further research is essential to elucidate the underlying mechanisms governing the interactions between CCN and Ti3C2Tx to unlock the full potential of these composites in various applications.
In Figure 8b, a comparison of various catalysts for hydrogen evolution is presented. Generally, precious metals including platinum, ruthenium (Ru), rhodium (Rh), palladium (Pd), and gold (Au) are used as cocatalysts on semiconductor photocatalysts to address the limitations of the photocatalysts. It is important to note that Ru, Rh, Pd, and Au often command higher prices than Pt, due to limited availability and higher production costs. As illustrated in Figure 8b, catalysts like Pd/g-C3N4, [54] Au/g-C3N4/BiVO4, [58] Ru-CoP/g-C3N4, [56] and Rh/NiFe-LDH, [63] which comprise these more expensive noble metals, are less economically favorable. Aside from this, the loading amount of Pt varies among different photocatalytic systems. Some systems with low Pt loading, such as 0.254 wt.% Pt/TiO2 [59] and 0.5 wt.% Pt/graphdiyne, [62] demonstrated lower performance in terms of H2 generation and AQE compared with the optimal sample in this work (1 wt.% Pt/0.5 wt.% Ti3C2Tx/CCN). Conversely, N-doped VC/C3N4 coupled with 2 wt.% Pt displayed a hydrogen production rate of 766 μmol g−1 h−1 and an AQE of 0.2% (420 nm), [55] while 3 wt.% Pt/CoFe2O4/ZnIn2S4 exhibited a remarkable HER performance, with a hydrogen evolution rate of 800 μmol g−1 h−1 and an AQE of 5% (420 nm). [65] Despite improving the photocatalytic activity, these high Pt loading systems (with loading amounts >1 wt.%) incur increased costs, limiting their viability for large-scale applications. It is noteworthy that Pt/Ti3C2(O,OH)x/Zn2In2S5 [66] showed a HER efficiency and AQE (2596.8 μmol g−1 h−1, 8.96% at 420 nm) comparable to the optimal catalyst in this work, but the higher loading amounts of Pt and MXene required in the Pt/Ti3C2(O,OH)x/Zn2In2S5 hybrid render the system less economically feasible. Therefore, considering both cost-effectiveness and performance, the catalyst developed in this work, CCN loaded with 1 wt.% Pt and 0.5 wt.% Ti3C2Tx, emerges as a superior or comparable option to the catalysts shown in Figure 8b. It delivers an impressive hydrogen production rate of 2651.93 μmol g−1 h−1 and a commendable AQE of 7.26% (420 nm). Overall, the combination of exceptional photocatalytic performance and cost-effectiveness positions it as a promising choice for photocatalytic hydrogen evolution applications.
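For context, the apparent quantum efficiency (AQE) values compared above are conventionally defined for a two-electron H2 evolution reaction as follows; this is the standard definition, not a formula reproduced from the paper:

```latex
\mathrm{AQE}(\lambda) = \frac{2\,N_{\mathrm{H_2}}}{N_{\mathrm{photons}}} \times 100\%
  = \frac{2\,n_{\mathrm{H_2}}\,N_A\,h\,c}{P\,t\,\lambda} \times 100\%,
```

where N_H2 is the number of evolved H2 molecules (n_H2 in moles), N_photons the number of incident photons at wavelength λ (e.g., 420 nm), P the incident light power, and t the irradiation time; the factor of 2 reflects the two electrons consumed per H2 molecule.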
The durability of a photocatalyst is a crucial parameter for the potential application of photocatalytic hydrogen production. To evaluate long-term stability in a continuous reaction system, it is more informative to conduct long-run activity tests rather than cyclic tests. A stable photocatalyst should maintain a constant rate of hydrogen evolution; any significant deviation in hydrogen generation after reaching a steady state indicates a decline in photocatalyst activity. In light of this, the long-term stability test result for the representative CCN/TCT-0.5 is presented in Figure 8c. Distinctly, the CCN/TCT-0.5 photocatalyst reaches a steady state of photocatalytic H2 generation (≈23 000 μmol g−1 cumulative yield) after 8 h of reaction. Thereafter, the reaction rate remains steady, without apparent change in hydrogen evolution at extended illumination times. This result signifies that the CCN/TCT-0.5 photocatalyst possesses remarkably stable performance even under long-run working conditions, highlighting its potential for continuous processes suitable for commercialization. Additionally, FTIR measurements were used to characterize the spent CCN/TCT-0.5 after the photocatalytic HER (Figure 8d). The overall FTIR spectra of the fresh and spent CCN/TCT-0.5 are similar, with little to negligible change between them, further manifesting its stable structure after the photocatalytic reaction.
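The ≈23 000 μmol g−1 figure reads most naturally as a cumulative yield rather than a rate; as a rough consistency check (our arithmetic, not the authors'), the 4 h average rate extrapolated over the 8 h induction period gives a comparable cumulative amount:

```latex
Q_{8\,\mathrm{h}} \approx 2651.93\ \mu\mathrm{mol\,g^{-1}\,h^{-1}} \times 8\ \mathrm{h}
  \approx 2.1 \times 10^{4}\ \mu\mathrm{mol\,g^{-1}},
```

which is within about 10% of the ≈23 000 μmol g−1 read from Figure 8c.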
To further demonstrate the potential of CCN/TCT-0.5/Pt, the photocatalytic performance of carbon nitride and its hybrids toward the H2 evolution reaction is compared with that of the optimal catalyst in this work (Table 1). As observed, g-C3N4/Ti3C2Tx hybrids without Pt as cocatalyst generally exerted subpar photocatalytic performance relative to carbon nitride-MXene systems with incorporated Pt. For instance, the p-g-C3N4/Ti3C2Tx prepared by Xu et al. [22c] achieved a H2 evolution rate of 727 μmol g−1 h−1, whereas the p-g-C3N4/Ti3C2Tx hybrid reported by Kang and coworkers [22b] attained a high H2 production rate of 982.2 μmol g−1 h−1. [22a,50a] It should be noted that the g-C3N4/Ti3C2/Pt prepared by Liu and coworkers [23] exhibited relatively high photocatalytic activity, recording a H2 production rate of 1948 μmol g−1 h−1 and an AQE of 3.83% (420 nm). The 3D interconnected structure, as well as the presence of Ti3C2 and Pt as cocatalysts, contributed to the improved photoactivity of g-C3N4/Ti3C2/Pt. A composite with a 3D interconnected structure is expected to have an elevated surface area compared with the CCN in the current work, which is usually beneficial for photoactivity, as the reactants have a higher probability of contacting the catalyst. Despite the fact that the CCN generated in this study may be inferior in this respect to the 3D g-C3N4 reported by Liu's team, CCN/TCT-0.5/Pt still outperforms g-C3N4/Ti3C2/Pt in terms of both H2 evolution rate (2651.93 μmol g−1 h−1) and AQE (7.26%, 420 nm), which is ascribed to the synergistic effect of CCN with high crystallinity and the Ti3C2/Pt dual cocatalysts that offer ample active sites and promote charge separation. It is worth mentioning that each g-C3N4/Ti3C2 hybrid possessed excellent stability (>15 h), indicating the general stability of carbon nitride/MXene composites. However, the stability of the other g-C3N4/Ti3C2 composites, aside from the catalyst in this study, was evaluated with cyclic tests, which focus on the reusability of catalysts rather than long-term stability. This limitation hinders a direct comparison of the catalysts' stability in long-run operation. Overall, the performance of CCN/TCT-0.5/Pt is superior to that of other similar hybrids, showcasing the vast potential of CCN/TCT to serve as an efficient system in light-driven catalysis processes.
Possible Charge Transport Mechanism
In light of the aforementioned discussion, the overall mechanism proposed for photocatalytic H2 evolution over the CCN/TCT/Pt heterostructure is illustrated in Figure 9. Benefitting from the extraordinary conductivity as well as the more negative E_F of Ti3C2Tx, when CCN and Ti3C2Tx come into contact, the electrons in CCN transfer to Ti3C2Tx to equilibrate the Fermi levels of the two materials. [70] The same applies to Pt. Under visible light illumination, the electrons in CCN are excited and jump from the VB to the CB of CCN, leaving an equivalent number of holes in the VB. The photogenerated electrons swiftly migrate across the intimate heterointerface to Ti3C2Tx and Pt, which serve as electron reservoirs. While the crystalline structure of CCN hinders the recombination of the separated photogenerated charges during transport, [48] it should be noted that even highly crystalline CCN retains unavoidable defects, at which some electrons and holes recombine during charge transport and migration, leaving a limited number of charges to partake in the photocatalytic reactions. Meanwhile, the presence of the electron reservoirs greatly restrains the photoexcited electrons that have migrated to the cocatalysts from flowing back, hence enabling an elevated number of strongly reductive electrons to participate in the photocatalytic HER. [42] Simultaneously, the photoexcited holes with strong oxidative capability that accumulate in the VB of CCN partake in the oxidation of the TEOA sacrificial agent. As a result, CCN/TCT-0.5/Pt exhibits a bolstered performance for light-driven H2 evolution.
Table 1 (partial, rows recovered from the text). Comparison of photocatalytic H2 evolution over carbon nitride/Ti3C2 systems. Columns: photocatalyst | sacrificial agent | light source | H2 rate (μmol g−1 h−1) | AQE | stability (h) | reference.
… | … | … | … | 1.61% (420 nm) | >60 | Dong et al. [22a]
2D/3D g-C3N4/Ti3C2 | 10 vol% triethanolamine | 300 W xenon lamp (λ > 420 nm) | 727 | NA | >15 | Li et al. [50a]
3D/2D g-C3N4/Ti3C2/Pt | 10 vol% triethanolamine | 300 W xenon lamp (λ > 420 nm) | 1948 | 3.83% (420 nm) | >15 | Liu et al. [23]
Crystalline carbon nitride/Ti3C2/Pt | 10 vol% triethanolamine | 300 W xenon lamp (λ > 420 nm) | 2651.93 | 7.26% (420 nm) | >15 | This work
Conclusion
In summary, CCN/TCT hybrids with improved crystallinity and charge transport capacity have been successfully prepared using a one-step calcination process assisted by alkaline salts, followed by a freeze-drying method. The effect of crystallinity on charge transfer and separation efficiency, as well as on the photocatalytic performance in the H2 evolution reaction, was evaluated, with the crystallinity of CCN ascertained by XRD, FTIR, and XPS. Overall, CCN exhibited superior photocatalytic activity to CN, as the more ordered arrangement of the heptazine-based melon chains has fewer recombination centers, thereby enabling rapid electron-hole pair separation and transfer, which is conducive to bolstering the surface redox reactions. On the basis
Figure 1. Schematic diagram of the synthesis of CCN/TCT.
Figure 2. Photocatalytic performance evaluation of carbon nitride and hybrid photocatalysts for HER.
Figure 4. a) XRD patterns of CCN/TCT-0.5, CCN, g-C3N4, Ti3C2Tx, and Ti3AlC2. Postulated b) in-plane and c) interlayer crystal structures of g-C3N4. Postulated d) in-plane and e) interlayer crystal structures of CCN. f) FTIR spectra of TCT, g-C3N4, CCN, and CCN/TCT hybrids with varying Ti3C2Tx loading amounts. g) Zoomed-in FTIR spectra of CCN and CCN/TCTs.
Figure 6. a) UV-vis DRS spectra, b) Tauc plots, c) Mott-Schottky plots, and d) schematic band structures of CN, CCN, and a series of CCN/TCT samples (CCN/TCT-y, where y is the loading amount of Ti3C2Tx) with varying Ti3C2Tx loading amounts. The inset of (a) shows photographs of the studied catalysts.
Figure 7. EIS Nyquist plots of the as-prepared samples of TCT, CN, CCN, and CCN coupled with varying Ti3C2Tx amounts.
"year": 2024,
"sha1": "136ac2e925f1a93bdf9539ba7d71a5fde489d4f4",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "61946a639978dd2524ed3b228323e28023456290",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Effects of Globalization and the Sharing Economy on the Intercultural Communication of the Young Generation
Research background: The benefits of globalization processes present a challenge for society, but they also carry risks for society itself. For example, disproportionate promotion of consumption and a consumerist lifestyle can destroy the social cohesion of individual cultures and the environment. As the globalization of the world's population grows, communication becomes an important means of stabilizing postmodern society: it can establish, or eliminate, the causes of communication noise, which can grow into conflict. All of this affects the global movement of people and intercultural communication, especially among the younger generation. Purpose of the article: The article points to the need to create a unified and fair communication platform so that the young generation can properly understand its position in the intercultural environment and not be manipulated by global communication managed by multinational companies. Methods: The main methods used are structured analysis and simple description of facts, together with synthesis and logical and deductive procedures, in order to identify and formulate the effects of globalization on the formation of the young generation's relationships. Findings & Value added: The article presents a description of the current behaviour of the young generation in the context of its possibilities for intercultural communication, together with a recommendation that information and knowledge become a source of economic growth.
Introduction
Globalization (a world without borders) has an important relationship to consumer society and to the changing position of a group of influential economic players such as China, Brazil, and India. These players are becoming much tougher competitors of the current leading economies, the US and the EU. For example, in normal times China's influence in the equity market has risen to a level close to that of the United States, although the relative impact of the United States becomes stronger in crisis periods. Nonetheless, China's bond market remains a negligible player. China's role may be interpreted as a "regional pull" factor, while that of the United States remains a key "global push" factor [1].
The global movement of people has led individual states to become multicultural societies consisting of members of different cultures. Contacts are constantly developing within international cooperation, which has contributed to the continuing geographic expansion of the labour market. It is therefore necessary to prepare this human capital not only from a specialist and linguistic point of view, but also with respect to the specific cultural characteristics of individual nations. This involves preparation for life in a multicultural reality and for the social, political, and economic aspects of interaction between people within a different cultural environment. [2] Differences in the cultural background and value structures of different ethnic groups pose significant risks of confrontation, which can be stimulated through global means of communication. However, it is desirable for the whole of society that different cultures get to know each other and create coherent, compromise-based relationships. That is why global communication must create a peaceful environment and be a means of bringing together and recognizing different cultures, rather than encouraging incitement and hostility. For example, in the late 1990s protests emerged against the effects of globalization, linked to rising unemployment caused by the relocation of production to countries with lower labour costs and to unequal subsidies for certain products, which were sold at dumped prices. [3,4] Politically, these processes were hidden under the so-called liberalization of international trade. Although countries are aware of the dangers (e.g., the loss of their own economic identity in certain sectors), global processes continue, with the positives rather than the negatives being presented to people. The debate about globalization therefore does not stop, and many authors hold different views on the matter. Some authors [5,6] focused on the relationship between the state and the globalizing market and spoke of the arrival of a qualitatively new situation: a global economy whose forces are undermining the position of the state. On the other hand, sceptics [7] have argued that the position of the state has not changed and that there can be no question of a homogeneous global market [8]. However the discussions are conducted, it is clear that this issue is crucial for the human population and must be continually studied and analysed from different perspectives.
From the point of view of intercultural communication, globalization is highly topical in connection with the progressive enlargement of the European Union and the current, ever stronger migratory movements. According to Welsch [9], the concept of interculturality is based on the traditional idea that cultures are a kind of island: strictly delimited and separate entities that can ignore, underestimate, or fight each other, or try to understand each other and exchange values, models, and ways of acting and living. The dependence between globalization and intercultural communication is the subject of research by many scholars [10][11][12][13][14][15].
Material and methodology
The main methods used in elaborating this topic were structured analysis and description of facts, together with synthesis and logical and deductive procedures, to identify the effects of global communication and the sharing economy on shaping the attitudes of young people and, through them, the opinion structure of postmodern society. The topic is organized according to the logic of the facts, deducing results from the analysis of professional scientific texts into a coherent whole that identifies the basic impacts of current communication in the global environment on the lifestyle of the young generation.
Results and discussion
The Internet and new digital media are a key environment for communication in today's postmodern society. Almost no industry can do without them; they have become part of our lifestyle and self-expression. However, their influence on people's mental lives is alarming. In neurobiology, the use of smart media is reflected in measurable changes in people's behaviour and thinking. Perception, thinking, experiencing, feeling, and acting leave memory traces in the brain that can now be imaged. These synapses, carrying electrical signals between nerve cells, have been observed to change since the turn of the millennium, when they were first imaged. The human brain develops through constant learning, yet time spent with digital media is a stagnant period for the brain. Over a long period of evolution the brain has constantly adapted to its environment, and this mismatch contributes to the diseases of civilization that are more common today. The mental and emotional spheres of society manifest themselves in mechanisms and processes that affect a person's cognitive performance, such as attention, speech development, and intelligence. It is clear that the media significantly influence emotional and socio-psychological processes, including moral and ethical attitudes and personal identity. This phenomenon is even referred to as digital dementia, which arises from uneven development of the brain. The current young generation is not learning so much as working with information that has already been created, and which may not be verified or true. Perception is narrowed to information that is passed on without becoming fixed in memory. [16]
Generation Z and the media
Generation Z is a very interesting generation. It is not just that it is a generation of digital natives; it is also important for the definition to realize that this is the last whole generation raised by digital immigrants and, at the same time, the first generation of interactive media. It lives on the border between online and offline, at a time when the online world is still evolving. Meyrowitz [17] points mainly to television, but both television and digital media are changing social reality. Previously, parents had the opportunity to check the content that children read in books. At present, children can find content on the Internet that would not have been considered appropriate in the era of the printed word. They grow up in an environment where, according to our previous research, parents of young children (up to six years of age) approach media education passively: they control the time spent watching audiovisual content and what children watch, but they do not use active media education to prepare children for work with the media. [18] Digital content thus comes as an "uninvited guest" into their homes, a place that is considered safe. Volek [19] describes the home as the place where our emotional experiences are strongest and where the basic formation of the self takes place.
Both authors point to the clear effects of the media on children and young people. From a global perspective, Generation Z can be characterized by the mainstream culture of most countries in Europe and the United States, especially in the large cities of these countries. Their strong economic environment creates conditions for higher production and higher consumption. People no longer buy just what they need to survive, but what they enjoy and what brings them pleasure and happiness. This is how most of society, including Generation Z, behaves. It can simply be stated that the media significantly support the development of marketing and communication skills and the growth of consumerism. The young generation develops a model of behaviour that can be characterized as "methodological collectivism": an individual pursues his or her own benefit, but his or her actions are influenced by the opinions and values of the environment. In essence, this means that the behaviour of an individual is not independent and is shaped by the environment in the form of various regulators, styles, and sanctions. [20] Generation Z is the first generation to be surrounded by interactive media and to grow up in a Web 2.0 environment; it is also referred to as the iGeneration. While their parents had separate devices for watching television, playing video games, playing music, making phone calls, and so on, this generation does all of these tasks on a single device that fits in a pocket. This high-tech era helps them to be effective both online and offline. Generation Z lives in both virtual and physical reality and has easy access to world events; it sees the world's problems and wants to find solutions. [21] Our new electric selves in the digital age go beyond the old boundaries of human experience; people have become electrified ("humans become electric"), and new disciplines are emerging, such as cyberpsychology, which explains human behaviour in cyberspace and in which the term "digital psyche" is used for what is human within the digital world. Suler points out that, thanks to the digital world, people are able to express and discover their inner characteristics, and adds that as we better understand the possibilities of cyberspace, we realize that a healthy individual is able to integrate offline and online life. But for now, we do not yet know this world well enough, and we must realize that we are in a situation comparable to the one before the birth of the universe. [22] It is clear that Generation Z is affected by strong global media pressure, driven by new information technologies and especially by social networks, which are becoming ever more accessible (Tables 1 and 2). For a comparison with the rest of the world, the use of social networks by Generation Z in the Czech Republic is shown in Table 3. Generation Z lives in a visual culture. At a time when the Internet is the most important source of information, rules change without a fixed order. The vast majority of this generation is connected to the Internet, including on their mobile phones. But because digitization in the Czech Republic is still developing, this generation is forced to communicate face-to-face and use "old approaches", especially where public authorities, health care, and education are concerned. It is thus on the border of two worlds: one online and more advanced, the other offline.
Of course, they try to take advantage of the simpler online path, which is why Generation Z often seems lazy. According to the DESI Index, the entire EU28 still has room for improvement in the overall use of the digital world, even though the young generation of the Czech Republic is, according to the statistics, above the EU28 average. This means that although the Czech Republic is generally lagging behind developed Europe, it is likely to catch up thanks to this generation. Experience from US research (Goldman Sachs Global Investment Research, 2015) shows that the lower income and higher debt of Generation Y are changing its approach to ownership, and Generation Z is gradually turning to renting and to buying goods as services, mainly music, luxury goods, and cars. The American economist Jeremy Rifkin claims that within a quarter of a century car sharing will be the norm, while individual car ownership will be an anomaly.
Generation Y avoids stereotypes, as evidenced by the massive growth of its interest in a healthier lifestyle and higher spending on sportswear. By contrast, Generation Z looks for non-standard paths that are not burdened by administrative tasks and personal contact. This can be interpreted as meaning that the sharing economy will be more acceptable to Generation Z than to Generation Y.
Generation Z and sharing economy
The sharing economy is a form of common consumption based on the mutual exchange of objects, which an owner provides for use to those who, for various reasons, do not want to pay to buy the object outright. The most common reasons are a low expected level of use or the user's financial situation. Lending within the family has long been the model of the sharing economy; the difference is that in a sharing economy strangers lend items to each other. In addition to objects (cars, real estate), the provision of services (e.g., one-off job offers) is also part of the sharing economy. Today, multinational corporations stand behind the sharing economy and have become strong competitors in many sectors. The subjects of business in a sharing economy can be divided into five areas. In the Czech Republic, the sharing economy has established itself more slowly than in the surrounding developed European countries (e.g., Italy, Spain, Great Britain). In 2017, only 7-8% of the population of the Czech Republic used shared services; currently it is around 10-15%. The sharing economy is based on information technology, where specific shared services are recommended through applications. It is an interesting form of interactive communication, based on trust and on references for the shared object. Generation Z can use the sharing economy much better thanks to its preference for searching for information in the online environment, in applications.
For Generation Z, the sharing economy is a beneficial economic phenomenon based on making new contacts without meeting in person. The trust placed in the shared service is the basis for self-regulation of the business provided, and it leads Generation Z towards economic independence and literacy. However, the sharing economy also has disadvantages. One of the most serious is that shared service providers often do not enjoy the legal protections available in traditional business relationships, and in the event of a breach of the rules the injured party does not have a sufficient legal basis to recover damages. Nevertheless, Deloitte estimates that the sharing economy generates around two billion crowns in the Czech Republic. The sharing economy in the Czech Republic is facing certain regulatory changes that aim to define more precisely the providers of shared objects and the objects of business themselves. This requires decisions:
• which services fulfil the characteristics of a business, or have the character of permitted or prohibited activities;
• which partial changes to the legal environment will draw more precise guidelines between entrepreneurship and other gainful activity;
• how to comprehensively address extra income in a sharing economy.
It follows that the sharing economy will develop as a new economic and business trend. For Generation Z, this is a great challenge and an opportunity to combine its focus on information technology with business or with the use of shared objects. The reason for this statement is that Generation Z avoids face-to-face communication and communicates primarily through online chat. It seeks information and communicates with new people through social networks and applications, which benefits the sharing economy, given that the vast majority of offers of shared items are hosted in applications. Generation Z also avoids public transport and looks for alternative services. [24,25]
Conclusion
Analysis of the communication behaviour of Generation Z has shown that it is both a result and a part of the effects of globalization. This generation is influenced by strong global media pressure, driven by new information technologies and especially by social networks, which are becoming ever more accessible. Its communication activity takes place mainly in the Internet environment, and it lives in a visual culture. It is the first generation surrounded by interactive media, growing up in a Web 2.0 environment, and it is referred to as the iGeneration. The high-tech era helps its members to be effective both online and offline. Generation Z lives in both virtual and physical reality and has easy access to world events. This supports the development of a sharing economy that makes full use of global information technology. Through applications, the sharing economy easily reaches the young generations Y and Z. There is a realistic assumption that Generation Z in particular will make greater use of sharing-economy offers, as ownership of objects will not be a priority for them. For Generation Z, the immediate fulfilment of its needs will be dominant, and this is great potential for a sharing economy.
"year": 2021,
"sha1": "a30726024528d4cf832c6bb38f975e853dc46b7e",
"oa_license": "CCBY",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2021/03/shsconf_glob20_05008.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "c34ac47a3094ed325c942b485e487d0c663e3523",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Sociology"
]
} |
In vitro human skin permeation of endoxifen: potential for local transdermal therapy for primary prevention and carcinoma in situ of the breast
Purpose: Oral tamoxifen, a triphenylethylene (TPE), is useful for breast cancer prevention, but its adverse effects limit acceptance by women. Tamoxifen efficacy is related to its major metabolites, 4-hydroxytamoxifen (4-OHT) and N-desmethyl-4-hydroxytamoxifen (endoxifen [ENX]). Transdermal delivery of these to the breast may avert the toxicity of oral tamoxifen while maintaining efficacy. We evaluated the relative efficiency of skin permeation of 4-OHT and ENX in vitro, and tested oleic acid (OA) as a permeation enhancer. Methods: 4-OHT, ENX, and estradiol (E2) (0.2 mg/mL; 0.5 µCi 3H/mg) were dissolved in 60% ethanol-phosphate buffer, with or without OA (0.1%-5%). Permeation through EpiDerm™ (MatTek Corp, Ashland, MA) and split-thickness human skin was calculated from the amounts of the agents recovered from the receiver fluid and skin, measured by liquid scintillation counting over 24 hours. Results: In the EpiDerm model, the absorption of 4-OHT and ENX was 10%-11%; total penetration (TP) was 26%-29% at 24 hours and was decreased by OA. In normal human skin, the absorption of 4-OHT and ENX was 0.3%, and TP was 2%-4% at 24 hours. The addition of 1% OA improved the permeation of ENX significantly more than that of 4-OHT (P < 0.004); further titration of OA to 0.25%-0.5% improved the permeation of ENX to a level similar to that of estradiol. Conclusion: The addition of OA to ENX results in favorably rapid delivery, equivalent to that of estradiol, a widely used transdermal hormone. The transdermal delivery of ENX to the breast should be further developed in preclinical and clinical studies.
Introduction
For more than three decades, oral administration of tamoxifen (TAM), a triphenylethylene (TPE), has been a standard component of the treatment of estrogen receptor-α (ERα)-positive breast cancer, 1 and, more recently, has been used for prevention in both pre- and postmenopausal women. 2 However, TAM is a prodrug that requires conversion by the phase I drug-metabolizing cytochrome P450 enzymes (CYP2D6 and CYP3A4/5) to its major antiestrogenic metabolites, 4-hydroxytamoxifen (4-OHT) and N-desmethyl-4-hydroxytamoxifen (endoxifen [ENX]), which have equivalent affinities for ERα, approximately 100× greater than that of TAM. Recent reports suggest that ENX is the dominant metabolite responsible for the therapeutic effect of TAM because of its greater abundance (10× higher than 4-OHT in serum) and its ability to cause proteasomal degradation of ERα. 3 The effectiveness of TAM may be compromised in about 33% of women because of enzyme polymorphisms, which result in decreased availability of ENX. In addition, long-term systemic exposure to TAM is associated with hot flashes, night sweats, and menstrual irregularity, as well as the more serious risks of thromboembolism and endometrial cancer. 6 Thus, the systemic delivery of TAM is problematic both in terms of efficacy, owing to inefficient metabolism, and in terms of toxicity, owing to high systemic exposure. However, in women with ductal carcinoma in situ (DCIS) and those at high risk for breast cancer, effective concentrations are required only in breast tissue; systemic exposure is redundant, and the side effects related to it may be largely avoided by transdermal delivery of active TAM metabolites through the breast skin, as suggested by Mauvais-Jarvis and others. [7][8][9] Promising results have been reported from a presurgical study of postmenopausal women with estrogen receptor (ER)-positive breast cancer. Topical application of 4-OHT gel to the breast skin resulted in inhibition of tumor cell proliferation to the same degree as that seen with the standard dose of oral TAM (20 mg/day), but with much lower plasma levels, 2%-11% of those achieved with oral TAM. 9 In the present study, the in vitro skin permeation of 4-OHT and ENX was evaluated, first using a reconstituted human epidermal skin model, and then normal human skin. Oleic acid (OA) was investigated as a permeation enhancer, with the objective of increasing the uptake of ENX to that of estradiol (E2), a well-established transdermal agent.
Preparation of 3H-endoxifen
Twenty microcuries of 3H-4-OHT were incubated in 0.5 mL of 70 mM phosphate buffer (PB) containing 10 µM Mg2+, pH 7.4, with 0.5 nmol CYP2D6 for 30 minutes at 23°C. The product was extracted into ethyl ether and was chromatographed on a silica gel thin layer. TAM, 4-OHT, N-desmethyltamoxifen, and ENX in this system had RF values of 0.60, 0.29, 0.25, and 0.06, respectively. Rechromatography in a second solvent system (benzene:methanol, 1:1) gave RF values of 0.51 and 0.25 for 3H-4-OHT and 3H-ENX, respectively. The products were eluted into methanol and stored at 4°C. The yield of 3H-ENX was >50%. The specific activity of the product was 36 Ci/mmol.
Skin preparations
The EpiDerm was incubated for 1 hour at 37°C with 5% CO2 prior to dosing. Each of the EpiDerm batches was used within 3 days of delivery. The acquisition of anonymous human skin samples from the operating room was approved by the Institutional Review Board of Northwestern University. The subcutaneous fat was removed from fresh mastectomy and abdominoplasty specimens, and the full-thickness skin was immobilized to obtain split-thickness skin (STS) using a surgical blade (George Tiemann and Co, Hauppauge, NY) or an electric dermatome (Robins Instrument Inc, Chatham, NJ). The thickness of the STS samples, measured with an electronic digital micrometer (Tresna Instruments Co, Guilin, China), was 0.367 ± 0.039 mm.
Skin imaging
The EpiDerm and STS samples were cut into 3 mm × 10 mm pieces, placed horizontally in a Cryomold ® (Tissue-Tek ® ; Sakura Finetek USA, Torrance, CA), embedded in a tissue freezing medium (OCT TM Compound; Tissue-Tek), frozen in the microtome/cryostat chamber, sectioned at 7 microns, and stained with Mayer's hematoxylin and eosin. Images were taken with a 20× objective using a Nikon Eclipse (Tokyo, Japan) optical microscope with a 10 micron bar superimposed on the skin images.
Skin cell viability
Ethanol toxicity to the skin was determined using the 3-(4, 5-Dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide (MTT) toxicology kit (MTT-100, MatTek) by measuring epidermal cell viability of the EpiDerm. A PB (2 mM KH 2 PO 4 , 4 mM Na 2 HPO 4 , pH 7.0) was used as a negative control and 70% (v/v) ethanol-PB was the test material. 0.4 mL of each solution was loaded in the donor chamber of the Mat-Tek permeation device (MPD) (EPI-100-FIX) and 5 mL of Dulbecco's modified Eagle's medium (DMEM)-based assay medium (EPI-100-ASY) was added in each receiver well. Skin samples were incubated at 37°C with 5% CO 2 . After 6 and 24 hours, skin samples were processed by the MTT assay, following the manufacturer's protocol.
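Viability in the MTT assay is conventionally computed from the formazan absorbance of treated tissue relative to the negative control; the expression below is the standard calculation implied by the kit protocol rather than a formula given in this paper:

```latex
\mathrm{Viability}\,(\%) = \frac{OD_{570}^{\mathrm{treated}}}{OD_{570}^{\mathrm{control}}} \times 100,
```

so a reported cell death of x% corresponds to a viability of (100 − x)%.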
Diffusion studies
Permeation of TPEs using the MPD
TPE solutions (0.2 mg/mL; 0.5 µCi 3H-TPEs/mg) were prepared in the control vehicle, 60% (v/v) ethanol-PB. To evaluate the effect of OA on the permeation of TPEs, the control vehicle was supplemented with 1%-5% (v/v) OA. The receiver chamber contained 5 mL of phosphate-buffered saline (PBS), stirred at 37°C. Skin samples were placed in the MPD with a skin exposure area of 0.256 cm². A drug solution of 0.2 mL was loaded into the donor chamber (final dose: 312.6 µg/cm² for EpiDerm and 156.3 µg/cm² for human STS). General procedures for the permeation study followed the manufacturer's protocol. After 6 and 24 hours, the receiver fluid was collected; the solution remaining in the donor chamber was removed, and the exposed skin and donor chamber were washed twice with 0.4 mL of PBS, cleaned with a cotton swab, and removed from the permeation device. The amount of 3H-TPEs in the washes of the donor chamber, the donor and receiver fluids, and the skin samples was determined by liquid scintillation counting (Beckman Coulter LS6500, Fullerton, CA). For each receiver fluid, 1 mL aliquots in triplicate were placed in 20 mL liquid scintillation glass vials, and 10 mL of Ecolite(+)™ (MP Biomedicals, Solon, OH) liquid scintillation fluid was added. Skin samples were cut into small pieces with surgical scissors and homogenized using an ultrasonic processor (Cole Parmer, Vernon Hills, IL); the 3H-TPEs were extracted three times into methanol, and the solvent was evaporated and the residue resuspended in 0.5 mL methanol and 10 mL of Ecolite(+) liquid scintillation fluid. 3H-TPEs from the donor chamber were measured as follows: 3H-TPEs from the cotton swabs were extracted into 1 mL of methanol three times, then combined with the remaining donor solution and washes; the solvent was evaporated and the residue was dissolved in 0.5 mL methanol and 10 mL of Ecolite(+) liquid scintillation fluid. Total recovery of 3H-TPEs was greater than 95%.
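The applied dose per unit area follows from the loaded volume, drug concentration, and exposed skin area. As a worked check (our arithmetic, not stated in the paper):

```latex
\mathrm{dose} = \frac{C\,V}{A}
  = \frac{0.2\ \mathrm{mg\,mL^{-1}} \times 0.2\ \mathrm{mL}}{0.256\ \mathrm{cm^2}}
  \approx 156.3\ \mu\mathrm{g\,cm^{-2}},
```

which matches the figure quoted for human STS; the 312.6 µg/cm² quoted for EpiDerm would be consistent with the 0.4 mL loading volume used in the MTT protocol above, an inference on our part.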
Permeation of ENX using Franz diffusion cells
ENX and estradiol (E2) (0.2 mg/mL; 0.5 µCi 3H/mg) were prepared in the control vehicle, 60% (v/v) ethanol-PB. OA (0.1%-1%) was added to the control vehicle to test its permeation-enhancing effect on ENX. Static Franz diffusion cells (skin exposure area of 0.38 cm², receiver chamber volume of 5 mL) were used for the permeation experiments. PBS was supplemented with 4% (w/v) POE (20) to overcome possible artificial dermal retention of lipophilic compounds. 10 A volume of 0.1 mL of drug solution was loaded into the donor chamber (final dose: 78.9 µg/cm²). Samples of 0.25 mL were collected at predetermined intervals over 24 hours. After 24 hours of sampling, the exposed skin area was washed as described, and the epidermis was separated from the dermis with forceps. All other procedures were as described previously.
Data and statistical analysis
Permeation parameters of TPEs using the MPD were expressed as the mean and standard error of the mean (SEM) of the percent of the applied dose from replicate experiments. Absorption is defined as the amount (µg/cm 2 ) of TPEs reaching the receiver fluid. The total penetration (TP) was defined as the sum of the absorption and skin contents of TPEs at the predetermined time points. For Franz diffusion cell experiments, the permeation profiles of ENX and E2 were analyzed by plotting the absorption (µg/cm 2 ) of the compound as function of time (h). The permeation rate (or flux) at steady state (µg/cm 2 /h) was calculated from the slope of the linear portion of the permeation curve over 24 hours. The lag time was determined by extrapolating the linear portion of the curve to the x-axis. Permeation parameters of ENX and E2 were expressed as the mean and SEM. The Kruskal-Wallis and Wilcoxon rank sum tests were used for comparisons across the compounds and between OA conditions, with Bonferroni corrections for significance testing as detailed in the Results. Statistical analyses were done using the SAS statistical software (SAS OnlineDoc ® 9.2, SAS Institute Inc, Cary, NC).
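The flux and lag-time extraction described above amounts to a linear regression on the steady-state portion of the cumulative-permeation curve. The Python sketch below is a minimal, hypothetical implementation: the function names, the placeholder data, the choice of "linear portion", and the correction for analyte withdrawn in earlier 0.25 mL receiver samples (assuming replacement with fresh buffer) are ours, not the authors':

```python
import numpy as np

def cumulative_amount(conc, v_cell=5.0, v_sample=0.25, area=0.38):
    """Cumulative permeated amount per unit area (ug/cm^2).

    conc: receiver-fluid concentrations (ug/mL) at each sampling time.
    Adds back the analyte withdrawn in earlier 0.25 mL samples,
    assuming the withdrawn volume is replaced with fresh buffer.
    """
    conc = np.asarray(conc, float)
    removed = np.concatenate([[0.0], np.cumsum(conc[:-1]) * v_sample])
    return (conc * v_cell + removed) / area

def flux_and_lag(t, q, linear_from=6.0):
    """Steady-state flux (ug/cm^2/h) and lag time (h) by linear fit."""
    t, q = np.asarray(t, float), np.asarray(q, float)
    mask = t >= linear_from            # "linear portion" -- our choice
    slope, intercept = np.polyfit(t[mask], q[mask], 1)
    return slope, -intercept / slope   # lag time = x-intercept

# Placeholder data (hours, ug/mL) for illustration only
t = np.array([1, 2, 4, 6, 8, 12, 24])
c = np.array([0.02, 0.08, 0.25, 0.45, 0.65, 1.05, 2.30])
q = cumulative_amount(c)
flux, lag = flux_and_lag(t, q)
print(f"flux = {flux:.3f} ug/cm^2/h, lag time = {lag:.1f} h")
```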
Stratum corneum thickness and ethanol skin toxicity
The stratum corneum (SC) of EpiDerm was thinner than that of normal human skin (Figure 1). The skin toxicity of 70% (v/v) ethanol-PB was evaluated in the EpiDerm by the MTT assay. Epidermal cells of EpiDerm treated with PB remained highly viable (100%), whereas 37% and 44% of epidermal cells died when treated with 70% ethanol-PB for 6 and 24 hours, respectively (Figure 2).
Permeation of TPEs across EpiDerm
In the control vehicle, the absorption of ENX was lower than that of TAM and 4-OHT (P = 0.006 for both) at 6 hours. At 24 hours, the total permeation of all TPEs was similar, ranging from 25%-29%. On addition of 1% OA to the control vehicle, a uniform and significant decrease in the absorption of TAM, 4-OHT, and ENX over 24 hours was found. The largest reduction was in the TP of TAM (approximately 80% reduction, P = 0.0008). The exception to this adverse trend was a transient increase in the absorption of ENX at 6 hours, in contrast to TAM and 4-OHT, which decreased significantly at 6 hours.
Permeation of TPEs across human skin samples
Next, we examined the absorption and TP of 4-OHT and ENX across normal human STS, comparing three concentrations of OA (1%, 2.5%, and 5%) to the control vehicle over 24 hours (Table 2). In the control vehicle, the absorption of 4-OHT and ENX was equivalent, but the TP of ENX was approximately half that of 4-OHT (P = 0.002). On addition of 1%-5% OA to the control vehicle, both the absorption and the TP of 4-OHT and ENX increased (Kruskal-Wallis P < 0.0001 for all comparisons). The addition of 1% OA enhanced the absorption of 4-OHT and ENX by approximately 3× and 18×, and the TP of these agents by 3× and 18×, respectively, compared with vehicle controls. The addition of 1% OA improved the TP of ENX to 1.4× higher than that of 4-OHT (P = 0.004). At higher concentrations of OA (2.5%-5%), the absorption of 4-OHT and ENX did not further improve; rather, the absorption of ENX decreased compared with 1% OA (1% vs 2.5% OA: P = 0.002; 1% vs 5% OA: P = 0.0013).
Permeation of ENX at lower concentrations of OA
Having established that 1% OA enhanced permeation of ENX to a greater extent than 4-OHT in human skin, we explored the effect of lower concentrations of OA (0.1%-1%) on permeation of ENX using Franz diffusion cells over 24 hours ( Figure 3A). The permeation parameters (lag time, flux, absorption, skin content, and TP) of ENX were compared with that of E2, since the transdermal delivery of this hormonal agent is well established (Figures 3 and 4).
In the control vehicle, the lag time of ENX and E2 was similar, the flux of ENX was ,E2 (Figures 3B and 3C), and the absorption of ENX was 50% than that of E2 (P = 0.045) ( Figure 3D). In contrast, the total skin content of ENX was 1.8× higher than that of E2 (P = 0.045). This increase was attributable to the epidermal content of ENX,
which was 4× higher than that of E2 (P = 0.006), while the dermal content of the two compounds was similar (Figure 4A). Finally, the TP of ENX was slightly higher (1.5×) than that of E2, but the difference was not significant (P = 0.17) (Figure 4B).
Next, we evaluated the effect of lower concentrations of OA (0.1%-1%) on the permeation parameters of ENX. The addition of 0.1% OA to the control vehicle had only nonsignificant enhancing effects on the lag time and the flux of ENX in comparison with the control vehicle. On addition of 0.25% and 0.5% OA, the lag time of ENX was shorter (P = 0.0062 for both), the flux was significantly enhanced by 2.6-2.8× (P < 0.05 for both), and the absorption was increased by 3.2× and 3.9×, respectively, in comparison with ENX in the control vehicle. At 1% OA, there was no further improvement in the permeation parameters of ENX (Figure 3D). Thus, although the flux and the absorption of ENX were lower than those of E2 in the control vehicle, the addition of 0.25%-0.5% OA enhanced the flux of ENX to a level equivalent to that of E2 and increased the absorption of ENX beyond that of E2 (P < 0.05) (Figures 3C and 3D).
Regarding skin content, the partitioning of ENX into the skin layers was improved by 0.1%-1% OA. These improvements were proportional to the concentration of OA added to the control vehicle, and were driven mainly by the epidermal rather than the dermal content of ENX (Figure 4A). At a concentration of 1% OA the total skin content of ENX was improved by 1.7× in comparison with the control vehicle (P = 0.045) (Figure 4A). This was 3.4× higher than that of E2 in the control vehicle (P = 0.01).
Finally, we observed that the TP of ENX was in the range of 15.7%-20.9% (12.4-16.5 µg/cm²) in the presence of 0.1%-1% OA (Figure 4B), so that, with 1% OA, the TP of ENX was 1.7× higher than in the control vehicle (P = 0.029), and 2.4× higher than that of E2 in the control vehicle (Figure 4B).
Discussion
Transdermal delivery has long been recognized as an effective form of systemic therapy, with distinct pharmacokinetic and related advantages, but, due to the effectiveness of the barrier function of the stratum corneum, only a small number of drugs have been successfully formulated for this purpose. Existing data on transdermal permeation of 4-OHT suggest that the relatively small and lipophilic nature of this molecule renders it suitable for transdermal formulation. 7,8 A study using an alcoholic gel formulation of 4-OHT suggests that, when applied to the skin of the breast of postmenopausal women with ER-positive breast cancer, sufficient breast tissue concentrations are achieved for an antiproliferative effect on tumor cells of equal magnitude to that seen with standard doses of oral TAM. 9 Other studies using the same gel for the treatment of mastalgia show a benefit of 4-OHT gel at a dose of 4 mg/day. 11 The authors are using the same formulation of 4-OHT in a multicenter presurgical study in DCIS patients (NCT00952731), with the primary endpoint of decreased cell proliferation. The authors have investigated the relative permeation of 4-OHT and ENX because the binding affinities of 4-OHT and ENX are both 25× greater for ERα and 56× greater for ERβ than those of TAM. 12,13 ENX has been reported to have an advantage over 4-OHT in that it causes proteasomal degradation of ERα and may have more selective antiestrogenic effects. 5 Therefore, ENX is expected to give better therapeutic efficacy than 4-OHT, but its specific toxicity profile is currently unknown. If ENX shares even some of the toxicity of the parent drug and its percutaneous absorption in humans is equivalent to (or better than) that of 4-OHT, it would be an excellent candidate for transdermal delivery. Additionally, the chemical structure of ENX renders it more promising for transdermal application because it is more amenable to conjugation to nanoparticles for controlled release. For this reason, the authors have investigated the relative in vitro percutaneous absorption of ENX in comparison with those of 4-OHT and TAM.
It was found that the total penetration of ENX into human skin was not as efficient as that of 4-OHT in the control vehicle, but the addition of 1% OA greatly improved both the absorption and the TP of ENX over 24 hours. Although significant increases in these parameters were also seen for 4-OHT, the increase in ENX permeation was larger and brought ENX permeation into a range that is very compatible with transdermal therapy. OA is a well-known permeation enhancer that has been employed to increase absorption of TPEs in the 60% (v/v) ethanol-PB vehicle. 14 Several researchers have observed a permeation-enhancing effect with OA in ethanol-water systems across hairless rodent skin. 15-17 In the present study, an ethanol-based vehicle was necessary to solubilize the TPEs, and ethanol has the additional advantage of being a widely used skin permeation enhancer in topical drug-delivery systems for estradiol, progesterone, fentanyl, and other drugs. It is not clear why ENX benefited more from the addition of OA than 4-OHT, but this may be related to a difference in their structures. ENX is smaller and more polar than 4-OHT because one methyl group at a tertiary amine is replaced with a hydrogen, resulting in a secondary amine that is more hydrophilic than the tertiary amine of 4-OHT. Because OA appears to fluidize the stratum corneum 18,19 and ethanol provides a continuous driving force, 14,20 ENX may move faster through the skin than 4-OHT. The amine group of ENX may have a favorable balance of hydrophilic and hydrophobic properties for interacting with the stratum corneum, which would allow ENX to traverse it more easily.

[Table 2. OA enhances permeation of 4-OHT and ENX across human STS at 24 h. Values are the mean ± SEM of the percentage of the applied dose of 4-OHT and ENX recovered from the receiver fluid and the skin, together with the total penetration, at 24 hours; n (in parentheses, 9-25 per condition) is the total number of observations, from skin samples of 3-9 subjects with 3-4 replicates per subject. For each concentration of OA, 4-OHT and ENX were compared with the Wilcoxon rank sum test; the Bonferroni-corrected threshold for significance is P < 0.0083, and significant values are shown in bold.]
The results here agree with previous findings using hairless rat skin, which showed that the co-solvent system of OA-ethanol-water efficiently increased skin permeation of both lipophilic and hydrophilic drugs. 15 The effect of OA as a permeation enhancer was markedly divergent between the EpiDerm and human STS: the general permeation-enhancing effect of OA was not only absent in the EpiDerm model, but permeation was significantly reduced. The EpiDerm model does not have the papillary dermal layer of normal human STS, and the thickness of its stratum corneum was almost 50% lower than that of the human STS used in our experiments. Thus, the reason for the inhibition of permeation by OA in the EpiDerm may be related to its lipid characteristics and its thin, imperfectly developed stratum corneum, derived from cultured human keratinocytes, so that the favorable effects of OA on the partitioning of compounds through the skin are not observed. This suggests that the reconstituted epidermis is not a suitable model for testing permeation enhancers, such as OA, that depend on partitioning with lipids in the stratum corneum.
The permeation-enhancing effect of OA was assessed at lower concentrations (0.1%-1%) to find an optimal concentration of OA for ENX in 60% ethanolic solution, with E2 as a reference transdermal compound, to determine whether the permeation of ENX with OA can be improved to a level consistent with effective transdermal delivery. The results show that the addition of 0.25%-0.5% OA significantly enhanced the flux and absorption of ENX. Overall, 0.25%-0.5% OA seems to be the optimal concentration in the 60% ethanolic vehicle system for fast and efficient transdermal delivery of ENX. Furthermore, although ENX alone permeates human skin more slowly than E2, the addition of OA not only improves the absorption of ENX to a level similar to that of E2, but also significantly increases skin deposition of ENX. Together, these results suggest that ENX is an excellent candidate for transdermal delivery.
The direct delivery of active metabolites to the breast through its skin envelope averts first-pass metabolism in the liver, potentially avoiding changes in the clotting cascade that lead to the prothrombotic effects of TAM and raloxifene. 2,21,22 Since the risk of thromboembolism is a major concern not only with TAM use, but also with all selective estrogen-receptor modulators (SERMs) tested clinically to date, its avoidance would be a significant advantage for women considering SERM therapy for breast cancer prevention and for treatment of DCIS. Additionally, the very low plasma concentrations of 4-OHT observed following transdermal application in the studies conducted so far 7,8,11 suggest that uterine toxicity and hot flashes would be reduced by the transdermal delivery of active TAM metabolites to the breast. Furthermore, limitations on the bioavailability of active metabolites that are caused by polymorphisms in TAM metabolizing genes 3,4 would be overcome by this approach.
Finally, the issue of whether transdermal delivery to the breast is a local or a systemic treatment deserves consideration. The preliminary studies conducted by Mauvais-Jarvis and colleagues in the 1980s and 1990s showed that 4-OHT applied through the skin of the breast concentrates in the breast at 10× higher levels than when it is applied to the arm or shoulder. 7,8 The investigators attributed this accumulation to the binding of 4-OHT to ER present in breast tumors and breast epithelium. In fact, ERα expression in nonmalignant breast tissue is very low but ERβ is high and may account for at least some of the localization in the breast. However, receptor binding alone is insufficient to explain 4-OHT retention in the breast. 23,24 A more plausible explanation relates to the embryological origin of the breast as a skin appendage (i.e., a modified sweat or apocrine gland). Studies of the embryology of the breast suggest that the breast gland (parenchyma) and its skin envelope are a single unit with a well-developed internal lymphatic (and venous) circulation. 25 These embryological studies are supported by the fact that the skin and parenchyma of the breast drain to the same sentinel nodes. 26

[Figure 4. Skin content and total penetration of ENX compared with E2. The applied dose of the compounds was 78.9 µg/cm². Skin samples from three subjects were used and each treatment condition was tested in duplicate on the skin samples from each subject in each experiment. (A) Skin content (µg/cm²) of the compounds was measured separately for epidermis and dermis and combined as total skin content after 24 h. (B) TP of the compounds was determined as the sum of absorption and total skin content after 24 h. All measurements are expressed as the mean ± SEM, n = 5-6. P-values were determined using the Wilcoxon rank sum test. Notes: aP = 0.045; bP = 0.029. Abbreviations: ENX, endoxifen; E2, estradiol; OA, oleic acid; SEM, standard error of the mean; TP, total penetration.]
The local transdermal approach for breast cancer prevention and for DCIS therapy should be extendable to a variety of agents as long as they show sufficient dermal permeation. It is too early to draw any conclusion about potential clinical use, since it remains uncertain whether the level of ENX delivered to the breast transdermally is equivalent in clinical efficacy to oral TAM. To address this, the authors have initiated an in vivo preclinical study to assess mammary-gland and systemic distribution of ENX by transdermal delivery, and to evaluate the in vivo therapeutic efficacy of ENX compared with standard doses of oral TAM in the hairless rat model. 28 These studies will guide the development of a clinical study to evaluate the efficacy of this approach for prevention of ER-positive breast cancer in the near future.
Conclusion
These results demonstrate that the addition of OA improves absorption of ENX through human skin in vitro to the same range as that seen for E2, providing strong justification for the development of ENX for local transdermal delivery to the breast. The data raise questions about the suitability of the EpiDerm model for the evaluation of skin permeation enhancement and suggest that the permeation dynamics of human skin differ substantially from those of reconstituted human epidermis.
A near infrared variable star survey in the Magellanic Clouds: The Small Magellanic Cloud data
A very long-term near-infrared variable star survey towards the Large and Small Magellanic Clouds was carried out using the 1.4m InfraRed Survey Facility at the South African Astronomical Observatory. This project was initiated in December 2000 in the LMC, and in July 2001 in the SMC. Since then an area of 3 square degrees along the bar in the LMC and an area of 1 square degree in the central part of the SMC have been repeatedly observed. This survey is ongoing, but results obtained with data taken until December 2017 are reported in this paper. Over more than 15 years we have observed the two survey areas more than one hundred times. This is the first survey that provides near-infrared time-series data with such a long time baseline and on such a large scale. This paper describes the observations in the SMC and publishes a point source photometric catalogue, a variable source catalogue, and time-series data.
INTRODUCTION
The Magellanic Clouds offer a number of advantages compared with the Milky Way where studies of stars are concerned: (1) their distances are reasonably well known, (2) they are located at relatively high Galactic latitude so interstellar extinction is small, (3) they are less contaminated by foreground objects, (4) their metallicities are different from those of most stars in the solar neighbourhood, (5) certain stars, e.g., Cepheids, Miras and RR Lyrae variables, can be calibrated as distance indicators.
In the 1990s a number of monitoring projects were started that were aimed at detecting microlensing events (OGLE, Udalski, Kubiak & Szymański 1997; MACHO, Alcock et al. 2000; EROS, Afonso et al. 1999; MOA, Bond et al. 2001). As a by-product of these surveys, huge data sets of precise time-series photometry for millions of stars in the Large and Small Magellanic Clouds (LMC and SMC, respectively) were obtained. Some of the survey projects are still ongoing with upgraded observing systems, providing very long-term time-series data (e.g., over 20 years for the OGLE projects).
None of the surveys cited above operates at wavelengths longer than the I-band, and most are at shorter wavelengths. Therefore infrared variable sources such as evolved stars with high mass-loss rates and young stellar objects (YSOs) may have escaped detection. Variability is common among evolved stars with high mass-loss rates, and pulsation is believed to play a key role in the mass-loss process. Studying the variability of YSOs is important for understanding the very early stages of stellar evolution. Up to now a number of infrared photometric surveys have been carried out, some of them covering the Magellanic Clouds (e.g., IRAS, Explanatory Supplement 1988; ISO, Kessler et al. 1997; MSX, Mill et al. 1994; DENIS, Epchtein et al. 1997; 2MASS, Skrutskie et al. 1997; IRSF/SIRIUS, Kato et al. 2007). Recently, both the Spitzer (Meixner et al. 2006; Gordon et al. 2011) and AKARI infrared satellites (Ita et al. 2008; Ita et al. 2010; Kato et al. 2012) mapped the Magellanic Clouds in near- to mid-infrared wavebands. Both Spitzer and AKARI have two epochs of data, and Vijh et al. (2009) compared the photometry of those epochs and thereby identified a large number of infrared variable stars in the LMC. None of the previous infrared surveys had sufficient epochs of observation to properly characterize the variability.
Before the aforementioned massive optical time-series photometric surveys there were several shallow but large-scale near-infrared multi-epoch surveys toward the Magellanic Clouds. Lloyd Evans, Glass & Catchpole (1988) made J, H and K observations in the Radcliffe Variable Star Field in the SMC, Hughes & Wood (1990) repeatedly observed a 6°×12° area in the LMC in I, J, H and K, and Reid, Hughes, & Glass (1995) repeatedly observed the northern part of the LMC in J, H and K. In addition to these, there are many other infrared variable star surveys in the Magellanic Clouds. Groenewegen (1997) provides one of the most comprehensive reviews of earlier work, reporting that "IRAS triggered researches" were started in the late eighties using the IRAS data products. Those studies concentrated on the brightest AGB stars and red supergiants in the Magellanic Clouds (e.g., Whitelock et al. 2003) due to the limited sensitivity of the IRAS satellite.
Previous observational studies of variable stars in the Magellanic Clouds can be summarized as follows: (1) there are many time-series photometric surveys, but all of them are in the optical and are insensitive to infrared variables, (2) there are a number of deep photometric surveys in the near-, mid-and far-infrared, but at very few epochs, making it difficult to study the nature of variable stars, and (3) there are infrared multi-epoch surveys, but they are not deep enough to detect all red giants. There are also IRAS, ISO and MSX triggered studies but they are biased toward bright red supergiants and relatively massive (hence bright) AGB variables.
Here we present a moderately deep, near-infrared time-series survey covering one square degree of the central part of the SMC. The data were obtained between July 2001 and December 2017, and there are at least 115 independent epochs at J, H, and Ks. Later papers will discuss the details of various types of variable star, and similar papers will deal with the LMC survey. The great strengths of this study are the larger number of epochs and the long time baseline. Hence, the data provide excellent mean magnitudes for known variable stars found, for example, by microlensing surveys, and at a cadence that allows the type of variability to be characterised. The ongoing VISTA survey of the Magellanic Cloud system (VMC, Cioni et al. 2011) will ultimately provide a deeper survey over a larger area, but with fewer epochs than the one described here. VMC is a near-infrared (Y, J, and Ks) multi-epoch survey across an area of about 170 square degrees over the Magellanic Cloud system. The survey is deep enough to measure accurate mean magnitudes for variable stars that are relatively faint in the near-infrared, such as RR Lyrae stars (e.g., Muraveva et al. 2018) and Cepheids (e.g., Ripepi et al. 2016). Their data obtained between November 2009 and August 2013 are now publicly available (VMC-DR4), providing at least 3 epochs of photometric data at Y and J, and at least 12 epochs at Ks for selected fields.
OBSERVATIONS
We have repeatedly observed a total area of 3 square degrees along the LMC bar since December 2000, and a total area of 1 square degree around the SMC centre since July 2001, with the instruments and survey strategy described below.
InfraRed Survey Facility and the SIRIUS near-infrared camera
The InfraRed Survey Facility (IRSF) is situated at the SAAO Sutherland station and is operated as a joint Japanese/South African project. It officially opened and started formal operations in November 2000. The IRSF consists of a dedicated 1.4-m alt-azimuth telescope equipped with a near-infrared camera (the "Simultaneous three-colour InfraRed Imager for Unbiased Surveys" or SIRIUS) for specialized surveys toward the Magellanic Clouds and the centre of the Milky Way. The SIRIUS imager uses three 1024×1024 HgCdTe arrays to make observations simultaneously in three wavebands, J (1.25 µm), H (1.63 µm) and Ks (2.14 µm). These filters are similar to the Mauna Kea Observatories (MKO) near-infrared photometric system (Tokunaga, Simons, & Vacca 2002). The SIRIUS camera has a field of view of about 7.7 arcmin square with a scale of 0.453 arcsec/pixel. In the initial phase of SIRIUS camera operations the chip readout and control electronics used the Messia IV system, but it was upgraded to the Messia V system after April 2004 to reduce readout time. The readout noise and gain for each system are tabulated in Table 1 (Nagayama, private communication). Photometric errors calculated in this work are estimated using these values. Another mechanical upgrade, to add an imaging polarimetry capability (SIRPOL, Kandori et al. 2006), was made in February 2013. SIRPOL is removable if necessary, and it should be noted that the data used in this work were all taken without SIRPOL. Due to this upgrade, the pixel field of view of the SIRIUS camera became narrower by about 1%. This change has been accommodated in the data reduction process by matching the scale to the one before February 2013. Further details of the instrument can be found in Nagashima et al. (1999) and Nagayama et al. (2002). Two large survey projects were planned from the opening of the IRSF. The first was a deep photometric survey toward the Magellanic Clouds aimed at making a near-infrared (NIR) point source catalogue. The results were published by Kato et al. (2007). It provides NIR point source catalogues of both Magellanic Clouds that are about two magnitudes deeper and four times finer in spatial resolution than the corresponding NIR point source catalogue of the 2MASS survey. The second major project was the variable star survey towards the Magellanic Clouds. A time-series survey of this type requires a great deal of telescope time over many years. The IRSF provides an ideal facility in which to carry out this "near-infrared variable star survey in the Magellanic Clouds".
Survey strategy and observational specifications
In order to study variable sources in the SMC, an area of 1 square degree around the SMC centre, as shown in Fig. 1, was observed repeatedly. The 1°×1° area is divided into nine 20′×20′ regions. Each of these regions is further subdivided into nine 7′×7′ fields of view that are labeled from A to I. The central position of each field is given in Appendix A. This survey area was chosen because it is well populated and we can expect the maximum efficiency in gathering a large sample of variable sources. Moreover, OGLE optical microlensing survey data (Udalski et al. 1997; Żebruń et al. 2001) are available for the same area. Their survey and ours are mutually complementary, so the amalgamation of these two data sets will eventually produce a complete variable source catalogue for that area. The SMC survey commenced in July 2001 and in this paper we present data obtained up to December 2017. During that period, the whole survey area was observed more than 140 times and, after manual removal of poor images (see Section 3.2), there are at least 115 (at most 162) independent epochs of photometric data at J, H, and Ks.
For the survey, we use a fixed exposure time of 5 seconds. We take ten 5 s images for each field of view at a time, with a dithering radius of 15 arcsec. This configuration allows us to take data for sources as bright as ∼ 9 mag at J, H, and Ks. The detection limits are derived and described in Section 3.2. The 5 s exposure time is chosen so that we can simultaneously detect bright AGB stars as well as red giants well below the tip of the first red giant branch (RGB). Therefore we can measure Cepheid variables and all of the AGB variables except extremely red ones (such as those found in NGC419 and NGC1978 by Tanabé et al. (1998) and Kamath et al. (2010) and those found in the LMC by Gruendl et al. (2008)) and reach several magnitudes below the RGB tip. Extremely red sources and/or faint sources will be detected, and their variability studied, from the data obtained in VMC (Cioni et al. 2011) and in the SAGE-Var program (e.g., Riebel et al. 2015), a follow-on to the Spitzer legacy program Surveying the Agents of Galaxy Evolution (SAGE; Meixner et al. 2006).
DATA REDUCTION
All data were reduced (i.e., flat-fielded, dark-current subtracted and sky subtracted) in the same manner using the SIRIUS pipeline software (Nakajima, private communication). A sky image is made from the images acquired in the same observing sequence before and after the images of the target field, after masking bright stars. The SIRIUS pipeline software produces median-combined images that comprise 10 dithered 5 s exposures. These combined images are largely free from the spurious noise caused by the presence of bad/hot pixels and/or by cosmic-ray events.
Astrometry
The celestial coordinates of the sources detected in our survey are calculated by referencing their positions to the 2MASS point source catalogue. This process involves the following steps for each field of view: (i) The equatorial coordinates $(\alpha_i, \delta_i)$ of 2MASS sources are converted to $(X_i, Y_i)$ in the World Coordinate System.
(ii) Bright sources are extracted and their pixel coordinates $(x_j, y_j)$ measured.
(iii) A triangle matching technique is used to relate $(X_i, Y_i)$ and $(x_j, y_j)$ with the IRAF task XYXYMATCH.
(iv) The transformation matrix relating $(x_j, y_j)$ and $(\alpha_i, \delta_i)$ is calculated from the matched pairs with CCMAP/IRAF. We find more than a hundred matched pairs in all fields of view for all wavebands to calculate the matrix. The individual matrices are used to derive the coordinates of all sources detected in the images; a sketch of this kind of plate solution is given below.
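A minimal sketch of the linear plate solution underlying steps (iii)-(iv): given matched pixel positions and projected catalogue coordinates, an affine transformation is solved by least squares. The actual pipeline uses XYXYMATCH and CCMAP under IRAF; the input files and the purely affine model here are assumptions for illustration.

```python
import numpy as np

# Matched pairs: pixel coordinates and tangent-plane (projected) coordinates
x, y = np.loadtxt("pixel_coords.txt", unpack=True)  # placeholder file
X, Y = np.loadtxt("sky_coords.txt", unpack=True)    # placeholder file

# Affine model: X = a*x + b*y + c and Y = d*x + e*y + f
A = np.column_stack([x, y, np.ones_like(x)])
coef_X, *_ = np.linalg.lstsq(A, X, rcond=None)
coef_Y, *_ = np.linalg.lstsq(A, Y, rcond=None)

# Fit residuals quantify the astrometric accuracy (cf. the 0.02-0.05 arcsec rms)
rms_X = np.sqrt(np.mean((A @ coef_X - X) ** 2))
rms_Y = np.sqrt(np.mean((A @ coef_Y - Y) ** 2))
```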
The positional differences between the 2MASS and the resultant fit coordinates are always quite small. The root mean square values of the positional differences between them are typically 0.02 to 0.05 arcsec for all wavebands. The absolute astrometric precision of the 2MASS point source catalogue is about 0.07 arcsec (Skrutskie et al. 2006) and our fitted coordinates should have an accuracy of the same order. The equatorial coordinates of the 2MASS point source catalogue are based on the International Celestial Reference System (ICRS). Hence, our fitted coordinates refer to the ICRS.
Photometry
There are more than a hundred independent images for each field of view (see Section 2.2) taken at various dates and times. Let ${}^{f}N_{\rm obs}$ be the number of independent images in a field $f$. Poor images have been excised from the following analyses based on the star count in each image. First, the number of stars detected in each image is counted and the maximum star count, ${}^{f}n_{\rm max}$, over the ${}^{f}N_{\rm obs}$ images is found. Then, images with star counts fewer than $0.3 \times {}^{f}n_{\rm max}$ are regarded as poor images. The number of available exposures, ${}^{f}N_{\rm exp}$, differs from one field to another due to temporal changes of observing conditions (weather, seeing, sky, etc.), and ${}^{f}N_{\rm exp} \le {}^{f}N_{\rm obs}$ by definition.
As is described in Section 3.3, we use an image subtraction technique to detect light variations. It measures the differential brightness relative to a reference brightness measured in a reference image of a field. Therefore it is technically not necessary to do photometry on all ${}^{f}N_{\rm exp}$ images. Instead, it is sufficient to do photometry on a reference image of each field. The photometric reference image for each field is made by combining the 10 best-seeing images (typical seeing of 1 arcsec), after eliminating the shift and rotation between images as well as the differences in seeing and backgrounds. In this process the 10 images are filtered using 3-σ rejection from the median. We use these combined images as the photometric and positional references for each field. In addition to the reference image, we nonetheless perform photometry on all ${}^{f}N_{\rm exp}$ images in each field. This is for evaluating variability (see Section 3.4).
We developed point spread function (PSF) fitting photometry software working under IRAF. This process involves the following steps: (i) DAOFIND is used to extract point-like sources whose fluxes are more than 3-σ above the background noise level.
(ii) Aperture photometry is performed on the extracted sources using an aperture radius of 7 pixels. The inner radius of the sky annulus is the same as the aperture radius and the width of the sky annulus is 10 pixels.
(iii) Several isolated (i.e., no bright sources within 7 pixels) point sources with moderate flux (i.e., unsaturated with good S/N ratio) are selected from the result of step (ii). At least 25 such "good" stars are selected.
(iv) Since the shapes of the PSFs can vary from image to image the stars selected in step (iii) are used to construct a model PSF for each image. We let the PSF/DAOPHOT package choose a best fitting function by trying several different types.
(v) The PSF fitting photometry is performed on the extracted sources in step (i) using ALLSTAR. We assume that the PSF is constant over an image.
This photometric process yields arbitrary instrumental magnitudes for each source that have yet to be calibrated.
Photometric calibration
We reference the IRSF Magellanic Clouds Point Source Catalog (IRSFMCPSC, Kato et al. 2007) to convert the instrumental magnitudes to calibrated ones. The IRSFMCPSC is calibrated with Las Campanas Observatory (LCO) standards from Persson et al. (1998). Therefore our photometric zero points are also based on the LCO standards. Because our data are taken with the same instrument (i.e., IRSF/SIRIUS) as was used to collect the data for the IRSFMCPSC, we assume a simple linear relation between the instrumental and calibrated magnitudes,

$$ {}^{\lambda}m_{\rm cal} = {}^{\lambda}m_{\rm inst} + {}^{\lambda}({\rm conversion\ offset}), \quad (1) $$

where λ denotes the J, H and Ks filters. Conversions to other systems such as the 2MASS system are given elsewhere (e.g., Kato et al. 2007). The conversion offset in equation (1) is determined for each image by taking the difference between the instrumental magnitude and the catalogue magnitude from the IRSFMCPSC. In this process, sources detected well away from the detector edges and correlated catalogue sources with 2MASS quality flag A are used. We then take a weighted average of the differences and use it as the conversion offset. The weighting factors are the inverse square of the total errors, calculated by combining the catalogue and instrumental magnitude errors. Outliers are rejected from the weighted average by an iterative 2-σ clipping algorithm. Typically more than a hundred sources are used to calculate the final offset value. The standard error of the offset is important in determining the final photometric accuracy in individual fields of view. The standard error of the offset in each field is found to be typically 0.001 to 0.002 mag for all wavebands, which should be considered to be the systematic error of the photometry. We apply the individual offset value and its corresponding standard error for each field of view to calibrate the instrumental magnitudes. Hereafter we call these magnitudes "calibrated magnitudes".
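The offset determination can be sketched as a weighted mean of catalogue-minus-instrumental differences with iterative 2-σ clipping; the function and variable names here are illustrative, not those of the actual pipeline.

```python
import numpy as np

def conversion_offset(m_inst, m_cat, e_inst, e_cat, nsigma=2.0, max_iter=20):
    """Weighted mean of (catalogue - instrumental) magnitudes with
    iterative 2-sigma clipping; returns the offset and its standard error."""
    diff = m_cat - m_inst
    w = 1.0 / (e_inst**2 + e_cat**2)   # inverse square of the total error
    keep = np.ones(diff.size, dtype=bool)
    for _ in range(max_iter):
        mu = np.average(diff[keep], weights=w[keep])
        sd = np.sqrt(np.average((diff[keep] - mu)**2, weights=w[keep]))
        new_keep = np.abs(diff - mu) < nsigma * sd
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    stderr = 1.0 / np.sqrt(np.sum(w[keep]))
    return mu, stderr
```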
Evaluation of photometry using sources in overlapping fields
We mosaiced the 1°×1° area in the central part of the SMC with 81 fields of view. Each field (7.7′×7.7′) overlaps at its edges with adjacent fields by about 0.7 arcmin. Sources falling in these overlap regions have multiple photometric measurements that should be consistent as long as the sources are not variable. We evaluate the photometry of the reference data by checking the consistency of the calibrated magnitudes extracted from each of these measurements on the photometric reference images. The distribution of photometric differences for all high signal-to-noise (S/N > 20) sources common to adjacent field pairs is shown in Fig. 2. We fit a Gaussian profile to estimate the mean difference and its standard deviation. They are −0.00±0.06, −0.00±0.06, and −0.01±0.06 mag in J, H and Ks, respectively. These results suggest that our photometric calibration is good and uniform across the survey field. The dispersion seen in the figure arises mainly from photometric random errors and the internal dispersion of the IRSFMCPSC, which is about 0.04 mag (Kato et al. 2007).
Completeness of point source detections
In order to estimate the detection completeness of our source extraction we use an artificial star technique. We add 900 artificial point sources (i.e., stars) at a given magnitude to the photometric reference images. The extra stars are distributed on a 30×30 grid with a spacing of 30 pixels. The sources are extracted from the new artificial image in the same way as is described in Section 3.2. The list of input artificial stars is then cross-identified with the list of detected stars to examine how many artificial stars are successfully extracted. These processes are repeated by varying the input source magnitude in steps of 0.02 mag. We define the "90 percent completeness limit" as the magnitude at which 90 percent of the added artificial stars are recovered.
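The recovery test can be sketched as below; `add_stars` and `extract_sources` stand in for the PSF injection and the DAOFIND-based extraction of Section 3.2 and are assumptions, not actual pipeline routines.

```python
import numpy as np

def completeness_at(mag, ref_image, add_stars, extract_sources,
                    n_grid=30, spacing=30, match_radius=2.0):
    """Fraction of 900 artificial stars of magnitude `mag` recovered."""
    gx, gy = np.meshgrid(np.arange(n_grid), np.arange(n_grid))
    x_in = (gx.ravel() + 1.0) * spacing   # 30x30 grid, 30-pixel spacing
    y_in = (gy.ravel() + 1.0) * spacing
    test_image = add_stars(ref_image, x_in, y_in, mag)
    x_det, y_det = extract_sources(test_image)
    n_found = sum(np.min(np.hypot(x_det - x0, y_det - y0)) < match_radius
                  for x0, y0 in zip(x_in, y_in))
    return n_found / x_in.size

# Repeat in 0.02 mag steps; the 90 percent completeness limit is the
# faintest magnitude at which the returned fraction is still >= 0.9.
```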
To estimate the effect of source number density on the completeness analyses, three 7.7′×7.7′ areas in the SMC were selected as test fields. Fig. 3 is the star density map (number of stars detected in the H band in 1 arcmin² bins) of the observed 1°×1° area with the three test fields indicated. One is the most crowded field (0050-7310G, ∼96 stars/arcmin² in H), another is moderately populated (0055-7310H, ∼70 stars/arcmin² in H) and the last one is sparsely populated (0051-7230H, ∼55 stars/arcmin² in H). The results of the analyses are summarized in Table 2 and in Fig. 4 [in Fig. 4 the colours of the lines correspond to different filter bands: blue for J, green for H, and red for Ks]. These indicate that the 90 percent completeness limits are deeper in the underpopulated fields of view and that they can differ by up to about 0.50, 0.44, and 0.30 mag for J, H and Ks, respectively, between the sparsely and densely crowded extremes.
Detection limits
The distributions of photometric error versus calibrated magnitude for all sources detected in the photometric reference images are shown in Fig. 5. It is clear that the brightest sources are affected by deviations from the linear response of the SIRIUS detectors. The deviation becomes larger than 2 per cent for sources brighter than J = 9.6, H = 9.6, and Ks = 9.0 mag, respectively (hereafter referred to as "saturation limits"). Naturally, these values depend on the observational conditions, such as seeing and sky background brightness. Any sources brighter than these limits are not included in the catalogues published with this paper, and are excluded from the following discussions. Fig. 5 also illustrates the typical photometric errors of our survey as a function of source luminosity. We also define the 10% detection limit as the magnitude at which the photometric errors of the sources exceed 0.109 mag, which corresponds to a signal-to-noise ratio of 10 (since σ_m ≈ 1.0857/(S/N)). The 10% detection limits are calculated for the aforementioned three fields of different population density, and are summarized in Table 2. The results indicate that the 10% detection limits depend on the source density.
Image subtraction
We use the image subtraction package, ISIS.V2.2, to detect variable source candidates. The image subtraction method can find variable sources even in very crowded fields and its efficiency in detecting variables is evidenced by previous and ongoing surveys (e.g., OGLE and MOA). Details of the image subtraction technique are given in Alard & Lupton (1998) and Alard (2000) and only an outline of the technique is described here.
First, images are shifted and rotated to match the photometric reference image, which is also used as the positional reference. Basically, the image subtraction technique rests on the idea that two images (an image and a reference image) taken under different conditions (i.e., seeing and sky background) are related by the following equation:

$$ I_i(x,y) = (f_i \otimes R)(x,y) + bg_i(x,y), \quad (2) $$

which transforms the PSF of the reference image $R$, which has the smaller full width at half maximum, to the PSF of an image $I_i$, where $i$ denotes the $i$-th observation, $f_i$ is the convolution kernel for the $i$-th observation and $bg_i(x,y)$ is the differential sky background between the two images. We assume that there are $N_{\rm exp}$ available exposures in all ($i = 1, \ldots, N_{\rm exp}$). After the convolution the differential image, $D_i(x,y)$, can be computed as:

$$ D_i(x,y) = I_i(x,y) - (f_i \otimes R)(x,y) - bg_i(x,y). \quad (3) $$

Then in the ideal case all the pixels except those of variable sources should have values of zero in the differential image, $D_i(x,y)$. Finally a variance image is created by calculating the variance of each pixel over $i$ in the differential images:

$$ V(x,y) = \frac{1}{N_{\rm exp}} \sum_{i=1}^{N_{\rm exp}} \left[ D_i(x,y) - \bar{D}(x,y) \right]^2, \quad (4) $$

where

$$ \bar{D}(x,y) = \frac{1}{N_{\rm exp}} \sum_{i=1}^{N_{\rm exp}} D_i(x,y). \quad (5) $$

If a variable source is present at position $(x, y)$ then the variance of that pixel should be large. In this way variable sources stand out and non-variables are invisible in the variance image. We consider candidate variable sources to be those stars that have signals 3-σ above the local background in the variance image. The positions of the variable source candidates are measured simultaneously. The number of variable source candidates is summarized in Table 4.
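Equations (4) and (5) reduce to a per-pixel variance over the stack of difference images. A minimal numpy sketch follows; for simplicity it uses a global robust background estimate for the 3-σ threshold, whereas the text specifies the local background.

```python
import numpy as np

def variance_image(diff_stack):
    """diff_stack: array of shape (N_exp, ny, nx) holding D_i(x, y)."""
    d_mean = diff_stack.mean(axis=0)                  # equation (5)
    return ((diff_stack - d_mean) ** 2).mean(axis=0)  # equation (4)

def candidate_positions(var_image, nsigma=3.0):
    """Pixels significantly above the background of the variance image."""
    background = np.median(var_image)
    # robust sigma from the median absolute deviation
    scatter = 1.4826 * np.median(np.abs(var_image - background))
    return np.argwhere(var_image > background + nsigma * scatter)
```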
Converting ADU to magnitude
The differential image technique calculates flux differences in analog-to-digital units (ADU). Therefore it is necessary to convert ADU to calibrated magnitudes for practical use.
After we obtain the instrumental magnitudes and their corresponding ADU fluxes in the photometric reference image we can convert ADU light curves into calibrated magnitudes through the steps described below. First we define the following values:
• ${}^{\lambda}I^{j}_{0}$: the flux at a given waveband λ of the j-th variable source candidate in the reference image, in ADU.
• ${}^{\lambda}I^{j}_{i}$: the flux at a given waveband λ of the j-th variable source candidate in the i-th image, in ADU.
The ISIS software gives differential fluxes of variable sources relative to their fluxes in the reference image, i.e., $\Delta{}^{\lambda}I^{j}_{i} = {}^{\lambda}I^{j}_{0} - {}^{\lambda}I^{j}_{i}$. To convert these values to magnitudes we need both the instrumental magnitude, ${}^{\lambda}m^{j}_{0}$, and the flux, ${}^{\lambda}I^{j}_{0}$, of each variable source candidate in the photometric reference image (Benkö 2001). Candidate variables in the photometric reference image are selected by the nearest neighbour search method. A search radius of 4 pixels is used. If several sources are present within the search radius on the reference image, we choose the closest one and set the proximity flag (see Section 4.3). The search radius (corresponding to about 1.8 arcsec) was chosen by experiment, and takes into account variations in observing conditions that can make accurate position determination in the variance image difficult. Then we calculate the calibrated magnitudes, ${}^{\lambda}m^{j}_{i}$, of the j-th variable source candidate in the i-th image through the ordinary formula,

$$ {}^{\lambda}m^{j}_{i} = -2.5 \log_{10}\!\left( \frac{{}^{\lambda}I^{j}_{0} - \Delta{}^{\lambda}I^{j}_{i}}{{}^{\lambda}I^{j}_{0}} \right) + {}^{\lambda}m^{j}_{0} + ({\rm conversion\ offset}), \quad (6) $$

where the conversion offset is derived as in Section 3.2 for each waveband and field of view.
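A sketch of the conversion in equation (6), assuming the additive conversion offset of Section 3.2; names are illustrative.

```python
import numpy as np

def diff_flux_to_mag(m0_inst, I0_adu, dI_adu, offset):
    """Calibrated magnitude in image i from the candidate's instrumental
    magnitude m0_inst and reference flux I0_adu (ADU), the ISIS
    differential flux dI_adu = I0 - Ii, and the field's conversion offset."""
    return m0_inst - 2.5 * np.log10((I0_adu - dI_adu) / I0_adu) + offset
```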
Variability indices and false alarm probability
A key characteristic of our data is that they are taken simultaneously in the J, H and Ks bands. Such a multi-band simultaneous photometric data set is ideal for searching for variability through correlations between photometric fluctuations at different wavelengths (Welch & Stetson 1993; Stetson 1996). This robust technique for evaluating stellar variability was further improved by Ferreira Lopes et al. (2015) and Ferreira Lopes & Cross (2016) to handle panchromatic flux correlations. They defined several statistical variability indices, but the main ones used in this work are the so-called Stetson index, ${}^{\lambda_1,\lambda_2}J_{WS}$, and the Ferreira Lopes indices, $I^{(s)}_{\rm pfc}$ and $I^{(s)}_{\rm fi}$. The Stetson index is computed as

$$ {}^{\lambda_1,\lambda_2}J_{WS} = \frac{1}{N_{\rm pair}} \sum_{i=1}^{N_{\rm pair}} {\rm sgn}\!\left(P_i\right)\sqrt{|P_i|}, \qquad P_i = {}^{\lambda_1}\delta_i \, {}^{\lambda_2}\delta_i \, {}^{\lambda_1,\lambda_2}\Gamma_i, $$

where $N_{\rm pair}$ is the number of simultaneous observation pairs in two different wavebands, $\lambda_1$ and $\lambda_2$, and ${}^{\lambda}\delta_i$ is the normalized residual for a given waveband, λ, computed as

$$ {}^{\lambda}\delta_i = \frac{{}^{\lambda}m_i - {}^{\lambda}\mu}{{}^{\lambda}e_i}, $$

where ${}^{\lambda}m_i$ and ${}^{\lambda}e_i$ are the time-series photometric measurements and their corresponding errors for a given waveband, respectively, and ${}^{\lambda}\mu$ is the weighted mean of the ${}^{\lambda}m_i$. At first, a simple mean of the ${}^{\lambda}m_i$ is used for ${}^{\lambda}\mu$. Then a weighting factor, ${}^{\lambda}g_i$, which is defined in Stetson (1996) as

$$ {}^{\lambda}g_i = \left[ 1 + \left( \frac{|{}^{\lambda}\delta_i|}{2} \right)^{2} \right]^{-1}, $$

is calculated and ${}^{\lambda}\mu$ is redetermined with those weights. The procedure is iterated until ${}^{\lambda}\mu$ stabilizes. This helps to reduce the influence of any outliers in a set of measurements. The correction factor, ${}^{\lambda_1,\lambda_2}\Gamma_i$, compensates for the statistical bias of a finite number of measurements; its definition is given in the papers cited above. The Ferreira Lopes index, $I^{(s)}_{\rm pfc}$ (in our case, the combination type $s$ is equal to 3), is calculated as

$$ I^{(3)}_{\rm pfc} = \frac{1}{N_{\rm pair}} \sum_{i=1}^{N_{\rm pair}} {\rm sgn}\!\left({}^{\lambda_1,\lambda_2,\lambda_3}\Lambda_i\right) \left| {}^{\lambda_1,\lambda_2,\lambda_3}\Lambda_i \right|^{1/3}, $$

with ${}^{\lambda_1,\lambda_2,\lambda_3}\Lambda_i$ defined as

$$ {}^{\lambda_1,\lambda_2,\lambda_3}\Lambda_i = {}^{\lambda_1}\delta_i \, {}^{\lambda_2}\delta_i \, {}^{\lambda_3}\delta_i. $$

The other Ferreira Lopes index, $I^{(s)}_{\rm fi}$, measures the fraction of simultaneous measurements whose residuals are correlated in sign across the $s$ wavebands, and takes values from 0 to 1. Refer to the papers mentioned above for the original definitions of the variability indices.
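A sketch of the two-band Stetson index for simultaneous pairs. The iterative weighted mean of Stetson (1996) is taken as given, and the finite-sample correction factor Γ is omitted; this is a simplification, not the exact published definition.

```python
import numpy as np

def stetson_jws(m1, e1, mu1, m2, e2, mu2):
    """Two-band Welch-Stetson index for N_pair simultaneous measurements.
    m, e are time-series magnitudes and errors; mu is the (weighted) mean.
    Correlated fluctuations in the two bands give positive products p."""
    d1 = (m1 - mu1) / e1   # normalized residuals, band 1
    d2 = (m2 - mu2) / e2   # normalized residuals, band 2
    p = d1 * d2
    return np.mean(np.sign(p) * np.sqrt(np.abs(p)))
```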
For purely random photometric errors, photometric fluctuations at different wavelengths should be uncorrelated, and their ${}^{\lambda_1,\lambda_2}J_{WS}$ or $I^{(3)}_{\rm pfc}$ should tend to zero in the limit of a large number of observations. The $I^{(3)}_{\rm fi}$ index provides a measure of signal correlation strength, with 1 being perfect correlation over all wavebands observed. For our data, the number of pairs, $N_{\rm pair}$, is typically more than one hundred. A minimum condition of $N_{\rm pair} > 10$ is set for calculating these variability indices. Note that the number of pairs, $N_{\rm pair}$, is equal to or less than the number of available exposures, $N_{\rm exp}$, because only data brighter than 19.0, 18.5, and 17.5 mag in the J, H and Ks bands, respectively, are used. These limits roughly correspond to the brightnesses at which the typical photometric errors exceed 0.3 mag (see Fig. 5). Hereafter these limits are referred to as "faint limits". Also, to minimize adverse effects from possible spikes in the data, up to five extreme outlier data points are identified by a 2-σ clipping algorithm. These possible spikes, as well as data points fainter than the faint limit, are ignored in the process of calculating the above indices.

[Figure 6. The standard deviation of the ${}^{\lambda}N_{\rm det}$ magnitudes is plotted against the mean magnitudes for all stars detected regardless of variability. Only stars with ${}^{\lambda}N_{\rm det} > 0.5\,{}^{\lambda}N_{\rm exp}$ are shown. The solid lines connect the median standard deviations at a given mean magnitude calculated for every 0.1 mag interval with a width of ±0.2 mag. The dashed lines are fits to the points of a function of the form ${}^{\lambda}SD({}^{\lambda}X) = {}^{\lambda}a + {}^{\lambda}b \exp({}^{\lambda}c\,{}^{\lambda}X)$, where ${}^{\lambda}X$ is the mean magnitude; these define the limits separating variable from non-variable sources. See text for details.]

During the calculation of the variability indices, we also computed the false alarm probability, FAP (how often a certain signal is observed just by chance), by using so-called Monte Carlo or bootstrap simulations. First the time-series data are randomly shuffled keeping the times of observation fixed. Then the variability index of the randomly shuffled data is calculated in the same manner as before. The data are randomly reshuffled and the process is repeated. This procedure is repeated 10,000 times (= $N_{\rm total}$) and the number of times the absolute value of the calculated index exceeds the original one (= $N_{\rm chance}$) is recorded. Then we calculate the FAP as FAP = $N_{\rm chance}/N_{\rm total}$.
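The shuffle-based FAP can be sketched as follows; `index_fn` is any of the variability indices above, and the per-band independent shuffling is an implementation choice assumed here — it destroys cross-band correlations while preserving each band's flux distribution.

```python
import numpy as np

def false_alarm_probability(index_fn, bands, n_total=10_000, seed=None):
    """bands: list of (mag, err, mean) tuples of simultaneous measurements.
    Returns FAP = N_chance / N_total for the observed index value."""
    rng = np.random.default_rng(seed)
    observed = abs(index_fn(bands))
    n_chance = 0
    for _ in range(n_total):
        shuffled = []
        for m, e, mu in bands:
            order = rng.permutation(m.size)  # shuffle epochs, times fixed
            shuffled.append((m[order], e[order], mu))
        if abs(index_fn(shuffled)) > observed:
            n_chance += 1
    return n_chance / n_total
```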
Standard deviation of observed magnitudes
Standard deviations of observed magnitudes are also calculated for each candidate. Again, to minimize adverse effects from possible spikes in the data, up to five extreme outlier data points are identified by a 2-σ clipping algorithm and are ignored. Here we define ${}^{\lambda}N_{\rm det}$ to denote the number of detections that passed the screening and are also brighter than the faint limit over the ${}^{\lambda}N_{\rm exp}$ available exposures in a given waveband, λ. ${}^{\lambda}N_{\rm det}$ can be equal to or less than ${}^{\lambda}N_{\rm exp}$, and can be different for each source. Note again that the possible spikes and data points fainter than the faint limit are rejected only for the purpose of calculating the standard deviation, and are present in the published time-series data. We then calculated the standard deviations of the ${}^{\lambda}N_{\rm det}$ magnitudes for sources with ${}^{\lambda}N_{\rm det} > 0.5\,{}^{\lambda}N_{\rm exp}$ regardless of variability. The calculated standard deviations are plotted against the corresponding mean magnitudes in Fig. 6. Not surprisingly, the standard deviation tends to become large as luminosity decreases. Obviously, large amplitude variables such as Mira-like variables blur the trend, but the majority of the sources plotted in the figure are not variable and we assume that the standard deviations arise mainly from photometric errors.
To formulate the trend we first calculated the median of the standard deviations, ${}^{\lambda}SD({}^{\lambda}X)$, for every 0.1 mag interval with a width of ±0.2 mag. Then, an exponential curve of the form

$$ {}^{\lambda}SD({}^{\lambda}X) = {}^{\lambda}a + {}^{\lambda}b \exp\!\left({}^{\lambda}c \, {}^{\lambda}X\right) $$

was fitted by the least squares method, where ${}^{\lambda}a$, ${}^{\lambda}b$, and ${}^{\lambda}c$ are free parameters, and ${}^{\lambda}X$ is the mean magnitude at which the corresponding ${}^{\lambda}SD({}^{\lambda}X)$ was calculated. We imposed the constraint that the standard deviation be a monotonically increasing function of mean magnitude, i.e., ${}^{\lambda}SD({}^{\lambda}X_1) < {}^{\lambda}SD({}^{\lambda}X_2)$ if ${}^{\lambda}X_1 < {}^{\lambda}X_2$, and ${}^{\lambda}SD({}^{\lambda}X_1) = {}^{\lambda}SD({}^{\lambda}X_2)$ otherwise. The resultant fitted curves are shown by thick lines in Fig. 6. The best-fitting parameter values are tabulated in Table 3 together with the corresponding χ² values. The derived formulae define thresholds to separate real variables from non-variable sources, and are also used for identifying variable sources.
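A sketch of the exponential fit using scipy's curve_fit on the binned medians; the initial guesses and placeholder data are assumptions, and the monotonicity constraint of the text is enforced here only by clipping the fitted curve afterwards rather than within the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def sd_model(X, a, b, c):
    """SD(X) = a + b * exp(c * X); X is the mean magnitude."""
    return a + b * np.exp(c * X)

# X_med, SD_med: binned median standard deviations (0.1 mag steps);
# placeholder values standing in for the measured medians
X_med = np.arange(12.0, 18.5, 0.1)
SD_med = 0.01 + 1e-8 * np.exp(1.0 * X_med)

popt, _ = curve_fit(sd_model, X_med, SD_med, p0=(0.01, 1e-8, 1.0), maxfev=10000)

# Enforce a monotonically non-decreasing variability threshold
threshold = np.maximum.accumulate(sd_model(X_med, *popt))
```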
Identifying variable sources
Following the procedure described in Section 3.3, we found more than 60 000 candidates in the survey field. We take the following steps to reject false variability candidates. In the following, let ${}^{\lambda}\mu^{j}$ and ${}^{\lambda}sd^{j}$ denote the mean and the standard deviation, respectively, of the ${}^{\lambda}m^{j}_{i}$ in equation (6), calculated with data brighter than the faint limit that also passed the up-to-five extreme outlier rejection procedure described above.
Rejecting foreground stars with relatively high proper motion
We have been monitoring the survey area for more than a decade. Therefore, for some foreground stars in the survey area, the apparent positional changes over this period can become large enough to detect. Such a relatively high proper motion (HPM) star can be misclassified as a variable star by our method of finding variables and should be rejected. We cross-matched the variable source candidates with the second data release (DR2) of the Gaia catalogue (Gaia DR2; Gaia Collaboration et al. 2018; Lindegren et al. 2018) with a tolerance radius of 1.0 arcsec and rejected sources with total proper motion $\sqrt{\mu_\alpha^2 \cos^2\delta + \mu_\delta^2}$ larger than 22.5 mas/year. The threshold value of 22.5 mas/year was chosen to reject stars that would have moved more than half a pixel in 10 years. We further cross-matched the variable source candidates with the OGLE catalogue of high proper motion stars towards the Magellanic Clouds (Soszyński et al. 2002 and Poleski et al. 2011) with a tolerance radius of 1.0 arcsec and removed those with cross-identifications. We checked the light curves of all matched HPM star candidates. Most of them have the distinctive light curves of false variables (e.g., linearly increasing or decreasing light curves), and none of the rest have meaningful light curves. In these screening processes 698 sources were rejected. Further investigation of the individual light curves revealed another 47 stars with linearly changing light curves. These all have Gaia proper motions >4 mas/yr and parallaxes >0.3 mas, and have been removed from our catalogue of variables.
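The proper-motion cut can be sketched as follows; the column naming follows the Gaia DR2 convention, where pmra already includes the cos δ factor, and the example usage line is an assumption about the catalogue structure.

```python
import numpy as np

def is_high_proper_motion(pmra, pmdec, limit=22.5):
    """pmra = mu_alpha* (= mu_alpha cos(delta)) and pmdec = mu_delta,
    both in mas/yr. Sources above `limit` would move more than half an
    IRSF pixel (0.453 arcsec) in ten years and are flagged as foreground."""
    return np.hypot(pmra, pmdec) > limit

# Example (hypothetical table): keep only candidates below the threshold
# keep = ~is_high_proper_motion(gaia["pmra"], gaia["pmdec"])
```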
Candidates with 3-band detections
For variable candidates with 3-band detections, we use the $I^{(3)}_{\rm pfc}$, $I^{(3)}_{\rm fi}$, and ${}^{\lambda}sd$ indices for evaluating variability. A variable source candidate is recognized as a real variable if the $I^{(3)}_{\rm pfc}$, $I^{(3)}_{\rm fi}$, and ${}^{\lambda}sd$ indices all exceed their adopted cutoff values. The upper and lower panels of Fig. 7 show the Ferreira Lopes variability indices, $I^{(3)}_{\rm pfc}$ and $I^{(3)}_{\rm fi}$, respectively. Cutoff values, above which a variable source candidate is considered to be real, were estimated by visually examining the light curves as a function of the indices while taking into account the FAP. By definition, candidates with large negative $I^{(s)}_{\rm pfc}$ values could also be real variables. However, experiments showed that they are likely to be false alarms, probably resulting from systematic photometric errors in a given waveband while the photometry of the other two wavebands correlates. Therefore we restrict ourselves to stars with positive index values. The resultant cutoff values are indicated by solid lines in the figures.
Candidates with 2-band detections
For variable candidates with 2-band detections, we use the ${}^{\lambda_1,\lambda_2}J_{WS}$ and ${}^{\lambda}sd$ indices for evaluating variability. A variable source candidate is recognized as a real variable if both the ${}^{\lambda_1,\lambda_2}J_{WS}$ and ${}^{\lambda}sd$ indices exceed their adopted cutoff values. Fig. 8 shows the distribution of the ${}^{\lambda_1,\lambda_2}J_{WS}$ index. The meaning of the marks and lines is the same as in Fig. 7. As is the case with the $I^{(3)}_{\rm pfc}$ index, candidates with large negative ${}^{\lambda_1,\lambda_2}J_{WS}$ values could also be real variables. Again, however, experiments showed that they are likely to be false alarms. Therefore we restrict ourselves to stars with positive index values. Cutoff values were estimated in the same way as for candidates with 3-band detections.
Candidates with 1-band detections
For candidates with 1-band detections, we first selected those with ${}^{\lambda}sd \ge 3.0 \times {}^{\lambda}SD({}^{\lambda}\mu)$ and checked all their light curves by eye.
Summary and comments on variable source identification
Although more than 99% of the selected variable sources were detected in more than one waveband, the rest were detected in only one of the three wavebands. Source spectrum, saturation, detector glitches, and so on, are the likely reasons for that. Table 4 shows a breakdown of the number of variable sources selected by the screening processes described above. Hereafter we refer to these selected variables as "variable sources". It should be noted that these sources might include unresolved variable galaxies and QSOs. Despite our efforts to reject non-physical variable sources, some might have slipped through the screening processes. The main remaining concern is foreground high proper motion stars without data in the Gaia DR2 catalogue. In our experiments, they have similar light curves, characterised by a steady decline in brightness throughout the period of the survey.

[Figure 8. Stetson variability index, ${}^{\lambda_1,\lambda_2}J_{WS}$, for candidates with 2-band detections, plotted as a function of the weighted magnitude mean ${}^{J}\mu$ or ${}^{Ks}\mu$. Symbols and lines are as in Fig. 7. For clarity, 34, 12, and 21 stars with ${}^{\lambda_1,\lambda_2}J_{WS} \ge 1.5$ are not shown for the J&H, H&Ks, and J&Ks pairs, respectively.]

In a paper dealing with difference image photometry in the context of gravitational microlensing,
Albrow et al. (2009) perform a series of experiments that show the effect of measuring a star in the difference image at the wrong coordinates. The effect can be a decrease of about 20% in apparent brightness for a shift of about 20% of the FWHM of the images. The declining light curves might then be explained as due to misplacing of the photometric aperture when measuring the difference images. Any variable sources with steadily declining light curves should be handled with care.
We also notice that there is often a large spread in magnitude in all filters in the data from the later seasons, especially after the 2013 season. Again, the work by Albrow et al. (2009) may be relevant here, in that they showed the error for an off-centre measurement depends on the ratio of the distance off-centre to the FWHM. Also, a deterioration in the sensitivity of the SIRIUS camera has been reported (Nakajima, private communication), such that the detection limits in all filters became shallower by about 1 mag over the years since the first season in 2000. The cause of the deterioration is not known. This may also contribute to the spread.
CATALOGUE DESCRIPTIONS
Along with this paper we publish a photometric point source catalogue for the 1°×1° survey area and also a variable source catalogue with time-series data. This section describes the features of the catalogues and how they are constructed.
Photometric point source catalogue
When we constructed photometric reference images for each field of view we combined the 10 best-seeing (typically 1 arcsec) images. The 10 images were chosen randomly in time, so the photometric results from their combined image will be time-averaged (more specifically, median filtered) over the 10 epochs; these may be useful for certain types of research. In addition, each of the 10 best-seeing images comprises ten 5 s exposure images in which bright sources will not be saturated. This is an advantage over the Kato et al. (2007) catalogue, where the photometry is saturated for sources brighter than about 11 mag. Note also that our photometry has a S/N comparable to or a bit better than theirs, because of the longer total exposure times (300 s for the Kato et al. (2007) catalogue and 500 s for ours). For these reasons we publish the photometric point source catalogue along with the variable source catalogue. First we simply compiled the photometric results for each field of view. In total 348 129, 321 440, and 279 900 sources are detected in J, H and Ks, respectively. In this process, sources within a radius of 10 arcsec from the four very bright stars, namely HD5302, CM Tuc, HD5688, and HD6172, were manually deleted because they were heavily affected by the bright haloes of these stars. These simple source lists for each waveband are contaminated by multiply-detected sources that fall in overlapping areas between adjacent fields. We removed such sources based on their spatial proximity (|Δr| < 1.0 arcsec). We adopted the result with the better S/N and discarded the others. This procedure leaves 314 689, 289 264, and 252 140 sources in J, H and Ks, respectively. Further elimination of saturated sources leaves 314 685, 289 239, and 252 118 sources in J, H and Ks, respectively. Then the J and H duplication- and saturation-free point source lists were merged using a positional tolerance of |Δr| < 1.0 arcsec. For the matched sources, coordinates were recalculated by taking an average of the coordinates from each band. In the rare cases when more than one source is present within the tolerance radius, the closest one was always adopted and the others were listed as solitary sources. The J and H band merged list was further merged with the Ks duplication- and saturation-free point source list in the same way to make the final J, H and Ks band merged photometric point source catalogue. Note that the foreground stars rejected from the variable star catalogue (see Section 3.4.3.1) are present in the point source catalogue.
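The band-merging step can be sketched with astropy's positional matching; the catalogue structure and column names here are assumptions, not those of the published catalogue.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

def merge_two_bands(ra_a, dec_a, ra_b, dec_b, tol=1.0 * u.arcsec):
    """Match list B against list A within `tol`. Returns, for each source
    in A, the index of its closest counterpart in B and a matched mask;
    unmatched sources in either list are kept as solitary sources."""
    coords_a = SkyCoord(ra_a, dec_a, unit="deg")
    coords_b = SkyCoord(ra_b, dec_b, unit="deg")
    idx, sep2d, _ = coords_a.match_to_catalog_sky(coords_b)
    return idx, sep2d < tol

# Merged coordinates for matched pairs are the mean of the two bands, as in
# the text; the J and H lists are merged first, then the result with Ks.
```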
The columns of the point source catalogue

Table 5 shows, as an example, the first five records that contain meaningful data for all three wavebands, extracted from the photometric point source catalogue. The catalogue is in increasing order of Right Ascension. The full version of the catalogue is available on the MNRAS server. The first two columns give the coordinates referenced to the 2MASS positions. The following columns list the calibrated magnitudes with their errors and the standard error of the calibration offset, and flags for multiple detection and proximity, in J, H and Ks, respectively. The calibrated magnitudes have not been corrected for interstellar extinction. The multiple detection flag is set to 1 if the source is detected in more than 1 field of view, and 0 otherwise. The proximity flag is set to 1 if there are nearby sources within a radius of 3 arcsec.
In the left panel of Fig. 9 we show a colour-magnitude diagram of all sources detected in our survey of the SMC. The lower and upper dashed lines indicate the 90% completeness limit of the moderately populated region (see Table 2) and the saturation limit, respectively. The numbers of stars in bins of area 0.025×0.025 mag² were computed and the fiducial colour was applied according to the number density of stars in each bin (see the annotated colour wedge). Because of the limiting magnitude and saturation effects, the lower right area and the upper part of the colour-magnitude diagram are sparsely populated. Note that photometry of red giants with moderate dust extinction is complete. However, care must be taken for stars with high mass-loss rates, whose fluxes are significantly attenuated by circumstellar extinction.
Variable source catalogue
A total of 60 961 variable source candidates were detected. Among them, 1 063 sources passed the verification process described in the previous section and were identified as real variable sources (See Table 4). The variable source catalogue lists the coordinates of the variables, the variability indices, the number of available exposures for each variable, and other relevant information.
The columns of the variable source catalogue
The last five records of the variable source catalogue are shown in Table 6. The table has many columns, but the structure repeats for the J, H and Ks bands from left to right. Explanations for each column of the table are given below:
Columns 1-2: Mean coordinates of the variable source. The coordinate is the mean value if the source is detected in more than one waveband.
Columns 3-5: Standard deviation of the magnitudes in J, H, and Ks. If unavailable, "-99.999" is given.
Columns 6-7: Ferreira Lopes variability indices. If unavailable, "-99.9999" and "-99.999" are given for I and J&Ks pairs, respectively.
Columns 11-12: Coordinates of the variable source determined on the photometric reference image. If unavailable, "999.999999" is given.
Columns 13-15: Calibrated magnitude, its error, and the error in the conversion offset. These are the same as in the photometric catalogue. The calibrated magnitude has not been corrected for interstellar extinction. If unavailable, "99.999" is given.
Column 16: Proximity flag. If another photometric reference star was present within 3 arcsec of the corresponding position given in Columns 11-12, the flag is set to 1; otherwise it is set to 0. This is identical to the mp flag in the point source catalogue.
Column 17: Name of the time-series data file. The name denotes the "mean coordinate" of the variable source given in Columns 1-2. If unavailable, "-" is given.
Column 18: Number of available exposures, Nexp, for the variable source. If unavailable, "0" is given.
Columns 11 to 18 are then repeated for each of the H and Ks bands.
Time-series data
The multi-epoch photometric data (time-series data) of variable sources are also published with this paper. Each variable source has at least one time-series data file for the corresponding waveband in which it is detected. The file name of the time-series data indicates the coordinates and waveband, e.g., "14.278202−72.827084.H.dat". If the source is detected in more than one waveband, the designated coordinate is the mean of the coordinates determined independently for each waveband. The time-series data file contains the information described below.
The columns of the time-series data catalogue
An example of the time-series data is shown in Table 7. The data file consists of 5 columns as follows:
Column 1: Heliocentric Julian day.
Column 2: Calibrated time-series magnitude. It has not been corrected for interstellar extinction. A value of 99.999 is given for poor measurements.
Column 3: Error of the time-series magnitude calculated by the differential image analysis, i.e., the error in the first term on the right-hand side of equation 6.
Column 4: Total systematic error determined from errors in the reference magnitude and calibration offset, i.e., the error in the term m_j^0 + conversion offset in equation 6.
Column 5: Name of the survey region and field of view in which the variable source was detected.
Table 5 caption: The first five records that contain meaningful data for all three wavebands, extracted from the photometric point source catalogue. The coordinates are followed, for each waveband, by successive columns listing the calibrated magnitude λm, the error in the calibrated magnitude λe, the error in the calibration offset λec, and flags for multiple detection λfm and proximity λfp. The catalogue is in increasing order of Right Ascension. The full version of the catalogue is available on the MNRAS server.
Table 6 caption: …pfc and I_fi, the Stetson variability indices J_WS^(λ1,λ2), the position on the photometric reference image, the calibrated magnitude λm and its error λe, the error in the calibration offset λec, the proximity flag, the name of the time-series data file, and the number of available exposures follow for the J, H, and Ks bands. The catalogue is in increasing order of Right Ascension. The full version of the catalogue is available on the MNRAS server.
Table 7 caption: The Heliocentric Julian day, the time-series calibrated magnitude λm, the error of the time-series magnitude calculated by the differential image analysis e_diff, the total systematic error computed from errors in the reference magnitude and calibration offset e_sys, and the name of the survey region and field of view in which the variable source was detected. Column headers: HJD [day], λm [mag], e_diff, e_sys, name of region & field of view. The full version of the data is available on the MNRAS server.
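Reading one of these time-series files is straightforward; a possible sketch follows. The five-column layout, the 99.999 sentinel for poor photometry, and the coordinate-based file naming come from the text; the function name and the decision to mask flagged rows are illustrative.

```python
import numpy as np

def read_timeseries(path):
    """Parse a per-source file such as '14.278202-72.827084.H.dat'.
    Columns: HJD [day], magnitude, e_diff, e_sys, region/field name."""
    hjd, mag, e_diff, e_sys = [], [], [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 5:
                continue                       # skip malformed rows
            hjd.append(float(parts[0]))
            mag.append(float(parts[1]))
            e_diff.append(float(parts[2]))
            e_sys.append(float(parts[3]))
            # column 5 (region/field name) is ignored in this sketch
    hjd, mag = np.array(hjd), np.array(mag)
    good = mag < 99.0                          # 99.999 flags poor photometry
    return hjd[good], mag[good], np.array(e_diff)[good], np.array(e_sys)[good]
```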
Note that the number of available exposures is not necessarily the same among the three wavebands. This is due to several reasons, but the main one is (infrequent) readout or reset failures in one or two detectors among the three SIRIUS detectors. In these cases, the corresponding raw data are entirely left out. A value of 99.999 is given for column 2 (and 3) if the exposure data are available but the photometry is of poor quality.
USING THE CATALOGUES
The main objective of this paper is to introduce our survey and publish the data. Detailed discussion of each type of variable source is beyond the scope of this paper but is in preparation. Here we just demonstrate the catalogue data.
Colour magnitude diagram of variable sources
In the right panel of Fig. 9, we show the colour-magnitude diagram of the variable sources that passed our evaluation processes. The size of each mark is proportional to the standard deviation of the J-band time-series magnitude, such that the radius of the mark is equal to 1/10 of the standard deviation. Light variation is clearly ubiquitous across the colour-magnitude diagram. It is notable that almost 100% of the bright red giants are variable. Many of the luminous and blue sources are also found to be variable. Fig. 9 only scratches the surface of the available survey data. Many interesting types of variable source are detected, and we believe that our data will be useful for a wide variety of research topics.
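The marker-scaling rule just described (mark radius equal to one tenth of the J-band standard deviation) is easy to reproduce. In this sketch the 1/10 scaling comes from the text; the axes, the conversion from radius to matplotlib's area parameter, and the array names are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def variable_cmd(colour, ks_mag, j_std, points_per_mag=100.0):
    # matplotlib's `s` is marker area in points^2; radius = sigma_J / 10
    radius_pts = (j_std / 10.0) * points_per_mag   # cosmetic scale factor
    plt.scatter(colour, ks_mag, s=np.pi * radius_pts**2, alpha=0.4)
    plt.gca().invert_yaxis()
    plt.xlabel("J - Ks [mag]")
    plt.ylabel("Ks [mag]")
```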
Sample light curves
As is clear from Fig. 9, variable sources with a wide range of colours and luminosities are detected in this survey. Fig. 10 provides an illustration of our data for three Mira variables, together with J and Ks observations from VMC-DR4 for the same stars. VMC-DR4 covers a time period when we had rather few measurements. The overall agreement between these data-sets is good, but it would be worth making a detailed analysis of possible differences between the photometric systems before the combined data are used.
Comparison to the OGLE survey
The OGLE project provides huge datasets of variable sources towards the Magellanic Clouds, and the data are readily accessible. Soszyński et al. (2010) and Soszyński et al. (2011) published catalogues of classical Cepheids and long-period variables in the SMC. The OGLE survey detected 1 350 classical Cepheids, 95 Miras and 513 semi-regular variables within the 1° × 1° area in common with our survey. We queried the OGLE-III database to extract known variable sources satisfying the following positional criteria: 11.9806 ≤ RA [degree] ≤ 15.5166 and −73.3441 ≤ Decl. [degree] ≤ −72.3215.
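The positional box query quoted above can be reproduced with a simple range filter; a sketch follows. The RA/Dec limits are the ones given in the text, while the catalogue structure and column names are assumptions.

```python
import numpy as np

def in_survey_box(ra_deg, dec_deg):
    """Positional criteria quoted in the text for the OGLE-III query."""
    return ((11.9806 <= ra_deg) & (ra_deg <= 15.5166) &
            (-73.3441 <= dec_deg) & (dec_deg <= -72.3215))

# hypothetical usage on an OGLE catalogue held as numpy arrays:
# mask = in_survey_box(ogle_ra, ogle_dec)
# n_cepheids_in_box = np.count_nonzero(mask & (ogle_type == "CEP"))
```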
Among them, 236 classical Cepheids, 94 Miras and 332 semi-regulars are identified as variables in our survey. Obviously our survey missed very short period Cepheid variables that are faint at near-infrared wavelengths. The sole missing Mira (OGLE-SMC-LPV-06978) is located at the very edge of our survey area. Some of the missing semi-regular variables are located just beyond the edge of our survey area, but most of the others were recognised as variable source candidates and were then eliminated by the variability identification processes. It is likely that their amplitudes of light variation are small. Meanwhile there should be some infrared sources that are not detected | 2017-11-03T09:37:09.759Z | 2018-09-18T00:00:00.000 | {
"year": 2018,
"sha1": "418651dbb271fa3b579bacb3087197c0dbebe6b9",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/481/3/4206/25863649/sty2539.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "418651dbb271fa3b579bacb3087197c0dbebe6b9",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
239188972 | pes2o/s2orc | v3-fos-license | THE NEO-JURISPRUDENCE OF PIL IN SUPERIOR COURTS OF PAKISTAN: A COMPARATIVE ANALYSIS OF PRE AND POST LAWYERS’ MOVEMENT WORKING OF SUPERIOR COURTS
The dynamics of the superior judiciary in Pakistan have undergone a drastic transformation in approach and working since the 2007 emergency, which was followed by a landmark movement of the civil and legal fraternity for the restoration of constitutional supremacy. The neo-jurisprudence is being applauded and criticized at the same time. The extensive use of suo motu powers and public interest litigation on the one hand, and frequent judicial review of executive and legislative action on the other, have been the main sources of contention between the judiciary and the other two pillars of the state, the legislature and the executive. The superior courts are recognized as the ultimate savior of fundamental rights and guardian of the constitution as well as of the rights of the people. At the other extreme, criticisms such as an activist judiciary, disrespect for the popular will, and making rather than interpreting law are commonly attributed to the superior judiciary. The study is qualitative in nature and presents a comparative analysis of trends in the superior courts before and after the Lawyers' Movement. It also aims to justify the proactive approach, especially in providing social justice where the organs of state have failed to respond to the exigencies of the time.
Introduction
The judiciary, as one of the three pillars of the state, occupies an important place in the socio-political sphere of the state. The study attempts to examine the dynamic shift in the working of the superior courts, especially their proactive approach towards social justice and the rule of law, after the imposition of emergency in 2007, followed by the restoration of the constitution after the landmark movement of the civil and legal fraternity. The judiciary emerged in the post-2007 scenario as the ultimate savior of the people of Pakistan, who started associating their value judgments with the decisions of the superior courts. The role of the superior courts has become very important and prominent in the wake of the state's failure to discharge its fundamental responsibilities as enshrined in the constitution. The judicature also acts as a check on any misadventure by the other two organs of state attempting to transgress their powers. The article attempts to help people, lawyers, students of law and other actors to understand and evaluate the development of this jurisprudence and the working of the superior courts, which is essential for safeguarding constitutional democracy, that is, constitutional supremacy as against parliamentary supremacy.
Literature Overview
Public interest litigation is a newly evolved concept in modern jurisprudence. The incapacity of the traditional system of adjudication, with its procedural and technical limitations, to provide social justice to people at large in the wake of concepts of social democracy has given rise to a deviation from the settled Anglo-Saxon rules, and a more relaxed approach is being applied to provide justice to all without insisting on the requirements of locus standi and the aggrieved party. A more liberal approach is being applied by the courts to provide social justice by shifting away from traditional procedure and commitment to precedent, as explained by Prof. Robertson in the context of developments taking place in Eastern Europe and South Africa. 2 Such a shift in the approach of the courts has resulted in the contemporary rise of judicial power in terms of judicial review and a corresponding fading away of parliamentary sovereignty. 3 Public interest litigation is a method to provide speedy and adequate remedies for the violation of the fundamental rights of the public at large. With recognition of the widening gap between the haves and have-nots, the need for a platform and a mechanism that could alleviate the miseries of the economically and socially deprived sections of society, whether at the hands of a privileged few or on account of legislative and executive high-handedness, was never so pressing as it is now. It is litigation in the interest of the public as a whole, without regard to any socio-economic or religio-political orientation. It is a recognized legal action in modern jurisprudence for the enforcement of the public interest in a court of law. Public interest is defined as "something in which the public, the community at large, has some pecuniary interest, or some interest by which their legal rights or liabilities are affected. It does not mean anything so narrow as mere curiosity or as the interest of the particular localities, which may be affected by the matter in question". 4 The rise of public interest litigation is neither of Pakistani origin nor a South Asian phenomenon. While discussing the developments that emerged in Latin America in the context of courts protecting individual rights by limiting governments, Helmke and Rios-Figueroa, in their joint work, describe public interest litigation as a global phenomenon. 5 The development of PIL globally, and especially in India, had a great influence upon the jurisprudence of the superior courts of Pakistan. However, the development was a direct result of the recognition of democratic values, the emergence of concepts like social welfare, and the adoption of the Human Rights Declaration. PIL is a global movement and not a geographical phenomenon, but in South Asia, and especially in Pakistan, it was the direct result of state inaction and a lack of will to deliver. The responsibility of the judiciary as guardian of the rights of the people becomes of prime importance in the event of the failure of the state to deliver according to its constitutional mandate. The legitimacy crisis had also forced the state institutions to use the judicial organ as an umbrella for implementing their decisions and policies and, ultimately, to abandon some of their privileges and powers and confer the same on the judiciary. 6 Apart from the institutional vacuum, the incapacity of the traditional judicial system to provide a speedy and efficacious remedy for the public at large has also become one of the contributing factors towards the extensive use of inherent jurisdiction and powers by the superior courts.
Courts are sensitized to the changing socio-cultural and environmental needs of society, and in order to remain relevant, law must keep pace with society. 7 The development of the media and easy access to information have been catalysts for the development of public interest litigation; they have not only provided access but have made people generally rights-conscious. The media, by discharging its obligation of highlighting every issue of social importance, has not only made the executive and legislature more amenable to accountability but has also increased the demand for justice and compensation for wrongs. The media has contributed to the success of judicial activism by reporting it in the electronic and print press. 8
Research Methodology
The study is qualitative in nature. The data were collected from published books, articles and legal decisions of the superior courts of Pakistan. The data have been assessed and evaluated, and a comparative distinction is drawn between the working and decisions of the superior courts in the periods before and after the 2007 scenario: the imposition of emergency followed by a landmark movement of the civil and legal fraternity for the independence of the judiciary.
PIL Jurisdiction in Pakistan: (Pre Movement Period)
In Pakistan, the concept of public interest litigation emerged from the constitutional interpretation of Article 199 and Article 184(3) in 1988, though for many it cannot, in the strict sense, be termed public interest litigation but rather a petition of public importance. In the case reported as Miss Benazir Bhutto v Federation of Pakistan and another (Supreme Court 1988), Benazir Bhutto, as co-chairman of the Pakistan Peoples' Party, challenged the election laws which directly affected the PPP and other parties in the upcoming elections of 1988. In response to the objection raised to the maintainability of the petition on the ground of locus standi, the aggrieved-party requirement, the apex court observed that the adversarial nature of contemporary litigation was ill-suited to the grant of relief to a large number of unidentified litigants, recognizing the limitations of the traditional Anglo-Saxon procedure that had shut the door of justice on the poor and the masses. The concept of "aggrieved party" was liberally interpreted and given a new and extended meaning. The Court further held that the petitioner had challenged the amendments made in the Political Parties Act 1962 as being in contravention of Articles 17 and 25 of the Constitution. The court, while adjudicating, not only defined aggrieved-party status but also observed that the effect of sub-sections 1 and 6 of Section 3-B of the Political Parties Act 1962 would be the automatic exclusion of a political party in case of non-registration. The Court observed that Section 3-C of the Act was inserted for a limited purpose, ceased to have any effect, and could not provide an alternative in the case of non-registration under the section. Based on such observations, political parties were also adjudged aggrieved parties. While interpreting Article 184(3) of the Constitution, the Court observed that the approach should not be a ceremonious observation of the rules of interpretation; rather, the approach should be progressive and inspired by the Objectives Resolution and the concepts of social justice and democracy as enshrined in Islam. 10 Within two years of the Miss Benazir Bhutto case, the then Chief Justice took suo motu notice in the Darshan Masih case, which can be described as the first example of public interest litigation in the constitutional jurisprudence of Pakistan. It was a suo motu action in which direct cognizance was taken by the Supreme Court under Article 184(3) of a telegram, which was converted into a petition. Public interest litigation in Pakistan is identified with the exercise of the suo motu jurisdiction of the Supreme Court. Darshan Masih, who belonged to an unprivileged class, moved the court through a telegram stating that, after court intervention followed by the release of him and his family, three among them had been abducted by their owner, and the rest lived in constant apprehension and fear; they were hiding and living like animals, without food and shelter, and requested to be allowed to live like humans. The Court, after taking cognizance, issued notice and started hearing the case to solve the problem of bonded labour, and a new mechanism of systematic inquiry, as against the adversarial system, began in the constitutional jurisprudence of the superior courts with Darshan Masih v The State (1990 Supreme Court). The court further observed that procedural requirements can be dispensed with in cases of violation of fundamental human rights.
The then Chief Justice, Mr. Justice Afzal Zullah, who had taken notice of the telegram of Darshan Masih, took a keen interest in the field of public interest litigation. It was his dynamic approach and efforts which made the Quetta Declaration possible, a milestone in the evolution and development of PIL in Pakistan and in the commitment to social justice for society as a whole. 12 The Supreme Court not only assumed a new jurisdiction but relaxed the procedural constraints of the adversarial system to alleviate the miseries of the unprivileged class and to protect the fundamental rights of the economically deprived sections of society.
The apex court of Pakistan, in its landmark judgment in the internationally recognized Shahla Zia case, interpreted the concept of the "right to life" and observed that the right to a clean environment is a fundamental right of all citizens of Pakistan, falling under the right to life and the right to dignity as provided in Articles 9 and 14 of the Constitution, while acknowledging the importance of the Rio Declaration on Environment. 13 A new platform was provided to the people, where the actions of the executive as well as the legislature were open to scrutiny. Judicial power came to be characterized by a new people-oriented profile, and the constitutional movement gained the status of a social movement. 14 The development of public interest litigation has resulted in the development of corresponding tools such as the extended meaning of the aggrieved person, the widening of the scope of locus standi, the softening of the law of limitation, the relaxation of procedure as well as precedent, and the suo motu jurisdiction.
Development Post Lawyer Movement
The superior courts, as guardians of the fundamental rights of the people of Pakistan, adopted an unprecedented proactive approach to protecting the fundamental rights of the people. The creation of the Human Rights Cell in the Supreme Court was a catalyst in the development of PIL. Though it was established in the early 1990s, it was revitalized during the tenure of the then Chief Justice of Pakistan, Mr. Justice Iftikhar A. Choudhary, and reached its zenith after the lawyers' movement. PIL invoking Article 184(3) of the constitution took various procedural forms, namely human rights cases, complaints converted into constitutional petitions, suo motu cases, and constitutional petitions filed on the original side by individuals or social groups on issues of gross violation of fundamental rights and of public importance. Over this period, the apex court of Pakistan has spread the new jurisprudence of PIL, which is the result of drastic changes and radical departures in both procedure and domain.
The paradigm shift in the approach of the apex courts can be seen in their efforts over this period to break the shell of judicial subservience and to acquire the status of guardian of constitutional supremacy. The apex courts in Pakistan came to enjoy sanctuary not only in constitutional guarantees but also in public legitimacy. The people at large started attaching their value judgments to the judiciary, and in the wake of increasing public legitimacy, societal transformation through the judicialization of socio-political issues became the new priority of the superior courts. In the words of Justice S.K Anand, the former Chief Justice of India: "it is because of public opinion that the higher judiciary in the country occupies the position of pre-eminence among the three organs of state." 15
A Comparative Analysis
There have been dynamic transformations in the working of the superior courts in the context of procedural requirements after the lawyers' movement. The requirements as to the correct form of procedure in pleadings are no longer relevant; the intent of a complaint has become more important than its contents and structure. The requirements as to pleadings, as well as those concerning the petitioner, have become secondary, whereas they were conditions precedent in the formative phase of the development of the PIL jurisdiction in Pakistan. In Muhammad Yaseen v Federation of Pakistan (Supreme Court 2012), the Court affirmed that exercising the jurisdiction is not dependent upon the existence of a petitioner. 16 The requirement of a representative petitioner has become legally irrelevant when the superior courts decide an issue in exercise of their original jurisdiction. In Khawaja Muhammad Asif v Federation of Pakistan and others (Supreme Court 2013), the Court observed that the jurisdiction under Article 184(3) can be exercised even without the existence of a petitioner when information justifying the exercise of jurisdiction is brought to the notice of the court. The Court observed this when Khawaja Muhammad Asif expressed his inability, as petitioner, to pursue the petition on becoming a Federal Minister. 17 In the post-movement scenario, the Supreme Court even relaxed the maintainability bar while exercising its original jurisdiction, which used to be a condition precedent for invoking the remedy before the superior courts. While exercising their original jurisdiction, the superior courts had always honoured the maintainability bar, and for invoking the jurisdiction under Article 184(3) the petitioner was required to satisfy the court that no other efficacious remedy was available. In the new jurisprudence, the availability or otherwise of an efficacious remedy is no longer a bar. The bar as to lis pendens, as well as previous precedents and decisions of the courts which had attained finality, fell out of consideration. In order to provide effective social justice, the procedural and structural constraints of the judicial hierarchy were overcome, and objections that a matter was pending before other forums competent to try and adjudicate the controversy, or that it had already been substantially decided before another competent forum, were overruled. Such a departure can be seen in the complaint against the establishment of the Macro Habib Store on a playground. 18 In the aftermath of the restoration of the supremacy of the constitution after the 2009 lawyers' movement, concepts like judicial restraint, the separation of powers and the political question doctrine took a back seat. The bar to the justiciability of certain controversies, or of matters otherwise constitutionally committed to other institutions, was of no relevance when the matter was of public importance requiring the immediate intervention of the superior courts. Questions and controversies which used to be non-justiciable, and in which the courts had usually declined to interfere in the past, are no longer immune. Foreign policy issues, economic matters and matters of policy, which were previously not entertained on account of questions of justiciability, for lack of professional qualifications, resources and institutional competence, have been taken up and decided in the post-lawyers'-movement era.
(Suo Motu Case No. 04 of 2010) In a matter involving a political question, the then Prime Minister, Syed Yousuf Raza Gillani, was prosecuted in contempt proceedings 19 and attended the hearings arising out of a petition in Dr. Mubashir Hassan & Others v Federation of Pakistan & Others (Supreme Court 2010). 20 Many matters involving issues having a socio-economic bearing on the public at large were taken up and decided in the contemporary jurisprudence, ignoring the justiciability bar. The Supreme Court has taken cognizance of and reviewed executive decisions with respect to rental power plants, 21
Year-wise Institution of HR cases
The graph, extracted from the Supreme Court Annual Report, 32 clearly demonstrates the importance of human rights cases and public interest litigation in the working of the Supreme Court. However, with the increase in the suo motu jurisdiction and PIL, the rhetoric of judicial activism has risen proportionally within some judicial discourse and within political and executive platforms. Judicial overreach on political-institutional issues, the mandate over which otherwise explicitly falls within the domain of other institutions, is a criticism associated with the working of the superior courts as being against the separation of powers.
Conclusion
The constitutional history of Pakistan and its development, in the context of the role of the judiciary, remained shrouded in the clouds of the doctrine of necessity, which became embedded in the jurisprudence of Pakistan. As a result, many military interventions were witnessed, and most executive actions remained immune from judicial review. The constitutional history of Pakistan has been like a pendulum swinging from one extreme to another. Despite many anomalies, the superior courts in Pakistan have succeeded in forming an image as the last savior of the people of Pakistan and the guardian of their rights. The success of the lawyers' movement culminated in the restoration of the judiciary and the reinstatement of the then Chief Justice Iftikhar Muhammad Choudhary along with 42 of his fellow judges. In consequence, societal transformation through the judicialization of socio-political issues became the new priority for the superior courts. This phenomenon of the judicialization of politics and the new emerging traits within contemporary democracy were explained by Guarnieri and Pederzoli, with reference to the Western European judiciary, in these words: "…. Noting that the social and political significance of the judiciary has become a common trait of contemporary democracy: a phenomenon described as the judicialization of politics. Resultantly judges' participation in politics has risen: from the elaboration of public policies as a result of the implementation of laws and the review of their constitutionality, to their implementation by means of the judiciary's overview of administrative agencies". 33 One of the main defences of judicial power, as explained by Ely, is that the court is especially necessary to protect against the subversion or erosion of the constitutional rights of all individuals. 34 In the words of Choper (Choper, 1980), "if judicial power were both independent and well defined the judges' learning and integrity would effectively prevent the erosion of public liberty". 35 Judicial review in the contemporary world does not fit well within the orthodox separation-of-powers model. Constitutional courts perform more than the judicial function traditionally enjoined upon them. The constitution has turned from merely the law of the land into a dynamic and living document, a true picture of the people's aspirations, embodying the values and principles of society. The Court, by virtue of the constitutional power of judicial review, has taken on the task of realizing the values provided in the Constitution.
An indisputably important role is being performed by the superior courts through the weapon of PIL in the struggle to ensure social justice, yet many doubt its capacity to achieve the desired results on account of the very nature of the process, i.e., litigation. Critics believe that social institutions cannot be reformed through litigation, which is otherwise a complex phenomenon, and, on the other hand, that over-reliance on the Court would ultimately | 2021-10-21T15:57:05.699Z | 2021-06-30T00:00:00.000 | {
"year": 2021,
"sha1": "afd52a4a842c36b3893db96bffe8b86654d9e8dd",
"oa_license": "CCBYNC",
"oa_url": "https://jsshuok.com/oj/index.php/jssh/article/download/444/400",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2f5dd974b1baaf7ef06522624ff8dd0f58ff430e",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": []
} |
54543275 | pes2o/s2orc | v3-fos-license | A Time-Periodic Bifurcation Theorem and its Application to Navier-Stokes Flow Past an Obstacle
We show an abstract time-periodic bifurcation theorem in Banach spaces. The key point, as well as the novelty of the method, is to split the original evolution equation into two different coupled equations, one for the time-average of the sought solution and the other for the "purely periodic" component. This approach may be particularly useful in studying physical phenomena occurring in unbounded spatial regions. Actually, we furnish a significant application of the theorem, by providing sufficient conditions for time-periodic bifurcation from a steady-state flow of a Navier-Stokes liquid past a three-dimensional obstacle.
Introduction
Time-periodic bifurcation from a steady-state regime is a commonly observed phenomenon in the dynamics of viscous liquids, for both bounded and unbounded flows; see, e.g., [11, Section 10.3], [19, Chapter 3]. As is well-known, it may take place when the magnitude of the driving mechanism, m (say), reaches a certain critical value, m_c. Basically, if m < m_c the flow is steady, whereas once m > m_c the flow shows an unsteady, time-periodic character. It must be emphasized that the latter occurs even though the driving mechanism is time-independent.
The rigorous mathematical analysis of this type of bifurcation for bounded flows, including stability properties of the bifurcating branch, has received a number of important contributions, beginning with the works of Iudovich [14], Joseph & Sattinger [15], and Iooss [13] in the early 1970s. In particular, these papers laid the foundation for a rigorous understanding of the complicated bifurcation phenomena occurring in the Taylor-Couette experiment; see [4].
However, it must also be emphasized that the approaches employed by these authors -mostly resembling ideas introduced by E. Hopf in [12] on similar problems for systems with a finite number of degrees of freedom- do not apply to the case of an unbounded flow. As a result, the important time-periodic bifurcation phenomenon occurring in the flow of a viscous liquid past a body, like a cylinder (in 2D) or a ball (in 3D), is left out. From a strictly technical viewpoint, this failure is due to the circumstance that the above approaches require the relevant time-independent, linearized operator, L, to be continuously invertible in the appropriate Hilbert space where the problem is formulated. Now, while this condition is certainly satisfied if the region of flow is bounded, since in that case 0 can only be an eigenvalue of L, in the case of an unbounded flow it fails, because 0 becomes a point of the essential spectrum [2, Theorem 2 and Remark 2]. Nevertheless, as first pointed out and proved by Babenko [3], the operator L becomes Fredholm of index 0 provided it is defined in the Banach space, B, where steady-state solutions belong. Therefore, the bounded invertibility of L, thus defined, is again ensured by requiring that 0 is not an eigenvalue. In the light of these considerations, it becomes natural to formulate the time-periodic bifurcation problem in the space B, an approach first taken by Babenko [3] and successively extended and improved by Sazonov [17].
However, this kind of procedure has two drawbacks. On the one hand, it gives up the simplicity of the Hilbert-space formulation, and, on the other hand and more importantly, it is not able to cover the case of time-periodic bifurcation of plane flow past a cylinder [1, p. 39]. Motivated by the latter, in [8] the present author has introduced a different method for the study of time-periodic bifurcation of viscous flow that allows him to overcome both drawbacks. The method stems from the observation that, in the case of an unbounded flow, the (time-independent) time-average over a period, v, of the sought solution, and the "purely periodic" (time-dependent) component, w, belong, in general, to two different function spaces, with, in particular, v ∈ B. With this in mind, the original time-dependent equation can be equivalently rewritten as two coupled equations, one of the elliptic type (for v), and the other of parabolic type (for w). The problem then simplifies to a great extent, in that one can show that, in order to obtain the desired bifurcation result, it suffices to investigate, basically, only the properties of the evolution equation which is proved to be naturally formulated in the same Hilbert-space framework as that of bounded flow.
We believe that the method introduced in [8] could be very useful in many other problems of mathematical physics, and, in particular, those regarding phenomena occurring in unbounded spatial regions.
For this reason, the main objective of this paper (Section 3) is to employ the basic ideas introduced in [8] to prove an abstract time-periodic bifurcation result that could be applied to more general problems; see Theorem 3.1. As hinted earlier on, this theorem is formulated for the coupled systems constituted by a time-independent and a first order time-dependent equation in Banach and Hilbert spaces, respectively; see (3.5). (1) Under suitable regularity conditions on the nonlinearities (see (H4) and Remark 3.3) and technical assumptions (see (H3)), we then show the existence of a one-parameter family of bifurcating time-periodic solutions, provided the spectrum of the relevant linearized operators satisfies certain specific conditions (see (H1), (H2), (H5)). Roughly speaking, they amount to assume that the linear (time-independent) operator involved in the evolution equation possesses a pair of simple, purely imaginary, complex conjugate eigenvalues, "crossing" the imaginary axis with non-zero speed; see also Remark 3.1. Moreover, we show that this bifurcating branch is unique, and that the type of bifurcation can only be super-or sub-critical.
The second part of the paper (Section 4) is dedicated to the application of Theorem 3.1 to the study of time-periodic bifurcation of a steady-state solution to the Navier-Stokes equations in an exterior three-dimensional domain (flow past a body). In particular, we show that all the technical assumptions of Theorem 3.1 are indeed met (see Proposition 4.1-Proposition 4.3), so that the results stated in Theorem 3.1 apply under the above-mentioned hypotheses on the spectrum. We wish to stress that our results differ from those of [17], on the one hand because they are obtained, basically, in a Hilbert-space framework, and on the other hand because, unlike [17], we also show the uniqueness property of bifurcating solutions.
Notation
The symbols N, Z, and R, C stand, in that order, for the sets of positive and relative integers, and the fields of real and complex numbers.
Ω denotes a fixed exterior domain of R³, namely, the complement of the closure of a bounded, open, and simply connected set Ω₀ ⊂ R³. We shall assume Ω is of class C², and take the origin O of the coordinate system in Ω₀.
(1) We wish to remark that our approach also admits of a straightforward extension to Banach spaces; see Remark 3.2.
Also, we denote by R_* > 0 a number such that Ω̄₀ is strictly contained in the ball of radius R_*, where the bar denotes closure. We set u_t := ∂u/∂t, ∂₁u := ∂u/∂x₁, and indicate by D²u the matrix of the second derivatives of u.
For an open and connected set A, L^q(A) and W^{m,q}(A) stand for the usual Lebesgue and Sobolev classes, respectively, of real or complex functions. (2) Norms in L^q(A) and W^{m,q}(A) are indicated by ‖·‖_{q,A} and ‖·‖_{m,q,A}. The scalar product of functions u, v ∈ L²(A) will be denoted by ⟨u, v⟩_A. In the above notation, the symbol A will be omitted unless confusion arises.
(2) We shall use the same font style to denote scalar, vector and tensor function spaces.
In the following, B is a real Banach space with associated norm ‖·‖_B. By B_C := B + iB we denote the complexification of B.
Moreover, for u, v ∈ L²_{2π,0}(Ω) we put … Finally, by c, c₀, c₁, etc., we denote positive constants whose particular value is unessential to the context. When we wish to emphasize the dependence of c on some parameter ξ, we shall write c(ξ).
An Abstract Bifurcation Theorem
The objective of this section is to prove a time-periodic bifurcation result for a general class of equations in Banach spaces. Before proceeding in that direction, however, we first would like to make some comments that will also provide the motivation for our approach.
Many evolution problems in mathematical physics can be formally written in the form u_t + L(u) = N(u, µ) (3.1), where L is a linear differential operator (with appropriate homogeneous boundary conditions), and N is a nonlinear operator depending on the parameter µ ∈ R, such that N(0, µ) = 0 for all admissible values of µ. Then, roughly speaking, time-periodic bifurcation for (3.1) amounts to showing the existence of a family of non-trivial time-periodic solutions u = u(µ; t) of (unknown) period T = T(µ) (T-periodic solutions) in a neighborhood of µ = 0, and such that u(µ; ·) → 0 as µ → 0. Setting τ := ωt, with ω := 2π/T, the problem reduces to finding a family of 2π-periodic solutions to the rescaled equation (3.2) with the above properties. We now write u = ū + (u − ū) =: v + w, where the bar denotes the time average over a period, and observe that (3.2) is formally equivalent to two coupled equations (3.3), one for v and one for w; a sketch is given below. At this point, the crucial issue is that in many applications -typically when the physical system evolves in an unbounded spatial region- the "steady-state component" v lives in function spaces with quite less "regularity" (3) than the space where the "purely periodic" component w does. For this reason, it is much more appropriate to study the two equations in (3.3) in two different function classes. As a consequence, even though formally being the same as differential operators, the operator L in (3.3)₁ acts on and ranges into spaces different from those of the operator L in (3.3)₂. The general abstract theory that we are about to describe stems exactly from these considerations.
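The displayed equations (3.1)-(3.3) did not survive in the text above, so the following LaTeX block offers a plausible reconstruction of the splitting, consistent with the surrounding prose (time rescaling τ = ωt, averaged component v, oscillatory component w) but not guaranteed to match the paper's exact notation:

```latex
% Hypothetical reconstruction of (3.1)--(3.3); notation inferred from the prose.
\begin{align}
u_t + L(u) &= N(u,\mu), \tag{3.1}\\
\intertext{with $\tau := \omega t$, $\omega := 2\pi/T$, and $u$ now $2\pi$-periodic in $\tau$:}
\omega\, u_\tau + L(u) &= N(u,\mu). \tag{3.2}\\
\intertext{Writing $u = \bar u + (u - \bar u) =: v + w$, where $\bar u$ is the time
average over one period, (3.2) splits into the coupled system}
L(v) &= \overline{N(v+w,\mu)}, \tag{3.3$_1$}\\
\omega\, w_\tau + L(w) &= N(v+w,\mu) - \overline{N(v+w,\mu)}. \tag{3.3$_2$}
\end{align}
```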
To this end, let X, Y be Banach spaces with norms ‖·‖_X, ‖·‖_Y, respectively, and let H be a Hilbert space with norm ‖·‖_H and corresponding scalar product ⟨·, ·⟩.(4) Moreover, denote by L₁ a bounded linear operator, and by L₂ a densely defined, closed linear operator with a non-empty resolvent set P(L₂). For a fixed (once and for all) θ ∈ P(L₂) we denote by W the linear subspace of H closed under the norm ‖w‖_W := ‖(L₂ + θI)w‖_H, where I stands for the identity operator. We then define the following spaces … with corresponding norms … The scalar product in H_{2π,0} is defined by(5) … Let N := (N₁, N₂) be a (nonlinear) map satisfying the following properties: … We can then formulate the following.
(3) Here 'regularity' is meant in the sense of behavior at large spatial distances.
Bifurcation Problem: Find a neighborhood of the origin U(0, 0, 0) ⊂ X × W_{2π,0} × R such that the equations (3.5) possess there a family of non-trivial 2π-periodic solutions (v(µ), w(µ; τ)), for some ω = ω(µ) > 0, such that (v(µ), w(µ; ·)) → 0 in X × W_{2π,0} as µ → 0.
With a view to solving the above problem, we begin by making the following assumptions (H1)-(H5) on the operators involved.
In order to prove our main Theorem 3.1, we begin by drawing a number of consequences from the above assumptions. In this regard, let v₀ be the (unique) normalized eigenvector of L₂ corresponding to the eigenvalue ν₀, and set … which, by (H2) and the fact that w₀ = 0, implies w_ℓ = 0 for all ℓ ∈ Z − {±1}. Thus, recalling that µ₀ is simple, we infer w ∈ S, and the lemma follows.
Denote by L₂* the adjoint of L₂. Since ν₀ is simple (by (H2)), from classical results on Fredholm operators (e.g. [20, Section 8.4]) it follows that there exists at least one element … For future reference, we observe that, with the normalization (3.6), it follows that … is the adjoint of Q. In view of the stated properties of v₀*, we infer that …, and the lemma follows from another classical result on Fredholm operators (e.g. [20, Proposition 8.14(2)]).
With this result in hand, we shall now follow a more or less standard procedure to show that our Bifurcation Problem does in fact have a solution. To this end, in order to ensure that the solutions we are looking for are nontrivial, we endow (3.5) with the side condition (3.8), where ε is a real parameter ranging in a neighborhood of 0. We may then prove the main result of this section.
Then, the following properties are valid.
(a) Existence. There are analytic families … satisfying (3.5), (3.8) for all ε in a neighborhood I(0), and such that …
(b) Uniqueness. There exists a neighborhood U such that every (nontrivial) 2π-periodic solution to (3.5), (z, s), lying in U must coincide, up to a phase shift, with that member of the family (3.9) having ε ≡ (s|v₁*).
(c) Parity. The functions ω(ε) and µ(ε) are even: … Consequently, the bifurcation due to these solutions is either subcritical or supercritical, a two-sided bifurcation being excluded. (7)
Proof. We scale v and w by setting v = ε ṽ, w = ε w̃, so that problem (3.5), (3.8) becomes …
(7) Unless µ ≡ 0.
with U(0) and V(ω₀) neighborhoods of 0 and ω₀. Since, by (H4), we have in particular N₁(0, 0, v₁, 0) = N₂(0, ω₀, v₁, 0) = 0, using (3.7)₁ and Lemma 3.1 we deduce that, at ε = 0, the equation F(ε, U) = 0 has the solution U₀ = (0, ω₀, 0, v₁). Therefore, since by (H4) F is analytic at (0, U₀), by the analytic version of the Implicit Function Theorem (e.g. [20, Proposition 8.11]), to show the existence statement -including the validity of (3.10)- it suffices to show that the Fréchet derivative, DF(0, U₀), of F with respect to U evaluated at (0, U₀) is a bijection. Now, in view of assumption (H4), it is easy to see that this amounts to showing that the following set of equations has one and only one solution (µ, ω, v, w) ∈ R × R × X × W_{2π,0}: … In view of (H1), for any given f₁ ∈ Y, equation (3.12)₁ has one and only one solution v ∈ X. Therefore, it remains to prove the existence and uniqueness property only for the system of equations (3.12)₂₋₄. To this aim, we observe that, by Lemma 3.2, for a given f₂ ∈ H_{2π,0}, equation (3.12)₂ possesses a unique solution w₁ ∈ W_{2π,0} if and only if its right-hand side is in H_{2π,0}, namely, … Taking into account (3.7)₂, the above conditions will be satisfied provided we can find µ and ω satisfying the following algebraic system: … However, by virtue of (H6), this system possesses a uniquely determined solution (µ, ω), which ensures the existence of a unique solution w₁ ∈ W_{2π,0} to (3.12)₂ corresponding to the selected values of µ and ω. We now set … Clearly, by Lemma 3.1, w is also a solution to (3.12)₂. We then choose α and β in such a way that w satisfies both conditions (3.12)₃,₄ for any given f_i ∈ R, i = 1, 2. This choice is made possible by virtue of (3.7)₁. We have thus shown that DF(0, U₀) is surjective. To show that it is also injective, set f_i = 0 in (3.12)₂₋₄. From (3.13) and (H6) it then follows that µ = ω = 0, which in turn implies, by (3.12)₂ and Lemma 3.1, w = γ₁v₁ + γ₂v₂ for some γ_i ∈ R, i = 1, 2. Replacing this information back in (3.12)₃,₄ with f₃ = f₄ = 0, and using (3.7)₁, we conclude γ₁ = γ₂ = 0, which proves the claimed injectivity property. Thus, DF(0, U₀) is a bijection, and the proof of the existence statement in (a) is complete. We shall next show the uniqueness statement in (b) by adapting to the present case the argument of [20, Theorem 8.B]. Let (z, s) ∈ X × W_{2π,0} be a 2π-periodic solution to (3.5) with ω ≡ ω̄ and µ ≡ µ̄. By the uniqueness property associated with the implicit function theorem, the proof of the claimed uniqueness amounts to showing that we can find a sufficiently small ρ > 0 such that if … then there exists a neighborhood of 0, I(0) ⊂ R, such that … To this end, we notice that, by (3.7)₁, we may write … We next make the simple but important observation that if we modify s by a constant phase shift in time, δ, namely s(τ) → s(τ + δ), the shifted function is still a 2π-periodic solution to (3.5)₂ and, moreover, by an appropriate choice of δ, … with η = η(δ) ∈ R. (The proof of (3.18) is straightforward, once we take into account the definition of v₁ and v₂.) Notice that from (3.14), (3.16)-(3.18) it follows that |η| + ‖s‖_{W_{2π,0}} → 0 as ρ → 0.
Remark 3.2
The arguments used in the proof of Theorem 3.1 go through in the more general case where the evolution equation (3.5)₂ is formulated in a Banach space, provided we modify (H3) by adding the assumption that N[Q] is two-dimensional. However, we preferred the Hilbert formulation just to emphasize that, as shown in the next section, time-periodic bifurcation of a Navier-Stokes steady-state flow past an obstacle can be safely and successfully handled in the simpler Hilbert-space framework.
Remark 3.3
The assumption of analyticity of N₁ and N₂ with respect to (v, w, µ) is not necessary. Actually, a suitably modified version of Theorem 3.1 continues to hold if the nonlinear terms are of class C^k in all variables, for some k ≥ 2. In such a case, the family of branching solutions of Theorem 3.1 will be of class C^{k−1} in the parameter ε.
Time-periodic Bifurcation of Steady-State Solutions to the Navier-Stokes Equations Past an Obstacle
In this section we will apply the general theory developed in the previous one to the study of time-periodic bifurcation from a steady-state flow of a Navier-Stokes liquid past a three-dimensional obstacle. To this end, assume that an obstacle, B, of diameter d is placed in the flow of a Navier-Stokes liquid having an upstream velocity v∞. Then the bifurcation problem amounts to studying the following set of (dimensionless) equations: … Here V and P are the velocity and pressure fields of the liquid, Ω is the region of flow, namely the entire three-dimensional space exterior to B, e₁ is a unit vector parallel to v∞, and λ := |v∞|/(ν d), with ν the kinematic viscosity of the liquid, is the Reynolds number. It will be shown (see Proposition 4.1) that, under suitable assumptions on λ₀, the above equations possess a unique steady-state solution branch (u(λ), p(λ)), with λ in a neighborhood of λ₀. In this regard, for u₀ ∈ X^{2,4/3}(Ω) and λ₀ > 0, define the operator … (4.5). By the properties of the X- and H-spaces and the Hölder inequality, we easily show that L₁ is well-defined. The following result holds … corresponding to λ = λ₀. Then, if N[L₁] = {0}, problem (4.6) has a solution that is (real) analytic at λ = λ₀. Precisely, there is a neighborhood U(λ₀) of λ₀ and a solution family to (4.6), (u(λ), p(λ)) ∈ X^{2,4/3}(Ω) × D^{1,4/3}(Ω), λ ∈ U(λ₀), such that the series … are absolutely convergent in X^{2,4/3}(Ω) and D^{1,4/3}(Ω), respectively.
Proof. The Fredholm property is shown in [9, Theorem 3.1]. Next, we notice that, setting ũ := u − u₀ and φ := p − p₀, from (4.6) we deduce that (ũ, µ) satisfies … By the Hölder inequality we show at once that the bilinear form … is continuous, and therefore the operator N : … is analytic at any (ũ, µ), and so is F : … Now, F(0, 0) = 0 and, N[L₁] = {0} being assumed, the Fréchet derivative D_ũF(0, 0) ≡ L₁ is a homeomorphism. As a consequence, the lemma follows from the analytic version of the Implicit Function Theorem (e.g. [20, Proposition 8.11]).
We now introduce the operator … (4.8). Since Z^{2,2}(Ω) is dense in H(Ω), L₂ is densely defined. Moreover, with the help of the Hölder inequality and the embedding W^{2,2} ⊂ W^{1,4} ⊂ L^{12}, it is easy to check that R[L₂] ⊆ H(Ω), provided u₀ ∈ X^{2,4/3}(Ω). (8) Our main objective is to show that the intersection of the spectrum σ(L₂) (computed with respect to H_C) with {iR − {0}} consists of at most a finite or countable number of eigenvalues with finite multiplicity (see Proposition 4.1).
The proof of this property requires some preparatory results.
… in Ω. Moreover, there are constants c and c₀, depending only on Ω, such that (u, p) satisfies the following inequality: … Proof. The proof is entirely analogous to that of [8, Lemma 4.1] and will thus be omitted.
Lemma 4.2 The operator … is compact.
We are now in a position to show the first main result of this section.
We now turn our focus to the study of some properties of the time-dependent operator … We begin by recalling the following result, proved in [7, Lemma 5] for the two-dimensional case. However, the proof carries over verbatim to the three-dimensional case and will therefore be omitted.
Lemma 4.4 The operator … is a homeomorphism.
Our next and final objective is to rewrite (4.15) in the abstract form (3.5), so that under the appropriate assumptions, we may apply Theorem 3.1 and provide the desired bifurcation result.
To that purpose, we introduce the scaled time τ := ωt and split v and p into the sum of their time averages, (v̄, p̄), over the time interval [−π, π], and their "purely periodic" components (w := v − v̄, ϕ := p − p̄). In this way, problem (4.15) can be equivalently rewritten as the following coupled nonlinear elliptic-parabolic problem: … where … and where, we recall, µ := λ − λ₀ and u₀ ≡ u(λ₀). We next prove some functional properties of the quantities N_i, i = 1, 2.
Finally, we observe that, thanks to Proposition 4.3, the operator Q obeys condition (H4). | 2015-08-04T13:25:00.000Z | 2015-07-28T00:00:00.000 | {
"year": 2015,
"sha1": "e0df67179b07cc31621fb2f0e68a7699e0b62956",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e0df67179b07cc31621fb2f0e68a7699e0b62956",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
9338894 | pes2o/s2orc | v3-fos-license | In Vitro Enhancement of Carvedilol Glucuronidation by Amiodarone-Mediated Altered Protein Binding in Incubation Mixture of Human Liver Microsomes with Bovine Serum Albumin.
Carvedilol is mainly metabolized in the liver to O-glucuronide (O-Glu). We previously found that the glucuronidation activity of racemic carvedilol in pooled human liver microsomes (HLM) was increased, R-selectively, in the presence of amiodarone. The aim of this study was to clarify the mechanisms underlying the enhancing effect of amiodarone on R- and S-carvedilol glucuronidation. We evaluated O-Glu formation from the R- and S-carvedilol enantiomers in a reaction mixture of HLM including 0.2% bovine serum albumin (BSA). In the absence of amiodarone, the glucuronidation activities of R- and S-carvedilol over 25 min were 0.026 and 0.51 pmol/min/mg protein, respectively, and these were increased 6.15- and 1.60-fold in the presence of 50 µM amiodarone. On the other hand, in the absence of BSA, or when BSA was replaced with human serum albumin, no enhancing effect of amiodarone on glucuronidation activity was observed, suggesting that BSA plays a role in the mechanism underlying the enhancement of glucuronidation activity. The unbound fraction of S-carvedilol in the reaction mixture was greater than that of R-carvedilol in the absence of amiodarone. Also, the addition of amiodarone caused a greater increase in the unbound fraction of R-carvedilol than in that of S-carvedilol. These results suggest that altered protein binding by amiodarone is a key mechanism for the R-selective stimulation of carvedilol glucuronidation.
The nonselective β- and α₁-adrenoceptor antagonist carvedilol has been clinically used to treat chronic heart failure, as well as hypertension, angina pectoris, and cardiac arrhythmia. 1) Carvedilol is administered orally as a racemic mixture, but undergoes enantioselective first-pass metabolism. The blood concentration of the S-enantiomer, which has high β-blocking activity, is approximately one-half that of the R-enantiomer, which has low β-blocking activity. 2,3) Both enantiomers are mostly eliminated by hepatic metabolism, with renal excretion accounting for only 0.3% of the administered dose. 4) Carvedilol is metabolized extensively via aliphatic side-chain oxidation, aromatic ring oxidation, and conjugation pathways. 5) We previously demonstrated that R-carvedilol is metabolized mainly by CYP2D6 and partly by CYP1A2, 2C9, and 3A4, and that S-carvedilol is metabolized mainly by CYP1A2 and partly by CYP2C9, 2D6, and 3A4. [6][7][8][9] On the other hand, Ohno et al. found that uridine 5′-diphosphate (UDP)-glucuronosyltransferases (UGT) 2B7, 2B4, and 1A1 are capable of catalyzing the glucuronidation of carvedilol, using microsomes from insect cells expressing human UGT. 10) They also reported that glucuronidation of R-carvedilol is mediated by UGT1A1 and 2B4, and glucuronidation of S-carvedilol by UGT2B7 and 2B4. 10) In 2005, Fukumoto et al. reported that coadministration of amiodarone affects the enantioselective pharmacokinetics of carvedilol in patients with heart failure. 11) That is, the mean serum concentration to dose (C/D) ratio of S-carvedilol in 54 patients who received amiodarone concomitantly with carvedilol was 2-fold higher than that in 52 patients who received carvedilol alone. However, there was no significant difference in the mean C/D values of R-carvedilol between the two groups. 11) We previously evaluated the effect of amiodarone on the metabolism of racemic carvedilol (1 µM) in pooled human liver microsomes (HLM). 12) The oxidation activity for both R- and S-carvedilol was significantly decreased by amiodarone (50 µM) and/or desethylamiodarone (25 µM), 12) because amiodarone and/or desethylamiodarone are potent inhibitors of CYP1A2, 2C9, 2D6, and 3A4. 13,14) In contrast, the glucuronidation activity for R-carvedilol was increased 1.6- and 1.4-fold by amiodarone and desethylamiodarone, respectively, whereas that for S-carvedilol was only slightly increased by amiodarone and desethylamiodarone. 12) Based on these results, we speculated that the stimulative effects of amiodarone and/or desethylamiodarone on the glucuronidation of R-carvedilol may compensate for their inhibitory effects on the oxidation of R-carvedilol. 12) In our previous study, however, we could not determine the metabolite formation in the incubation mixture. That is, the metabolized amounts of R- and S-carvedilol were calculated by subtracting the amount remaining in the sample from the amount applied. In addition, there is little evidence supporting such a mechanism as responsible for the increased C/D ratio of S-carvedilol associated with coadministration of amiodarone in patients.
The aim of the present study was to clarify the relevance of the stimulative effect of amiodarone on glucuronidation of carvedilol in HLM. Therefore, we developed approaches for analyzing the stereoselective effect of amiodarone on R- and S-carvedilol glucuronidation. That is, we first evaluated the effect of amiodarone at several substrate concentrations, using both the racemic and the separate enantiomeric forms. Second, we evaluated whether amiodarone was capable of stimulating an in vitro glucuronidation reaction, based on the determination of carvedilol O-glucuronide (O-Glu) formation in the incubation mixture. Third, to clarify why amiodarone stimulates glucuronidation of R-carvedilol rather than S-carvedilol, we evaluated the effect of amiodarone on the glucuronidation of each enantiomer separately. Finally, we demonstrated that amiodarone increases the generation rate of carvedilol glucuronide as a consequence of altered protein binding in an incubation mixture of human liver microsomes, and that bovine serum albumin (BSA) makes an idiosyncratic contribution to the mechanism of the effect of amiodarone.
MATERIALS AND METHODS
All other chemicals were of the highest purity available.
Glucuronidation of Racemic Carvedilol in HLM
Glucuronidation of racemic carvedilol in HLM was evaluated in the presence of UDPGA, as described previously, with minor modification. 8,12) That is, a mixture consisting of racemic carvedilol, 50 µM amiodarone, 1.0 mg/mL microsomal protein, 0.2% BSA, 10 mM MgCl2, and 25 µg/mL alamethicin in 50 mM Tris-HCl buffer (pH 7.4) was preincubated for 5 min at 37°C. The reaction was initiated by the addition of UDPGA, and the reaction mixture was incubated for 25 min at 37°C. The total volume of the incubation mixture was 150 µL, and the final concentration of racemic carvedilol was 0.003-3.0 µM. The reaction was terminated by the addition of ice-cold 0.1 M Britton-Robinson buffer (pH 8.5). The amount of carvedilol in the samples was measured by HPLC with fluorescence detection, as described previously. 8,12) In brief, carvedilol was extracted from samples with 5 mL diethyl ether after alkalinization in 3 mL of 0.1 M Britton-Robinson buffer (pH 8.5), and 4 mL of the organic phase was transferred into 300 µL of 0.05 M H2SO4 and shaken vigorously. The organic phase was removed by aspiration, and the remaining aqueous layer was back-extracted with 3 mL of 0.1 M Britton-Robinson buffer (pH 8.5) and 5 mL of diethyl ether. Four milliliters of the organic phase was transferred and evaporated to dryness in a water bath at 45°C. The residue was dissolved in 1000 µL of mobile phase, and 150 µL was injected into the HPLC column. 8,12) The metabolized amount was calculated by subtracting the amount remaining in the sample from the amount applied.
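The mass-balance step above (metabolized amount = amount applied − amount remaining, converted to pmol/min/mg protein) is easy to get wrong with unit conversions, so a minimal sketch in Python may help. The 25 min incubation, 150 µL volume, and 1.0 mg/mL protein concentration are taken from the protocol; the example concentrations are hypothetical.

```python
# Minimal sketch of the activity calculation; 1 nM equals 1 pmol/mL, so the
# metabolized amount in pmol is (applied - remaining) x volume in mL.

def glucuronidation_activity(applied_nM, remaining_nM,
                             volume_mL=0.150, minutes=25.0,
                             protein_mg_per_mL=1.0):
    """Return glucuronidation activity in pmol/min/mg protein."""
    metabolized_pmol = (applied_nM - remaining_nM) * volume_mL
    protein_mg = protein_mg_per_mL * volume_mL
    return metabolized_pmol / (minutes * protein_mg)

# Hypothetical example: 1 uM racemic carvedilol applied, 950 nM remaining.
print(glucuronidation_activity(applied_nM=1000.0, remaining_nM=950.0))  # 2.0
```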
Glucuronidation of R- and S-Carvedilol Enantiomers in HLM
Glucuronidation of R- and S-carvedilol in HLM was performed as described previously, with minor modification. 8,12) That is, the reaction mixture contained R- or S-carvedilol, 50 µM amiodarone, 0.5 or 0.05 mg/mL microsomal protein, 0.2% BSA or human serum albumin (HSA), 10 mM MgCl2, and 25 or 12.5 µg/mL alamethicin in 50 or 25 mM Tris-HCl buffer (pH 7.4), in a final volume of 150 µL. 8,12) The final concentration of R- or S-carvedilol was 1-3000 nM. After preincubation for 5 min at 37°C, the reaction was initiated by the addition of UDPGA. The mixture was incubated for 25 min at 37°C. The reaction was then terminated by the addition of ice-cold acetonitrile.
Assay of R- and S-Carvedilol Glucuronides
The amounts of R- and S-carvedilol glucuronides in the samples were measured by HPLC with fluorescence detection, as described by Takekuma et al., 15) with minor modification. That is, after removal of the protein by centrifugation at 3000×g for 5 min at 4°C, 100 µL chloroform and 150 µL water were added to 250 µL of the supernatant to remove unreacted carvedilol. The mixture was stirred and then centrifuged at 3000×g for 5 min at 4°C. Fifty microliters of the supernatant was injected into the HPLC system. The HPLC system consisted of an LC-10AT vp Liquid Chromatograph Series (Shimadzu, Kyoto, Japan) with a model RF-20A fluorescence detector (Shimadzu) and an L-column2 ODS (Chemical Evaluation and Research Institution, Saitama, Japan). The mobile phase consisted of 25% acetonitrile, 75% 10 mM KH2PO4, and 0.59% (w/v) triethylamine. 15) The flow rate was 0.7 mL/min, and the column temperature was 40°C. The peaks were monitored at an excitation wavelength of 240 nm and an emission wavelength of 340 nm, and the retention times were approximately 18 and 20 min for R- and S-carvedilol glucuronide, respectively.
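Quantification by HPLC with fluorescence detection implicitly relies on an external calibration curve relating peak area to glucuronide concentration. The protocol above does not detail that step, so the sketch below uses invented standards purely to illustrate the back-calculation.

```python
import numpy as np

# Hypothetical calibration standards: concentration (nM) vs. fluorescence
# peak area. Real standards and areas would come from the assay itself.
conc = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
area = np.array([1.2e3, 3.5e3, 1.18e4, 3.6e4, 1.19e5])

slope, intercept = np.polyfit(conc, area, 1)  # linear fit: area = slope*conc + intercept

def quantify(sample_area):
    """Back-calculate glucuronide concentration (nM) from a peak area."""
    return (sample_area - intercept) / slope

print(round(quantify(5.0e3), 1))  # concentration of an unknown sample
```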
Unbound Fraction of R- and S-Carvedilol in Incubation Medium
The unbound fraction of R- and S-carvedilol in the incubation medium was determined by ultrafiltration using Centrifree® Ultrafiltration Devices (Merck Millipore, Carrigtwohill, Ireland). The incubation mixture (final volume 1000 µL) consisted of 30 nM R- or S-carvedilol, 50 µM amiodarone, 0.05 mg/mL microsomal protein, 0.2% BSA, 10 mM MgCl2, and 12.5 µg/mL alamethicin in 50 mM Tris-HCl buffer (pH 7.4). The sample was ultrafiltered at 1000×g and 37°C until 250 µL of filtrate was collected. The concentrations of R- and S-carvedilol in the filtrate were measured by HPLC, as described above. 8,12)
Data Analysis
Values are expressed as the mean±standard error (S.E.). The statistical significance of differences between two groups was evaluated using Student's t-test if the variances of the groups were similar. If this was not the case, the Mann-Whitney U-test was applied. p<0.05 was considered significant.
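Two small calculations sit behind this section: the unbound fraction from ultrafiltration, and the choice between Student's t-test and the Mann-Whitney U-test. The paper does not state how "similar variance" was judged, so the Levene test used in this sketch is an assumption; SciPy is used for all tests, and the example measurements are hypothetical.

```python
from scipy import stats

def unbound_fraction_percent(filtrate_conc, total_conc):
    """fu (%) by ultrafiltration: only unbound drug passes the membrane."""
    return 100.0 * filtrate_conc / total_conc

def compare_groups(a, b, alpha=0.05):
    """Student's t-test when variances look similar, else Mann-Whitney U.
    Using Levene's test as the variance check is this sketch's assumption."""
    _, p_var = stats.levene(a, b)
    if p_var >= alpha:
        return stats.ttest_ind(a, b)
    return stats.mannwhitneyu(a, b, alternative="two-sided")

# Hypothetical triplicate unbound fractions for two incubation conditions:
control = [0.30, 0.32, 0.31]
with_amiodarone = [7.5, 7.9, 8.0]
print(compare_groups(control, with_amiodarone))
```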
RESULTS AND DISCUSSION
We previously found a stimulative effect of amiodarone (50 µM) on the metabolism of racemic carvedilol (1 µM) in HLM. 12) In the present study, the substrate concentration dependence of the amiodarone effect was further evaluated over a racemic concentration range of 0.03-3.0 µM (Fig. 1). The glucuronidation of racemic carvedilol in HLM was stimulated more strongly for the R-enantiomer than for the S-enantiomer in the presence of 50 µM amiodarone. That is, the glucuronidation activity for R- and S-carvedilol in HLM increased up to 3.17- and 1.65-fold, respectively. The stimulative effect of amiodarone in HLM was significant at lower substrate concentrations, whereas no stimulative effect was observed at a racemic carvedilol concentration of 3.0 µM (Fig. 1).
In the case of racemic carvedilol, the glucuronidation activity of S-carvedilol in HLM without amiodarone was 3.6-fold higher than that of R-carvedilol (Fig. 2A). Takekuma et al. 16) reported that the stereoselectivity for R- and S-carvedilol glucuronidation estimated in HLM differed greatly depending on the substrate form, namely racemic carvedilol versus each separate enantiomer. This phenomenon is thought to be caused by mutual inhibition between the carvedilol enantiomers during racemate glucuronidation. 16) Therefore, to clarify why amiodarone stimulates glucuronidation of R-carvedilol rather than S-carvedilol, we compared the effects of amiodarone on the glucuronidation of each enantiomer separately (Fig. 2B). Because glucuronide formation was expected to increase in the absence of mutual inhibition between the carvedilol enantiomers, a lower concentration of microsomal protein (0.5 mg/mL) was applied for the enantiomer glucuronidation. With the separated enantiomers, the glucuronide formation from the R- and S-enantiomers was slightly higher than that from racemic carvedilol. However, the stereoselectivity for each enantiomer was comparable to that of the racemate (Fig. 2). In addition, glucuronide formation increased linearly, suggesting that the microsomal activity was sufficient for evaluating the mechanism of this effect (Fig. 2). Therefore, in the subsequent experiments, we determined the glucuronidation activity with 0.05 mg/mL of microsomal protein, based on the formation of metabolites derived from each enantiomer.
To confirm that the stimulative effect of amiodarone on carvedilol glucuronidation can be observed with the separated enantiomers, we evaluated the effect of amiodarone (50 µM) at several substrate concentrations (Fig. 3). The glucuronide formation for R- and S-carvedilol in HLM increased up to 5.26- and 2.13-fold, respectively, in the presence of 50 µM amiodarone. The stimulative effects observed were more pronounced for R-carvedilol (Fig. 3). The effect of amiodarone on glucuronide formation derived from each enantiomer was marked at lower substrate concentrations, and no stimulative effect was observed at substrate concentrations of 1000 or 3000 nM (Fig. 3). These results corresponded to those for the racemate (Fig. 1), suggesting that mutual interaction between the two carvedilol enantiomers during the glucuronidation reaction is not a major contributor to the key mechanisms of the amiodarone effect.
Fujimaki et al. 4) reported that the unbound fraction of S-carvedilol in human plasma was 1.4-fold higher than that of R-carvedilol. That is, the fractions of drug present in the free form in plasma for the R- and S-enantiomers were 0.0045 and 0.0063, respectively. 4) In the present study, BSA was included at 0.2% in the reaction mixture of HLM to prevent adsorption and/or act as a solubilizing agent. On the other hand, the plasma protein binding of amiodarone has been reported to be as high as 99.977%. 17) Thus, to clarify the possible effect of amiodarone on the protein binding of carvedilol in the reaction mixture, we conducted the same experiments in the absence of BSA (Fig. 4). As a result, the effect of amiodarone on glucuronide formation disappeared in the absence of BSA, suggesting that the presence of BSA was essential for the effect of amiodarone (Fig. 4). In addition, to evaluate whether the stimulative effect of amiodarone on glucuronide formation is specific to BSA, we conducted the same experiments replacing BSA with HSA (Fig. 5). In the presence of HSA, the glucuronide formation from each enantiomer was lower than that in the absence of HSA. These results suggest that BSA, not HSA, mediated the effect of amiodarone on glucuronide formation of carvedilol in HLM (Fig. 5). In addition, it should be noted that amiodarone (50 µM) partly inhibited the glucuronidation activity for both enantiomers in HLM (see closed columns in Figs. 4 and 5).
To clarify whether the two enantiomers differ in their protein-binding characteristics toward BSA and in their interaction with amiodarone, the unbound fraction of R- and S-carvedilol in the reaction mixture was determined (Table 1). The unbound fraction of R-carvedilol was 0.31% in the control and 7.79% in the presence of amiodarone; that of S-carvedilol was 1.24% and 9.26%, respectively. That is, amiodarone increased the unbound fraction of R-carvedilol (25-fold) to a much greater extent than that of S-carvedilol (7.5-fold). In addition, the increase in the glucuronidation rate by amiodarone for R- and S-carvedilol was 6.15- and 1.60-fold, respectively (Table 1). Taken together with the inhibitory effect of amiodarone on glucuronidation activity in HLM, the stimulative effect of amiodarone on carvedilol glucuronidation may be explained mainly by the increased unbound fraction of the substrates.
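The fold-changes quoted above follow directly from the unbound fractions reported in the text; a short check reproduces them:

```python
fu = {"R": {"control": 0.31, "amiodarone": 7.79},   # unbound fraction, %
      "S": {"control": 1.24, "amiodarone": 9.26}}

for enantiomer, v in fu.items():
    print(f"{enantiomer}-carvedilol: {v['amiodarone'] / v['control']:.1f}-fold")
# R-carvedilol: 25.1-fold
# S-carvedilol: 7.5-fold
```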
In our previous study, 0.2% BSA was used to prevent adsorption of drugs to glassware, because lower concentrations of substrate may produce results confounded by non-specific binding. 12) On the other hand, Rowland et al. 18) proposed that the addition of albumin (at concentrations of 0.05-4%) is useful for evaluating glucuronidation clearance in HLM incubations. They found markedly improved predictivity of in vitro-in vivo clearance extrapolation for microsomal incubations conducted in the presence of BSA, and demonstrated that BSA increased the rate of glucuronidation by HLM owing to a decrease in Km, without a significant effect on Vmax. Moreover, the authors suggested that the effect of BSA is not always consistent with that of HSA. That is, long-chain fatty acids released from the microsomal membrane competitively inhibit the UGTs, and BSA has the capacity to sequester these inhibitory fatty acids, whereas the fatty acid binding sites of HSA are presumably saturated. 18) In conclusion, the higher protein binding of R-carvedilol compared with S-carvedilol, together with the addition of amiodarone, which binds strongly to BSA, leads to an increase in the unbound fraction of substrate in the reaction mixture. These findings may explain the mechanism responsible for the amiodarone-mediated, R-selective enhancement of glucuronide formation in HLM. Although the in vitro data appear not to support our previous proposal 12) for the mechanisms involved in the clinical interaction between carvedilol and amiodarone in humans, our observations described here may provide new insight into the idiosyncratic effect of BSA on drug-drug interactions in HLM. | 2018-04-03T00:00:40.077Z | 2016-08-01T00:00:00.000 | {
"year": 2016,
"sha1": "a00070240fcd0adbe51bb5e45d3c18e50123b6ad",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/bpb/39/8/39_b16-00360/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "dea547ddee923d55bcf4e739f6af122e85eb75b5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
218830995 | pes2o/s2orc | v3-fos-license | Viruses in Bovine Respiratory Disease in North America
Advances in viral detection in bovine respiratory disease (BRD) have resulted from advances in viral sequencing of respiratory tract samples. Newly detected viruses include influenza D virus, bovine coronavirus, bovine rhinitis A virus, bovine rhinitis B virus, and others. Serosurveys demonstrate widespread presence of some of these viruses in North American cattle. These viruses sometimes cause disease after animal challenge, and some have been found in BRD cases more frequently than in healthy cattle. Continued work is needed to develop reagents for identification of new viruses, to confirm their pathogenicity, and to determine whether vaccines have a place in their control.
A previous review of infectious agents in BRD was published in 2009. 1 That review covered viruses, bacteria, and mycoplasmas. Coverage of viruses included the 4 most commonly discussed respiratory tract viruses in BRD: bovine herpes virus 1 (BoHV1), bovine parainfluenza type 3 virus (PI3V), bovine viral diarrhea viruses (BVDVs), and bovine respiratory syncytial virus (BRSV). There are commercial vaccines containing immunogens to these viruses. These 4 viruses were investigated extensively prior to 2009 in research studies and were described in published reports from state and federal diagnostic laboratories. These viruses were identified by the virologic methods in place at that time. 2 These included isolation in cell culture based on cytopathology, confirmed by fluorescent antibody testing using virus monospecific antisera. Other confirmatory tests included neutralization of infectivity using monospecific antisera. Numerous serosurveys permitted detection of viral exposure in selected populations. In the years prior to 2009, selected viruses such as BVDV were sequenced, yielding knowledge of genomic regions that could be used to identify these viruses. Eventually, automated sequencing procedures became widely available, and the field of bioinformatics facilitated alignment of newly identified sequences with reference sequences, permitting identification of entire or near-full-length viral genomes. Terms such as metagenomics, whole-genome sequencing, and next-generation sequencing have become commonplace in both research and diagnostic laboratories. These new genomic tests permitted expansion of knowledge of the big 4 viruses: BoHV1, BVDV, PI3V, and BRSV.
EXAMPLES OF EXPANDED KNOWLEDGE OF BOVINE RESPIRATORY DISEASE VIRUSES: GENOMICS OF BOVINE HERPES VIRUS 1, BOVINE VIRAL DIARRHEA VIRUS, AND BOVINE PARAINFLUENZA TYPE 3 VIRUS
The BoHV1 represents one of the original viruses in BRD, isolated and characterized in the 1950s with the advent of cell cultures. BoHV1 is a common component of bovine viral vaccines, including both modified live virus (MLV) vaccines and killed/inactivated viral vaccines. MLV vaccine origin strains of BoHV1 have been identified in clinical cases postvaccination and in aborted fetuses. Thus, it became necessary to differentiate field strains from the MLV strains, but this posed significant challenges. Using whole-genome sequencing and analysis of the resulting nucleic acid segments, the genomes of the BoHV1.1 reference strains Cooper and Los Angeles were sequenced. 3,4 The resulting information on the BoHV1.1 genome was investigated further using the reference strain Cooper and multiple BoHV1.1 strains in the MLV vaccines available in North America. 5 This genetic analysis found single-nucleotide polymorphisms (SNPs) among the viruses, which permitted the viruses to be classified into groups. The SNPs in various regions permitted selection of multiple primers, and the resulting polymerase chain reaction (PCR) products were sequenced. These SNP patterns then permitted separation of the viruses into groups and gave each strain a specific identity. This information permitted isolates from clinical cases to be categorized as field/wild-type or MLV strains. The SNPs and the sequencing of the PCR products were applied in multiple studies classifying BoHV1.1 strains as vaccine or wild-type strains. [5][6][7][8][9][10] In addition to the separation of vaccine from field strains, these genomic sequencing procedures identified a recombinant BoHV1.1 strain (including components of both a wild-type and a vaccine strain) from an aborted bovine fetus. 10 Using this genetic sequencing, the BoHV1.2b reference strain K22 and multiple wild-type genital and respiratory BoHV1.2b strains were sequenced. 11 The BVDV strains are referred to as biotypes based on cytopathology in cell culture, cytopathic and noncytopathic, with 2 species, BVDV1 and BVDV2, based on genomics. 12,13 Application of genetic testing of BVDV strains initially focused on the sequencing of PCR products from multiple regions of the BVDV genome. Initial studies of the presence of BVDV subtypes in surveys of US and other North American cattle populations, diagnostic accessions of bovine samples, or reports of respiratory disease outbreaks with viral identification led to detection of the subgenotypes BVDV1a, BVDV1b, and BVDV2a. Studies of the distribution of these 3 subtypes in diagnostic laboratory accessions indicated BVDV1b as the predominant BVDV subtype, 14 and BVDV1b was the predominant subtype in multiple studies of beef calves with BRD, based on recovery of virus from acute cases of BRD and necropsy tissues. 15 Investigation of the source of BVDV exposure identified the persistently infected calf as the most important source of virus exposure, with persistently infected calves resulting from infection of susceptible heifers/cows during a critical stage of pregnancy. 12 A study to evaluate diagnostic tests used to detect persistently infected calves was performed, with the additional objective of determining the prevalence of the BVDV1a, BVDV1b, and BVDV2a subtypes in persistently infected calves entering a southwest Kansas feedlot. 16
In a 2004 study, there were 86/21,743 (0.4%) persistently infected calves, with the following distribution of subtypes: BVDV1b (77.9%), BVDV1a (11.6%), and BVDV2a (10.5%). To determine whether the distribution of the subtypes was consistent in succeeding years, samples from this same feedlot were tested over the following years, as summarized in Table 1. The distribution of the subtypes appears consistent for each collection from 2004 to 2008.
Another study, using complete genome sequences from samples collected at this same feedlot from August 2013 to April 2014, determined the distribution of subgenotypes among 119 samples. 17 The distribution was 82% BVDV1b, 9% BVDV1a, and 8% BVDV2. It was reported that the BVDV2 isolates belonged to at least 3 distinct genetic groups. This study indicated 2 points: (1) BVDV1b remained the predominant subgenotype in persistently infected calves from 2004 to 2014, and (2) BVDV2 may belong to at least 3 distinct genetic groups.
These studies demonstrated the prevalence of the subgenotypes in beef cattle, yet information regarding the distribution in dairy cattle is limited. A study of samples from 30 dairy herds 18 found 25/4530 (0.55%) persistently infected calves, and 5/30 (16.7%) of the herds contained persistently infected calves. All of the persistently infected BVDV strains were BVDV1b. In a summary of diagnostic laboratory samples over a 20-year period, the BVDV1b subtype predominated. 19 In a 2019 study using clinical samples from diagnostic laboratories and the US Department of Agriculture National Animal Disease Center BVDV Laboratory, BVDV2b and BVDV2c were identified. 20 Thus, it appears that with new genomic testing, additional BVDV subtypes may be identified. A recent report on the global distribution of BVDV subgenotypes cited at least 21 subgenotypes for BVDV1 and 4 subgenotypes for BVDV2. 21 Genomics, PCR testing, and antigenic comparison have also been applied to another of the BRD viruses, PI3V. There are 3 PI3V genotypes in the United States: PI3Va, PI3Vb, and PI3Vc. 22,23 In addition to the genetic differences, there also are antigenic differences. These antigenic differences may have an impact on vaccine responses because the current PI3V vaccines contain PI3Va strains.
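To make the SNP-pattern approach from the BoHV1 discussion above concrete, the sketch below reduces an isolate to its bases at a handful of informative positions and looks the pattern up against reference groups. The positions and patterns are invented for illustration; they are not the published BoHV1 SNPs.

```python
SNP_POSITIONS = [1041, 5287, 22310, 97455]  # hypothetical genome coordinates

REFERENCE_PATTERNS = {
    ("A", "G", "T", "C"): "MLV vaccine group 1",
    ("A", "G", "C", "C"): "MLV vaccine group 2",
    ("G", "A", "T", "T"): "field/wild-type group",
}

def classify(genome):
    """genome maps position -> base, e.g. parsed from a sequenced PCR product."""
    pattern = tuple(genome.get(pos, "N") for pos in SNP_POSITIONS)
    return REFERENCE_PATTERNS.get(
        pattern, "unclassified (possible recombinant or novel strain)")

isolate = {1041: "A", 5287: "G", 22310: "T", 97455: "C"}
print(classify(isolate))  # -> MLV vaccine group 1
```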
ADVANCES IN GENOMICS PERMIT IDENTIFICATION OF ADDITIONAL VIRUSES IN NORTH AMERICA: BOVINE CORONAVIRUS, INFLUENZA D VIRUS, AND OTHERS
Bovine Coronavirus
The bovine coronavirus (BoCV) has received considerable attention in recent years as another viral pathogen in respiratory disease. Although BoCV initially was studied as a pathogen causing neonatal diarrhea/enteritis in young calves, growing evidence suggests its involvement in BRD, especially because of the recovery of BoCV from clinically ill cattle with BRD and from necropsy cases of fatal pneumonia. The use of viral serology and more extensive use of PCR testing of respiratory tract swabs from BRD cases and necropsy cases by diagnostic laboratories have provided further evidence that BoCV plays a role in BRD. Interpretation of these diagnostic reports, however, often is difficult, especially because there is no US Department of Agriculture-licensed BoCV vaccine for BRD prevention and control. A recent article by Ellis 24 gives an excellent review of the history of BoCV in respiratory cases, both in field cases and in experimental studies. The purpose of that review was to seek evidence that BoCV is a biologically significant respiratory pathogen in cattle.
A study using nasal swabs from calves treated for BRD and BRD necropsy cases from multiple feedlots identified viruses by PCR testing and virus isolation. 9 Of the 121 cases, the positives included 14.9% (BoHV1), 15.7% (BVDV), 62.8% (BoCV), 9.1% (BRSV), and 8.3% (PI3V). In contrast to prior studies, the virus positives were tested by sequencing to differentiate vaccine strains from field strains. Surveys often collect samples from cattle recently vaccinated with MLV vaccines containing BoHV1, BVDV, PI3V, and BRSV. In a study of virus recovery from fatal cases in Ontario, Canada, feedlots, BoCV was recovered from 2/99 (2.0%) cases. 25 A subsequent report in 2009 dealt with a year-long study of pathology and identification of infectious agents in fatal feedlot pneumonias. 26 In that study, which used PCR testing of fresh lung samples, 21/194 (10.8%) were positive for BoCV. The BoCV has been isolated from both healthy calves and sick calves. [27][28][29][30] Using a virus neutralization serotest, however, individual calf sera collected at feedlot entry in a retained-ownership study (fresh from the ranch, with no mixed-source auction calves) were tested for neutralizing antibodies, and calves with low antibody levels to BoCV (titers of 16 or less) were more likely to be treated for BRD during the feeding phase than calves with higher titers. 27 In other research, BoCV isolates from nasal swabs and bronchoalveolar washing fluids were evaluated for genetic and antigenic differences. 28 A region of the viral spike protein in the envelope was the target of genomic sequencing, and virus neutralization tests in cell culture were used to compare antigenic relatedness. Testing demonstrated genetic differences that allowed classification into 2 clades, BoCV1 and BoCV2, which also demonstrated antigenic differences. The current reference BoCV is of enteric origin and is classified as BoCV1. This strain is included in most licensed BoCV vaccines for enteric disease in the United States. Another study of 15 isolates from 3 herds, using sequencing of the spike hypervariable gene region, indicated that there were 4 polymorphisms among the 15 isolates. 30 A critical question posed for BoCV as a respiratory pathogen has been whether BoCV infection with isolates of respiratory tract origin (as opposed to the reference enteric strain or other enteric strains) in susceptible cattle results in measurable gross and microscopic lesions in the respiratory tract. Such challenge studies are required in order to have a model for measuring the efficacy of BoCV vaccines in respiratory tract protection. A series of studies was performed with multiple isolates from the respiratory tract of calves and the reference enteric strain to study the dynamics of BoCV infection. 31 BVDV exposure was used in dual infections (BVDV and BoCV), along with BVDV alone, BoCV alone, and controls. 31 Respiratory disease was observed in calves inoculated with BoCV 6 days or 9 days after BVDV. Lung lesions were present in calves in the dual-infection groups; however, lesions were more pronounced in calves inoculated with BVDV followed by BoCV inoculation 6 days later. Immunohistochemistry (IHC) confirmed the presence of BoCV antigen in the respiratory tract. Gross lung lesions of the dual-infected calves were multifocal and randomly distributed throughout the lungs in most cases.
Histologically, lung lesions consisted of interstitial to bronchointerstitial pneumonia (BIP), with inflammatory changes ranging from mononuclear infiltrates to fibrin and neutrophils in more severely affected lungs. Similar, less severe changes could be seen in several of the BVDV- or BoCV-inoculated calves. In this study, BoCV antigen was found via IHC in bronchial and tracheal epithelium, alveolar interstitium, and macrophages, whereas BVDV antigen was not detected by IHC. This study confirms the potential for BoCV isolates from the respiratory tract to cause clinical disease, detectable as gross and microscopic lesions, in particular with a sequential dual infection with BVDV. In addition, IHC detected the presence of BoCV antigen in multiple respiratory tract sites. This study indicates that sequential dual infections may have potential as models for vaccine and therapy development and efficacy studies.
Additional information on BoCV will follow in the section on various surveys using metagenomics and PCR testing as well as serotesting in samples from North America.
Influenza D Virus
The influenza D virus (IDV) has gained considerable attention in the etiology of BRD. Use of genomic testing, including sequencing of the viral genome and PCR testing, along with antibody testing, has resulted in numerous studies indicating the widespread presence of IDV in North America. The virus ultimately identified as IDV initially was isolated from swine and designated C/swine/Oklahoma/1223/2011 (C/OK); this virus was found to have homology to human influenza C viruses. 32 Respiratory tract samples from cattle were submitted to a commercial laboratory for BRD diagnosis using PCR testing, which included testing with primers derived from this swine influenza C virus. Viruses from these PCR positives were isolated in cell culture and confirmed to be influenza viruses by hemagglutination and PCR. The viral genomes were sequenced and found to differ from the influenza C viruses. Using serotesting, these bovine isolates also were found to be antigenically different from the influenza C viruses. The swine strain C/OK and these new bovine influenza viruses are now classified as influenza D viruses (IDVs).
With this new information, multiple studies have reported the presence of IDV in several regions of North America using respiratory tract samples for detection of the viral genome, PCR positives, and/or serology. [32][33][34][35][36][37][38][39][40][41] These reports are summarized later, not only for the presence of IDV but also for other viruses. As with other viruses detected in BRD cases, the question has been raised of whether the virus causes disease (acts as a pathogen) or is a resident in healthy cattle without disease potential. Another question posed for IDV in cattle is whether a vaccine might provide protection in vaccinated and challenged cattle. A subsequent report found that an inactivated IDV vaccine using an isolate from cattle provided partial protection in vaccinated calves compared with controls, and the challenge virus caused inflammation in the nasal turbinates and trachea but not appreciably in the lungs. 42 These results provide evidence for a role of IDV in BRD in cattle and indicate that partial protection may result from an inactivated vaccine.
STUDIES OF ADDITIONAL VIRUSES BEYOND BOVINE HERPES VIRUS 1, BOVINE VIRAL DIARRHEA VIRUS, BOVINE PARAINFLUENZA TYPE 3 VIRUS, AND BOVINE RESPIRATORY SYNCYTIAL VIRUS
The first report of testing for influenza viruses in cattle in the United States was published in 2014. 32 Forty-five samples (nasal swabs or lung samples) had been submitted for BRD diagnosis. These samples were from 6 different states and were tested using a real-time reverse transcriptase (RT)-PCR assay, which included primers for the influenza C viruses. There were 8 samples (18%) positive for influenza C virus, representing samples from Minnesota and Oklahoma. Five of the positives were isolated in cell culture and tested further by PCR and hemagglutination assays. Four positives were from 1 herd in Minnesota, and 2 were chosen for further study, C/bovine/Minnesota/628/2013 and C/bovine/Minnesota/729/2013; the remaining isolate was from a case in Oklahoma, C/bovine/Oklahoma/660/2013. Eventually these viruses were placed into a new group based on genomic and antigenic differences from the influenza C virus group, leading to designation of a new genus (D) in the viral family Orthomyxoviridae. Seroprevalence of IDV in bovine populations was examined by hemagglutination inhibition (HI) with the C/swine/Oklahoma/1334/2011 (C/OK) virus and C/bovine/660/2013 (C/660) as antigens, and bovine sera from 8 herds in 5 different states were tested individually. With the exception of 1 herd, all herds had high geometric mean titers of greater than 40, and antibodies against the C/OK virus and the C/660 virus were cross-reactive in the HI assay. These results indicated that cattle are a reservoir for these viruses.
The epidemiology of IDV was reported further in 2015 using samples from cattle in Mississippi. 33 Metagenomics and PCR testing were used to detect viruses in a study of BRD in California dairy calves. 35 Dairy calves between the ages of 27 days and 60 days were enrolled as either BRD cases or controls. Nasopharyngeal and pharyngeal recess swabs were collected. Using metagenomics and subsequent PCR testing, numerous viruses were identified. Viruses were detected in 68% of the BRD cases and 16% of the healthy controls. Multiple viruses were found in 38% of the sick animals versus 8% of the controls. Based on the viral hits of the genome sequences, the following viruses were detected, in descending order of frequency: bovine rhinitis A virus > bovine adenovirus 3 > bovine adeno-associated virus > bovine rhinitis B virus > astrovirus > bovine IDV > picobirnavirus > bovine parvovirus 2 > bovine herpesvirus 6. Viruses significantly associated with BRD compared with matched controls included bovine adenovirus 3 (P<.0001), bovine rhinitis A virus (P = .009), and bovine IDV (P = .012).
A metagenomics study investigated viral genomes in nasal swabs from 103 cattle from Mexico (63) and the United States (40), representing 6 Mexican feedlots and 4 Kansas feedlots in 2015. 36 Cattle with acute BRD and asymptomatic pen mates were included. There were 21 viruses detected, with bovine rhinitis A (52.7%), bovine rhinitis B (23.7%), and BoCV (24.7%) the most commonly detected. Comparing the recovery of viruses from cattle with BRD versus asymptomatic controls, bovine IDV tended to be associated with BRD (P = .134; odds ratio 2.94). The other viruses historically associated with BRD, including BoHV1, BVDV, PI3V, and BRSV, were detected less frequently.
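The case-control comparisons in these surveys reduce to a 2x2 table of virus detection by disease status, from which an odds ratio and a Fisher exact P value can be computed as below. The counts are hypothetical, since the studies report only the derived statistics.

```python
from scipy.stats import fisher_exact

table = [[18, 45],   # BRD cases:  virus-positive, virus-negative
         [ 5, 35]]   # controls:   virus-positive, virus-negative

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, P = {p_value:.3f}")  # OR = 2.80 for these counts
```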
A Canadian survey of beef cattle utilized metagenomics to detect viruses in western Canadian feedlot cattle with or without BRD. 37 There were 116 cattle sampled, with deep nasal swabs and transtracheal washes collected from animals with or without BRD. On arrival, the cattle received an MLV vaccine containing BoHV1, BVDV1, BVDV2, PI3V, and BRSV. There were 21 viruses identified via metagenomics. Viruses associated with BRD based on statistical comparison included bovine IDV (P<.015), bovine rhinitis B virus (P<.02), BRSV (P<.022), and BoCV (P<.021). This was the first report of bovine IDV in western Canada. BoHV1 was not identified in any sample, and BVDV1 and PI3V were found in only 1 sample and 2 samples, respectively. Perhaps the efficacy of the MLV vaccine resulted in reduced recovery or absence of BVDV, PI3V, and BoHV1. BRSV was found in 17% of BRD cases and 2% of the controls. There was weak agreement between the viruses identified in the nasal swabs and those in the transtracheal washes, suggesting that sample location affects the recovery of viruses.
A study was performed to determine the prevalence of BRD viruses and Mycoplasma bovis in US cattle. 38 Samples were from different production classes, including cow-calf, stocker, feedlot, and dairy, and from varied seasons of the year. There were 3205 samples collected between May 2015 and July 2016 from 80 different premises. The intent was to test healthy animals; however, disease status and other clinical data were not collected. These nasopharyngeal swabs were assayed by RT-PCR using primers for BoHV1, BVDV, BoCV, IDV, BRSV, and Mycoplasma bovis. The overall percent positive rates for each agent were 3.81% for BRSV, 1.59% for BoHV1, 3.56% for BVDV, 8.3% for IDV, 43.81% for BoCV, 20.12% for M bovis, and 17.32% for multiple-agent positives. The high percentages of IDV and BoCV positives suggested that more emphasis should be placed on these viruses in BRD. BoCV positivity was significantly associated with the stocker production class and the fall season. This study did not differentiate vaccine viruses from field strains.
A metagenomics study was performed using cases submitted to a western Canadian diagnostic laboratory for BRD diagnosis. 39 Samples from 130 pneumonia cases were submitted between September 2017 and December 2018; 90.8% of the samples were from beef cattle and 9.2% from dairy cattle. Formalin-fixed tissues were processed for histologic examination, and fresh tissues were frozen until further testing. Cases were classified as suppurative bronchopneumonia (SBP), fibrinous bronchopneumonia (BP), interstitial pneumonia, BP + BIP, and bronchiolitis. The metagenomic identification was performed on fresh lung tissues. For the 34 samples with metagenomic sequencing results, an RT-PCR test with primers for BVDV, PI3V, BRSV, BoHV1, and BoCV was used in all cases in which these viruses were detected. In 4 cases, however, a virus was detected by RT-PCR that was not detected by metagenomic sequencing. The recovery of viruses was low, with only 36.9% (48/130) positive. There were 16 viruses identified. Bovine parvovirus 2 was the most prevalent virus, at 11.5%, followed by ungulate tetraparvovirus 1 and BRSV, both at 8.3%. The classic BRD viruses were found infrequently: BRSV, 8.5%; BVDV1, 2.3%; BVDV2, 3.8%; and PI3V, 2.3%. None of these viruses was associated with a particular type of pneumonia. The following viruses were identified in only 1 animal each: bovine rhinitis B, IDV, fowl aviadenovirus, avian adenovirus-associated virus, and bovine polyomavirus. The most prevalent virus in each type of pneumonia was bovine parvovirus 2, at 5.9% in fibrinous BP; bovine astrovirus, at 3.1% in SBP; BRSV, at 1.5% in interstitial pneumonia; and ungulate tetraparvovirus 1, at 1.5% in BP + BIP. However, for every type of pneumonia, the most common result was that no virus was detected. None of the viruses detected was significantly associated with any type of pulmonary pathology. Virus detection in lung tissue thus provides low analytic sensitivity relative to antemortem sampling of the upper respiratory tract for virus surveillance. 39 In this study, however, the bacterial agents Histophilus somni, Mannheimia haemolytica, and Pasteurella multocida were found to have strong associations with SBP, fibrinous BP, and BP + BIP, respectively. 39 These results contrast with a prior western Canadian study using swabs from the upper respiratory tract (nasal and tracheal) of beef cattle, in which IDV, bovine rhinitis B, BRSV, and BoCV were significantly associated with BRD. 37 A potential explanation for these divergent findings is that the lungs of the fatal cases may have cleared the viruses while the bacterial pathogens remained predominant.
Serologic surveys often are used to determine the presence and prevalence of viruses in various populations of cattle, based on production class and/or geographic region. Such surveys preferably should rely on samples from animals that have lost their maternal antibodies; thus, the antibodies identified result from active infections. Using HI testing, the seropositive rate for IDV ranged from 13.5% to 80.2% in 2 studies. 33,40 In the latter study, sera from animals 2 years of age or older from beef cattle herds in Nebraska were tested for IDV antibodies via the HI assay. The samples were collected from September 2003 to May 2004. The HI assay used 2 IDV isolates from Mississippi, representing 2 reported IDV clusters that were antigenically distinct. There were 240 (81.9%) samples seropositive to 1 or both of the 2 IDVs. There were log2 differences in titers for some samples, suggesting that 2 antigenic clusters were circulating in these Nebraska herds. Cattle from all 40 farms, which were located across Nebraska, had evidence of exposure.
A subsequent serosurvey was performed using samples from throughout the United States as part of the US brucellosis surveillance program. 41 Both male and female cattle 2 years of age or older, representing 42 US states, were randomly sampled in 5 slaughter plants. The cattle represented 6 US regions: Pacific West, Mountain West, Upper Midwest, South Central, Northeast, and Southeast. The antigen used in the HI test was D/bovine/Kansas/14-22/12. Of the 1992 samples, 1545 (77.5%) were positive for IDV antibodies. Positives were found in samples from 41 of 42 states, with seropositive rates by state ranging from 25% to 93.8%. Sample size by state or titer level may have introduced bias. Seropositivity among geographic regions ranged from 47.7% to 84.6%; the Mountain West region had the highest rate, 84.6%, and the Northeast the lowest, 47.7%.
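A seroprevalence estimate such as the 1545/1992 figure above is usually reported with a binomial confidence interval; one way to compute it (Wilson method, via statsmodels) is sketched below.

```python
from statsmodels.stats.proportion import proportion_confint

positives, n = 1545, 1992          # IDV-seropositive samples (from the text)
low, high = proportion_confint(positives, n, alpha=0.05, method="wilson")
print(f"Seroprevalence {positives / n:.1%} (95% CI {low:.1%}-{high:.1%})")
```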
SUMMARY
Advances in viral detection in BRD mirror advances in viral sequencing using respiratory tract samples. Additional viruses beyond BoHV1, BVDV, PI3V, and BRSV include, for example, IDV, BoCV, bovine rhinitis A virus, bovine rhinitis B virus, adenoviruses, astrovirus, bovine parvovirus, and others. Diagnostic laboratories are now using PCR testing based on primers derived from sequencing. In selected instances, such as IDV and BoCV, serosurveys have demonstrated the widespread presence of these viruses in North American cattle. In limited animal studies, viruses such as IDV and BoCV have caused disease. In various studies, some of these viruses, but not all, have been found in BRD cases more frequently than in healthy cattle. It is important that reagents be developed by diagnostic laboratories for use in diagnostic testing for the new viruses. The pathogenicity of these new viruses should be determined in controlled challenge studies. Vaccine development and evaluation in controlled studies should be considered for these viruses to determine whether vaccination has a role in their control.
DISCLOSURE
The author has nothing to disclose. | 2020-05-23T13:01:22.648Z | 2020-05-23T00:00:00.000 | {
"year": 2020,
"sha1": "10f0cbab00c432a4b13bf2dee145217a8f8a5690",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.cvfa.2020.02.004",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ea3e40464aefbbe4eb9b293294898544c123a02",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
265153104 | pes2o/s2orc | v3-fos-license | Novel endoscopic techniques for the diagnosis of gastric Helicobacter pylori infection: a systematic review and network meta-analysis
Objective This study aimed to conduct a network meta-analysis to compare the diagnostic efficacy of diverse novel endoscopic techniques for detecting gastric Helicobacter pylori infection. Methods From inception to August 2023, literature was systematically searched across the PubMed, Embase, and Web of Science databases. Cochrane's risk of bias tool was used to assess the methodological quality of the included studies. Data analysis was conducted using the R software, employing a ranking chart to determine the most effective diagnostic method comprehensively. Convergence analysis was performed to assess the stability of the results. Results The study encompassed 36 articles comprising 54 observational studies, investigating 14 novel endoscopic techniques and involving 7,230 patients diagnosed with gastric H. pylori infection. Compared with the gold standard, the comprehensive network meta-analysis revealed the superior diagnostic performance of two new endoscopic techniques, magnifying blue laser imaging endoscopy (M-BLI) and high-definition magnifying endoscopy with i-scan (M-I-SCAN). Specifically, M-BLI demonstrated the highest ranking in both sensitivity (SE) and positive predictive value (PPV), ranking second in negative predictive value (NPV) and fourth in specificity (SP). M-I-SCAN secured the top position in NPV, third in SE and SP, and fifth in PPV. Conclusion After thoroughly analyzing the ranking chart, we conclude that M-BLI and M-I-SCAN stand out as the most suitable new endoscopic techniques for diagnosing gastric H. pylori infection. Systematic review registration https://inplasy.com/inplasy-2023-11-0051/, identifier INPLASY2023110051.
Introduction
Helicobacter pylori is a Gram-negative bacterium infecting the epithelial layer of the human stomach, capable of colonizing and persisting in a unique biological niche within the gastric lumen (Correa, 1988; Dunn et al., 1997). In 1994, the International Agency for Research on Cancer (IARC) classified H. pylori as a Group I carcinogen (IARC, 1994). It is linked to chronic gastritis, gastric ulcers, duodenal ulcers, gastric adenocarcinoma, and gastric mucosa-associated lymphoid tissue lymphoma (Suerbaum and Michetti, 2002; Graham, 2015). Over half of the world's population is infected, with prevalence reaching 25 to 50% in developed countries and 70 to 90% in developing nations (Xia and Talley, 1997; Hooi et al., 2017). Postinfection, it sequentially leads to chronic atrophic gastritis, intestinal metaplasia, dysplasia, and gastric cancer (Correa, 1988). Timely diagnosis holds immense significance for H. pylori eradication, preventing diseases such as gastric cancer (Choi et al., 2018).
The diagnosis of gastric H. pylori infection traditionally involves invasive techniques like histological examination, H. pylori culture, and polymerase chain reaction, as well as non-invasive methods such as serological detection, urea breath test (UBT), and fecal antigen detection (Vaira et al., 2002; Wang et al., 2015; Choi et al., 2018). However, the accuracy of invasive detection is affected by factors like biopsy location, size, number, staining methods, and antibiotic use. On the other hand, non-invasive techniques can be influenced by antibiotics, bismuth, and test reaction time (Logan and Walker, 2001). Are there more accurate and intuitive diagnostic options for gastric H. pylori infection? Over the past decade, plain white light imaging endoscopy (WLE) has been utilized as a diagnostic tool for the invasive detection of gastric H. pylori infection. While WLE cannot replace UBT as the diagnostic foundation, it can determine the presence or absence of H. pylori infection during primary disease examination. WLE offers advantages in intuitiveness, immediacy, strong operability, and the potential to avoid biopsy. It guides follow-up examination and treatment, presenting a novel approach to H. pylori diagnosis (Glover et al., 2020b). In this context, endoscopic invasive methods for diagnosing gastric H. pylori infection have emerged as a superior screening tool and research focus. Recent advancements in endoscopic technology have introduced new types of endoscopy, surpassing traditional WLE. These include Magnifying Endoscopy (ME), Narrow Band Imaging Endoscopy (NBI), Linked Color Imaging Endoscopy (LCI), Confocal Laser Endomicroscopy (CLE), Near-Infrared Raman Spectroscopy Endoscopy (NIR), Artificial Intelligence-based Computer-Aided Diagnosis (AI-CAD), and Convolutional Neural Network (AI-CNN; Glover et al., 2020b). Compared to traditional endoscopy, the various images produced by these new endoscopic methods enable better observation of microscopic structures, such as gastric pit patterns, microvessels, cell morphology, and even microorganisms.
Additionally, AI combined with endoscopic images can be trained to determine the presence or absence of infection. Enlargement of gastric pits, disappearance of collecting venules, and a vanishing capillary network are increasingly recognized as specific endoscopic characteristics of gastric H. pylori infection. This facilitates rapid and minimally invasive endoscopic diagnosis, bringing it closer to pathological diagnosis (Ji and Li, 2014).
A prospective study conducted by Gonen et al. in Turkey, involving 129 patients, affirmed the superiority of high-resolution magnifying endoscopy (ME) over white light endoscopy (WLE) in diagnosing gastritis associated with H. pylori infection (Gonen et al., 2009). Another prospective study by Ozgur et al. demonstrated that mucosal changes in patients with gastric H. pylori infection were more readily identified using narrow-band imaging (NBI) than WLE, with NBI exhibiting a high sensitivity of 92.86% (Özgür et al., 2015). Yagi et al. (2014) compared the diagnostic efficacy of WLE and magnifying NBI (M-NBI) in patients after endoscopic resection. The interobserver agreement for conventional endoscopy was moderate (0.56), with sensitivity and specificity at 79 and 52%, respectively. In contrast, M-NBI demonstrated substantial interobserver agreement (0.77), with sensitivity and specificity reaching 91 and 83% (Yagi et al., 2014). Qi et al. compared the diagnostic performance and image quality of ME and M-I-SCAN for gastric H. pylori infection. M-I-SCAN exhibited high sensitivity and specificity, with its specificity significantly surpassing that of ME (sensitivity: 95.45% vs. 95.45%, specificity: 93.55% vs. 80.65%; Qi et al., 2013). In 2017, Shichijo et al. developed an artificial intelligence-based convolutional neural network (AI-CNN) capable of diagnosing gastric H. pylori infection through endoscopic images. After learning from 32,208 images across 1750 patients, the AI-CNN demonstrated higher accuracy, specificity, and sensitivity than 23 endoscopists. Additionally, the time required for the AI-CNN to generate diagnoses was considerably shorter than that of the endoscopists (194 s vs. 230 min; Shichijo et al., 2017). Simultaneously, Itoh et al. demonstrated that their AI-CNN deep learning algorithm, trained on 149 WLE endoscopic images of patients with known H. pylori status, achieved diagnostic sensitivity and specificity of 86.7% each when tested on 30 new endoscopic images (Itoh et al., 2018). In 2023, Zhang et al. published research unveiling AI-WLE, developed using 47,239 images from 1826 patients, which exhibited an accuracy of 91.1% [95% confidence interval (CI): 85.7-94.6]. This accuracy was significantly higher than that of endoscopists (15.5% [95% CI: 9.7-21.3%]). Furthermore, its high sensitivity (0.9290) and specificity (0.8930) were confirmed (Zhang et al., 2023).
In detecting gastric H. pylori infection, different new endoscopes exhibit varying characteristics in terms of sensitivity, specificity, and diagnostic efficiency. Existing systematic reviews and meta-analyses have primarily focused on comparing non-magnifying endoscopy or artificial intelligence for diagnosing gastric H. pylori on human images, with a notable absence of comparisons among the different new endoscopic techniques. Consequently, evidence-based recommendations regarding the most suitable diagnostic method for gastric H. pylori infection are still lacking (Qi et al., 2016; Bang et al., 2020; Glover et al., 2020a,b). Hence, it is crucial to identify an appropriate technique for diagnosing gastric H. pylori infection among the array of new endoscopic options, particularly when clinicians must select among different endoscopes for patient diagnosis in clinical practice.
Network meta-analysis, a contemporary evidence-based technique utilizing direct or indirect comparisons, is employed to assess the effects of multiple interventions on disease and to estimate the hierarchical order of each intervention (Rouse et al., 2017). In this study, we aggregated existing evidence and conducted a network meta-analysis comparing novel endoscopic techniques (BLI, LCI, CLE, NBI, ME, AI-CNN, etc.) to evaluate and contrast their diagnostic performance in patients with gastric H. pylori infection. This approach aims to furnish patients and clinicians with disease-specific, evidence-based data, facilitating the selection of suitable diagnostic methods for screening and diagnosis.
Inclusion criteria
1. The experimental group employed a novel endoscopic technique as a diagnostic measure for gastric H. pylori infection; 2. The gold standard for diagnosis included the rapid urease test and breath test; 3. Diagnostic techniques comprised novel endoscopic methods and up to two diagnostic approaches; 4. The reported outcome indicators encompassed true positive (TP), true negative (TN), false positive (FP), false negative (FN), sensitivity (Se), specificity (Sp), positive predictive value (PPV), and negative predictive value (NPV); when TP, TN, FP, FN, NPV, or PPV were not reported, they were calculated from known variables such as Se and Sp; 5. The study design adhered to a prospective or retrospective approach.
Exclusion criteria
1. Absence of well-defined inclusion and exclusion criteria; 2. Non-clinical investigations; 3. Excluded document types: guidelines, systematic reviews, meta-analyses, narrative reviews, letters, editorials, research protocols, case reports, short newsletters, etc.; 4. Incomplete research data, duplicated publications, etc. Studies meeting any of the exclusion criteria were excluded from the analysis.
Study selection
The literature was managed using EndNote X9.1 for screening and exclusion. Initially, the two researchers checked titles for duplicates and excluded review papers, conference papers, protocols, and short communications. Subsequently, both researchers reviewed the abstracts against the inclusion and exclusion criteria. The two researchers then comprehensively reviewed the remaining literature to finalize the inclusion scope. Throughout this process, the researchers screened the literature independently, and the results were compared. In case of discrepancies, resolution was achieved through discussion, with the involvement of a third researcher when needed.
Data extraction
Data for inclusion in the study were recorded using a standardized, preselected nine-item data extraction form, categorized under the following headings: 1. Author, 2. Country, 3. Year of publication, 4. Mean age, 5. Total number of individuals and the distribution by sex, 6. Diagnostic methods, 7. Gold standard, 8. Sensitivity, 9. Specificity.
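Inclusion criterion 4 allows the 2x2 table to be rebuilt when a study reports only Se, Sp, and the group sizes; a minimal sketch of that reconstruction (with round hypothetical numbers) is shown below.

```python
def confusion_from_se_sp(se, sp, n_pos, n_neg):
    """Rebuild TP/FN/FP/TN from sensitivity, specificity, and the numbers of
    gold-standard positive/negative patients, then derive PPV and NPV."""
    tp = round(se * n_pos)
    fn = n_pos - tp
    tn = round(sp * n_neg)
    fp = n_neg - tn
    return {"TP": tp, "FN": fn, "FP": fp, "TN": tn,
            "PPV": tp / (tp + fp), "NPV": tn / (tn + fn)}

# Hypothetical study: 100 infected and 100 uninfected patients.
print(confusion_from_se_sp(se=0.95, sp=0.90, n_pos=100, n_neg=100))
```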
Literature quality evaluation
Two investigators independently conducted a quality assessment using the Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2), and the assessment results were cross-checked (Yang et al., 2021). Any disparities were deliberated upon and resolved by a third investigator. The evaluation scale encompassed the assessment of risk of bias and clinical applicability. The risk of bias evaluation included four domains: patient selection, index test, reference standard, and flow and timing. All domains were assessed for risk of bias, and the first three domains were also assessed for clinical applicability. The risk of bias was categorized as "low," "high," or "unclear."
Data analysis
We conducted the network meta-analysis aggregation and analysis employing Markov chain Monte Carlo simulation chains within a Bayesian framework, using R software version 4.3.1 and following the guidelines outlined in the PRISMA network meta-analysis manual (Moher et al., 2015). The resulting network diagram, generated with R, illustrates the various novel endoscopic techniques. Each node on the network diagram signifies a distinct novel endoscopic technique, while the connecting lines represent direct head-to-head comparisons with the gold standard. The size of each node and the width of each connecting line are proportional to the number of studies conducted (Chaimani et al., 2013).
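The ranking chart is derived from the posterior: for each MCMC draw, the techniques are ranked, and the per-technique rank frequencies become the ranking probabilities. The sketch below illustrates this step on simulated draws standing in for real MCMC output (the analysis itself was run in R; the normal draws here are purely illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in posterior draws of sensitivity for three techniques; real draws
# would come from the fitted Bayesian network meta-analysis model.
draws = {"M-BLI":    rng.normal(0.95, 0.02, 4000),
         "M-I-SCAN": rng.normal(0.93, 0.02, 4000),
         "M-NBI":    rng.normal(0.91, 0.03, 4000)}

names = list(draws)
matrix = np.column_stack([draws[n] for n in names])    # (draws, techniques)
ranks = (-matrix).argsort(axis=1).argsort(axis=1) + 1  # rank 1 = best per draw

for i, name in enumerate(names):
    print(f"{name}: P(rank 1) = {(ranks[:, i] == 1).mean():.2f}")
```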
Quality assessment of included studies
We utilized R software (version 4.3.1) to conduct a Bayesian network meta-analysis involving 36 articles encompassing 54 observational studies. The quality, risk of bias, and applicability of these 36 articles were assessed using QUADAS-2. Overall, the articles demonstrated satisfactory quality, with 25 rated as high quality and 11 as medium quality. Regarding patient selection, 13 of the 36 articles had an unclear risk of bias, as informed consent from patients or their relatives was mandated before testing with the new endoscopic techniques. Ten articles exhibited an unclear risk of bias in index test assessment, while 12 had an unclear risk of bias in reference standard assessment. The risk of bias in flow and timing was uncertain for 10 articles. Applicability considerations revealed no increased risk of bias in patient selection, reference standards, or index testing (refer to Figure 2 for details).
Network meta-analysis
The full network meta-analysis results are shown in Figures 3A, 4A, 5A, and 6A.
Sensitivity
In the results of the network meta-analysis, when compared to the gold standard detection, AI-BLI [MD = 0.966, 95%CI: (0.706, 1. 3. Convergence analysis confirmed the stability of the results, as depicted in Figure 3B. The bar chart illustrates the top five sensitivities in descending order: M-BLI (0.282), AI-BLI (0.237), M-I-SCAN (0.206), OE-ME (0.132), and M-LCI (0.049; Figure 3C). Table 3 presents a comparison between these two distinct detection measures.
Specificity
The network meta-analysis results indicated differences in specificity compared to the gold standard for various endoscopic techniques: AI-BLI [MD = 0.867, 95%CI: (0.697, 1. 4). Convergence analysis demonstrated the stability of the results, as illustrated in Figure 4B. The ranked bar chart revealed the top five specificities in descending order: TXI-IEE (0.275), BLI (0.236), M-I-SCAN (0.178), M-BLI (0.140), and CLE (0.105; Figure 4C). Table 4 provides a comparison between these two distinct measures of detection.
Positive predictive value
Network meta-analysis results revealed differences from the gold standard in terms of positive predictive value for various endoscopic techniques: AI-BLI [MD = 0.879, 95%CI: (0.536, 1. 5). Convergence analysis demonstrated the stability of the results, as illustrated in Figure 5B. The ranked histogram revealed the top five positive predictive values in descending order: M-BLI (0.232), TXI-IEE (0.206), AI-BLI (0.144), BLI (0.122), and M-I-SCAN (0.099; Figure 5C). Table 5 provides a comparison between these two distinct measures of detection.
Negative predictive value
Network meta-analysis results demonstrated differences in negative predictive value compared to the gold standard for the various endoscopic techniques (Figure 6). Table 6 presents the pairwise comparisons among the detection measures.
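All four indices pooled in this analysis (sensitivity, specificity, PPV, NPV) derive from a study's 2x2 table against the gold standard. The sketch below spells out the definitions; the counts are invented illustration values, not data from any included study.

```python
# The four diagnostic indices from a single 2x2 contingency table.

def diagnostic_indices(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV vs. the gold standard."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among diseased
        "specificity": tn / (tn + fp),  # true negatives among non-diseased
        "PPV": tp / (tp + fp),          # diseased among test-positives
        "NPV": tn / (tn + fn),          # non-diseased among test-negatives
    }

# Example: hypothetical endoscopy study with histology as the gold standard.
for k, v in diagnostic_indices(tp=95, fp=8, fn=5, tn=92).items():
    print(f"{k}: {v:.3f}")
```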
Regression analysis
To examine the effect of age as well as the classification of the gold standard on the results, we performed a meta-regression analysis using StataMP 18.
Regression analysis of age
The regression analysis showed that the mean age of the study population was not a statistically significant moderator of SE (p = 0.3370), PPV (p = 0.1370), or NPV (p = 0.8860; Table 7). For SP, however, the result was significant (p = 0.0030), so a moderating effect of mean age on specificity cannot be excluded. Table 7 provides the details of the age regression analysis.
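The moderator test described here is, in essence, a weighted regression of study-level effects on mean age. The sketch below shows the inverse-variance-weighted least-squares computation behind such a test; it is a Python illustration under assumptions (all numbers invented), not the StataMP 18 command the authors ran.

```python
# Weighted meta-regression of per-study effects on mean age. Illustrative.
import numpy as np

age = np.array([45.0, 52.0, 58.0, 61.0, 66.0])     # mean age per study
effect = np.array([0.95, 0.93, 0.96, 0.91, 0.94])  # per-study sensitivity
se = np.array([0.02, 0.03, 0.02, 0.04, 0.03])      # standard errors

w = 1.0 / se**2                       # inverse-variance weights
X = np.column_stack([np.ones_like(age), age])

# Weighted least squares: beta = (X'WX)^{-1} X'Wy
XtWX = X.T @ (w[:, None] * X)
beta = np.linalg.solve(XtWX, X.T @ (w * effect))
cov = np.linalg.inv(XtWX)             # covariance of the WLS estimates
slope, slope_se = beta[1], np.sqrt(cov[1, 1])
print(f"age slope = {slope:.4f} (SE {slope_se:.4f}), z = {slope/slope_se:.2f}")
```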
Regression analysis of gold standard classification
Regression analysis showed that the classification of the gold standard was not a statistically significant moderator of SE (p = 0.4280), PPV (p = 0.4280), or NPV (p = 0.0790; Table 8). For SP, however, the result was significant (p = 0.0330), so a moderating effect of the gold-standard classification on specificity cannot be excluded.

Discussion

Digestive endoscopy is a relatively invasive examination and is the basis of all invasive examination methods of the upper digestive tract. Compared with non-invasive examinations, digestive endoscopy may cause some throat discomfort, nausea, or transient digestive discomfort (Liang et al., 2022), but it can determine the extent of Hp infection and the degree of damage to the gastric mucosa more accurately, an advantage that other methods do not have.
This study aimed to assess the diagnostic efficacy of various novel endoscopic techniques in screening for gastric H. pylori infection. It encompassed 36 articles with 54 studies, incorporating 14 distinct endoscopic techniques and gold-standard detection methods. The quantitative analysis included a substantial sample size of 7,230 patients. Our findings indicate that M-BLI, M-I-SCAN, AI-BLI, and TXI-IEE exhibit higher diagnostic efficacy than the gold standard. Our study represents the first comprehensive network meta-analysis of diagnostic tests for these novel endoscopic techniques in diagnosing gastric H. pylori infection.
The network meta-analysis results showed that in the ranking chart of new endoscopic techniques, M-BLI ranked first in sensitivity and positive predictive value, second in negative predictive value, and fourth in specificity.
M-BLI is an innovative image enhancement technique integrating ME and BLI. ME, magnifying endoscopy, enhances resolution by incorporating a zoom lens to help endoscopists better observe details of the gastric mucosa, including pits, collecting venules, and capillary shapes (Bessède et al., 2017). Gastric mucosa with H. pylori infection often shows enlarged pits, irregular or vanished capillary networks, and irregular or absent collecting venules; conversely, a honeycomb-like capillary network, RAC in the gastric body, and regular round pits frequently indicate the absence of H. pylori infection (Qi et al., 2016). BLI (blue laser imaging) is an advanced contrast imaging technology. In the study by Tahara et al., the greater curvature of the mid-upper gastric body was meticulously assessed using M-BLI or M-NBI: small round pits with a regular honeycomb subepithelial capillary network (SECN) regularly scattered around the collecting venules were considered negative for H. pylori infection, whereas enlarged or extended pits, an unclear SECN, or dense, irregular blood vessels indicated H. pylori positivity. The sensitivity, specificity, PPV, and NPV of M-BLI were 0.98, 0.92, 0.93, and 0.98, respectively, compared with 0.97, 0.81, 0.87, and 0.95 in the M-NBI group; no significant differences were found between the M-BLI and M-NBI groups (all p > 0.2; Tahara et al., 2017). However, with the inclusion of more recent literature in the network meta-analysis, our study shows that M-BLI significantly outperforms M-NBI in SE, SP, PPV, and NPV. The reason may be that M-NBI cannot sufficiently reveal changes in hemoglobin absorption characteristics, and its contrast and resolution are limited for lesions with fine structures or similar colors. M-BLI, by contrast, is not limited by spectrum and can exploit film-interference principles to provide a wider range of information on biomolecular-level interactions; it offers higher contrast and resolution, yielding sharper images in some cases, and has lower operator dependence. This study demonstrates the high diagnostic accuracy and utility of M-BLI in diagnosing gastric H. pylori infection. Therefore, considering its sensitivity and positive predictive value, M-BLI exhibits superior diagnostic performance and can be recommended as a promising detection tool for gastric H. pylori infection.
The ranking revealed that M-I-SCAN excelled in negative predictive value, ranking first; its sensitivity and specificity ranked third, and its positive predictive value fifth. I-SCAN, developed by the Pentax Company in Japan, is a computer-based virtual staining imaging technology with three key functions for real-time image enhancement: surface enhancement (SE), contrast enhancement (CE), and tone enhancement (TE). The first two enhance lesion identification without significantly altering color hue and image brightness and are often used in tandem. Tone enhancement makes color, hue, and structural changes more apparent after lesion identification; TE includes modes such as g for the stomach, c for the intestine, e for the esophagus, b for Barrett's esophagus, p for the mucosa, and v for small blood vessels. Besides observation of microvascular morphology and fine structure, i-scan offers multi-channel, multicolor contrast capabilities, providing unique advantages for delineating lesion edges and classifying glandular duct openings (Glover et al., 2020b; Tosun et al., 2022). Sharma et al. conducted a study with 146 patients: the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of WLE in diagnosing H. pylori infection were 59, 100, 100, 69, and 78%, respectively, while i-scan endoscopy achieved 100, 95, 96, 100, and 97% on the same metrics. I-scan was superior for observing the fine structure of the gastric mucosa, but additional studies are required to understand the H. pylori infection pattern (Sharma et al., 2017). Magnifying endoscopy (ME) combined with i-scan has also been developed, providing clearer images of mucosal and vascular patterns. Qi et al. used M-I-SCAN and ME to observe H. pylori infection in the gastric mucosa of 84 patients: the accuracy of M-I-SCAN in diagnosing H. pylori infection (94.0% vs. 84.5%, p = 0.046) and its specificity (93.5% vs. 80.6%, p = 0.032) were higher than those of ME alone (Qi et al., 2013). Therefore, combining ME with I-SCAN testing can uphold a robust negative predictive value for diagnosing H. pylori infection and potentially reduce medical costs.
The ranking chart indicates that TXI-IEE secured the top position in specificity, claimed the second spot in positive predictive value, ranked ninth in sensitivity, and held the tenth position in negative predictive value. TXI-IEE, developed by Olympus Medical Systems (Tokyo, Japan) in 2020, is an image-enhancement technology. The study by Kitagawa et al. examined the association of endoscopic features with three categories of gastric H. pylori infection status (currently infected, previously infected, and noninfected). Results indicated that TXI-IEE exhibited significantly higher diagnostic accuracy for active gastritis than WLI (85.3% vs. 78.7%; p = 0.034). Odds ratios (ORs) for all endoscopy-specific features related to gastric H. pylori infection status were higher in the TXI-IEE group than in the WLI group. Notably, diffuse redness was the sole observation for current infection (ORs 22.0 and 56.1, respectively), geographic redness was considered indicative of previous infection (ORs 6.3 and 11.0, respectively), and regular alignment of collecting venules (RAC) was associated with an uninfected status (ORs 25.2 and 42.3, respectively). TXI-IEE enhanced the visibility of diffuse redness, geographic redness, and RAC by creating greater contrast (Kitagawa et al., 2023). Therefore, TXI-IEE, with the highest specificity, can potentially reduce unnecessary gastric biopsies; however, its low sensitivity and negative predictive value remain limitations.
The ranking chart indicates that AI-BLI ranks second in sensitivity, third in both positive and negative predictive values, and sixth in specificity. Artificial intelligence (AI) is the fastest-growing field in endoscopic research and is increasingly applied in clinical practice, particularly for image recognition and classification (Cho and Bang, 2020). In contrast to optical endoscopy alone, AI-assisted endoscopy is operator-independent, ensuring a completely objective diagnostic process. In clinical practice, AI-assisted endoscopy is valuable for offering second opinions and reducing operator dependence in diagnostic endoscopy (Hoogenboom et al., 2020). Wu et al. demonstrated that AI coupled with BLI identifies the typical structures of the digestive tract; this capability alerts endoscopists to missed sites, significantly reducing the blind-spot rate in digestive endoscopy (Wu et al., 2019). Nakashima et al. developed an artificial intelligence system to predict gastric H. pylori infection status using blue laser imaging (BLI)-bright and linked color imaging (LCI) endoscopic images. Two hundred twenty-two patients underwent WLI, BLI-bright, and LCI to capture three still images of the gastric lesser curvature; 162 patients constituted the training set, while the remaining 60 served as the test set for verification. The area under the curve (AUC) of the receiver operating characteristic analysis was 0.66 for WLI, versus 0.96 for BLI-bright and 0.95 for LCI; the AUCs of the BLI-bright and LCI groups significantly exceeded that of the WLI group (p < 0.01; Nakashima et al., 2018). A systematic review and network meta-analysis by Bang et al. further demonstrated the clinical utility of AI algorithms as an additional tool for predicting gastric H. pylori infection during endoscopy (Bang et al., 2020). The diagnostic indices of AI-BLI are promising for use as a detection tool, but its diagnosis is sensitive to the endoscopic images included in the training data, introducing selection bias and therefore certain limitations.
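The AUC figures quoted for these AI systems (0.66, 0.96, 0.95) come from receiver operating characteristic analysis. The sketch below computes AUC with the rank-based (Mann-Whitney) formulation; the scores and labels are invented for illustration and are not the Nakashima et al. data.

```python
# Rank-based AUC: the probability that a random positive case receives a
# higher score than a random negative case. Illustrative values only.
import numpy as np

def auc(scores: np.ndarray, labels: np.ndarray) -> float:
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairwise wins of positives over negatives; ties count as half.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.75, 0.6, 0.4, 0.35, 0.3, 0.1])
labels = np.array([1,   1,   0,    1,   0,   1,    0,   0  ])
print(f"AUC = {auc(scores, labels):.3f}")
```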
Advantages and limitations
Firstly, our study encompassed 36 articles, comprising 54 observational studies, exploring 14 new endoscopic techniques, and involving 7,230 patients who underwent these novel techniques for diagnosing gastric H. pylori infection. The study stands out for its extensive literature coverage, substantial sample size, minimal heterogeneity in results, and rigorous methodology. Secondly, our research and its foundational studies face certain limitations. Some newly developed endoscopic diagnostic techniques received limited coverage in the literature; the endoscopic operator's experience might influence the efficacy of specific diagnostic methods; and the inclusion of patients may have been affected by factors such as age or medications taken, contributing to diagnostic variation among the different gold standards. Readers should therefore interpret our results with caution. For instance, only one report each exists on M-BLI, M-I-SCAN, TXI-IEE, and AI-BLI for diagnosing gastric H. pylori infection, indicating a need for further expansion and exploration.
Conclusion
In this study, we comprehensively compared 14 novel endoscopic techniques with the gold standard for diagnosing gastric H. pylori infection, utilizing Bayesian network meta-analysis. The findings indicate that M-BLI and M-I-SCAN exhibit robust diagnostic performance, emerging as particularly suitable endoscopic techniques for diagnosing gastric H. pylori infection. Despite some limitations, TXI-IEE and AI-BLI serve as valuable tools for early detection and diagnosis of gastric H. pylori infection, holding clinical significance in minimizing unnecessary biopsies and optimizing the utilization of medical resources. Nevertheless, this conclusion warrants validation through additional literature, and future research demands more meticulously designed, large-scale, multicenter studies to further elucidate the application value of the various new endoscopic techniques in diagnosing patients with gastric H. pylori infection.
FIGURE 1
Flow diagram of literature selection.
FIGURE 4 (A) Network meta-analysis figure for Specificity.(B) Convergence analysis for Specificity.(C) Ranking chart for Specificity.
FIGURE 5 (A) Network meta-analysis figure for PPV.(B) Convergence analysis for PPV.(C) Ranking chart for PPV.
FIGURE 6 (A) Network meta-analysis figure for NPV.(B) Convergence analysis for NPV.(C) Ranking chart for NPV.
TABLE 1
Search strategy on PubMed.
TABLE 2
Characteristics of the studies included in the meta-analysis.
TABLE 3
League table on sensitivity.
TABLE 4
League table on specificity.
TABLE 5
League table on PPV. | 2023-11-14T16:02:00.327Z | 2023-11-12T00:00:00.000 | {
"year": 2024,
"sha1": "74d395894b636114253084fe1aa590598633eac7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3389/fmicb.2024.1377541",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7bdfcdd955b8391d6bfb3ded2bf759afa462a7d7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239052540 | pes2o/s2orc | v3-fos-license | Subacute transverse myelitis as a clinical presentation of neurobrucellosis
Brucella melitensis is the main cause of human brucellosis worldwide and is considered the most virulent and neurotropic species. In Mexico, this species is considered endemic, being reported since the first decade of the 20th century. Here we present a case of subacute transverse myelitis with the isolation and identification of B. melitensis as the causative agent of Neurobrucellosis in a female patient from the coastal state of Guerrero, Mexico.
Introduction
Human brucellosis is the most common zoonotic infection worldwide, with Latin America being a high-risk area [1,2]. It is caused by four species of the genus Brucella (Brucella abortus, B. melitensis, B. canis, and B. suis). The disease is transmitted to humans through direct contact with secretions of infected animals or ingestion of their products, mainly unpasteurized dairy products or, less commonly, raw meat. It is a multisystem disease that can simulate other infectious and noninfectious pathologies, mainly affecting the musculoskeletal system [2][3][4][5]. However, less than 5% of patients infected by any of the Brucella species develop neurologic manifestations [2][3][4][5].
Transverse myelitis is a very unusual presentation of this condition and clinical suspicion requires integration of geographic, alimentary and sanitary risk factors; compatible clinical manifestations, and complementary studies, such as Rose Bengal serum agglutination, 2-mercaptoethanol (2-ME) associated serum agglutination (SAT), polymerase chain reaction (PCR), and bone marrow culture.
We present the case of a 35-year-old woman with subacute myelopathy in whom B. melitensis was isolated, with resolution of the symptoms after targeted antimicrobial therapy.
Case report
A 35-year-old woman from the coastal state of Guerrero, Mexico, without previous medical conditions or contact with domestic animals, but with a history of unpasteurized cheese consumption, presented with a two-month history of fever up to 39 °C, night sweats, a 16-kg weight loss, and sacrococcygeal pain. Three weeks prior to hospital admission, she developed lower limb paresthesia and hypoalgesia, paraparesis, and urinary incontinence.
Neurologic examination revealed an alert and oriented patient with no cranial nerve involvement. Motor examination showed paraparesis (3/5), lower limb hyperreflexia with sustained ankle clonus, and bilateral Babinski and Chaddock signs. A T8 sensory level with loss of all sensory modalities was found. The rest of the examination was unremarkable.
Laboratory studies including complete blood count, serum chemistry, electrolytes, and liver function tests were not relevant, except for a positive serum Rose Bengal agglutination test with 1:640 titers. Serum Erythrocyte Sedimentation Rate (ESR) and C-Reactive Protein (CRP) were within normal limits.
A lumbar puncture was performed with an opening pressure of 12 cm H2O and clear cerebrospinal fluid (CSF), with 3 cells/mm3 (no predominance) and CSF glucose and protein values of 43 and 56.53 mg/dL, respectively (CSF/plasma glucose ratio 0.35). Genomic DNA was extracted using a DNeasy Blood & Tissue kit (Qiagen, Hilden, Germany). To verify the integrity of the DNA, a 400 bp fragment of the host mitochondrial cytochrome oxidase subunit I gene was amplified. Polymerase chain reaction (PCR) was performed using the primers Bru1F (TGCTAATACCGTATGTGCTT) and Bru1R (TAACCGCGACCGGGATGT), which amplify a 900 bp fragment of the ribosomal 16S-rDNA gene (16S) of Brucella (Figure 1). The PCR product was sequenced at Macrogen Inc., Korea, and the electropherograms were analyzed using the Chromas software. The sequence was compared with reference sequences deposited in GenBank using the BLASTn tool and was 99.8% (830/831 bp) identical to B. melitensis strain IMHT4 (GenBank accession no. MT611102.1). Our sequence was deposited in GenBank under accession no. MT912851. We used the F4/R2 primers. By sequencing a fragment of more than 800 bp of the 16S-rDNA gene, we achieved characterization at the species level, which replaces the use of the BruceLadder multiplex that requires a greater number of primers for species typing. Additionally, adenosine deaminase (ADA) and GeneXpert-RIF assays were performed on the CSF to look for central nervous system (CNS) tuberculosis; both were negative. A bone marrow culture sample was also drawn, which likewise isolated B. melitensis.
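As a concrete illustration of the primer-based amplification described above, the sketch below performs a toy in-silico PCR: it locates the Bru1F binding site and the reverse complement of Bru1R in a template and reports the predicted amplicon length. The template sequence is a made-up placeholder, not the GenBank record, so only the primer sequences are taken from the text.

```python
# Toy in-silico PCR: predict the amplicon delimited by two primers.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def amplicon_length(template: str, fwd: str, rev: str) -> int | None:
    """Length of the product delimited by the forward primer and the
    reverse complement of the reverse primer, or None if absent."""
    start = template.find(fwd)
    end = template.find(revcomp(rev))
    if start == -1 or end == -1 or end < start:
        return None
    return end + len(rev) - start

BRU1F = "TGCTAATACCGTATGTGCTT"
BRU1R = "TAACCGCGACCGGGATGT"

# Placeholder template: forward site + 860 filler bases + reverse site.
template = BRU1F + "A" * 860 + revcomp(BRU1R)
print(amplicon_length(template, BRU1F, BRU1R))  # -> 898
```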
Contrast-enhanced spinal magnetic resonance imaging (MRI) revealed hyperintensity and contrast enhancement on T1- and T2-weighted sequences extending from the T4 to T9 spinal cord segments (Figure 2A-C). Subacute transverse myelitis was diagnosed in the context of neurobrucellosis, and triple antibiotic therapy was started with intravenous ceftriaxone 2 g every 12 hours for one month plus oral doxycycline (100 mg every 12 hours) and rifampin (600 mg once daily) for a total of four months.
Three days after antibiotic therapy was started, the patient's symptoms improved, with complete resolution of the motor and sensory deficits and urinary and fecal incontinence on the sixth and seventh days, respectively.
After completing one month of intravenous treatment with ceftriaxone, the patient was discharged from our service, with subsequent follow-up visits at 15 and 30 days, where notable progress was evident regarding sensory and motor function, with mild lower limb residual paresthesia and 4/5 muscle strength in the left lower limb. No residual urinary or fecal incontinence was seen. The patient continued with oral doxycycline and rifampin up to four months.
Discussion
Brucella melitensis is the main cause of brucellosis worldwide and is considered to be the most virulent species to human beings, given that 10-100 organisms are able to cause chronic infection. Additionally, it is the most neurotropic agent, because nervous system involvement caused by other Brucella species is rare. In Mexico, this species is considered endemic, being reported since the first decade of the 20th century, when it was determined that B. melitensis was the main causative agent of Malta fever in Mexico [8]. High bacterial loads are present in milk, urine, and products derived from animal pregnancy; human disease is acquired from ingestion of unpasteurized milk and dairy products, and contact with blood and other secretions from infected animals [2,5,8].
Bone marrow culture has sensitivity of 97% during the acute phase, 90% in the subacute phase, and 50% during the chronic phase. It is considered the standard of reference for the diagnosis of Brucella infection and it is found to be positive in up to 24-28% of neurobrucellosis cases [5]. In our study, it was possible to recover an axenic culture that could be typified as B. melitensis. Brucella can also be isolated from other biologic samples such as blood (serum culture positivity is 24-50% for neurobrucellosis), pus, CSF (15-30% by means of culture), and pleural, synovial, and peritoneal fluids [3,6,7,9].
In the absence of cultures, serologic diagnostic tests (such as Rose Bengal, serum agglutination, or Coombs test) and molecular techniques can be used.
In particular, amplification of the bcsp31 gene has a sensitivity of 100% and a specificity of 98.3%. Other studies have improved the amplification of the 16S-rDNA, with variable results ranging from 70 to 90%. In the present study we were able to successfully amplify and sequence a fragment of ~800 bp of the 16S-rDNA gene from Brucella using the primer set F4/R2, which allowed us to corroborate the identity of the infecting species [10,11].
Direct CNS involvement in patients with Brucella infection occurs in <5% of adult cases and < 1% of pediatric cases. Regarding acute infection, CNS involvement is nonspecific, with headache, fatigue and myalgia frequently reported. Subacute and chronic neurobrucellosis occurs in less than 10% of cases, with a clinical presentation that includes myelitis, meningoencephalitis, meningomyelitis, optic neuritis, peripheral neuritis, and facial palsy. CSF findings characteristically show lymphocytic pleocytosis, elevated protein count, positive seroagglutination titers, or culture with isolation of Brucella spp. [4,9].
Incidence of neurobrucellosis can change depending on the sample size and the epidemiologic findings of different research centers in various countries. Several studies have reported unusual manifestations as the initial presentation such as pseudotumor cerebri, demyelinating syndromes, intracranial granuloma, transverse myelitis, sagittal sinus thrombosis, spinal arachnoiditis, aphasia, hearing loss, and hemiparesis [12].
Diagnosis of neurobrucellosis is based on the following criteria: (1) signs and symptoms of neurological disease in the absence of other diseases; (2) CSF analysis with lymphocytic pleocytosis (>16/mm3), elevated protein content (>45 mg/dL), and a reduced CSF/plasma glucose ratio (<0.50); (3) isolation of the bacteria from blood or other body fluids; (4) positive standard tube agglutination (STA) titers in serum and/or cerebrospinal fluid (CSF), or a positive rapid agglutination test (RAT), Coombs test (titers ≥ 1/160), or Wright test (≥ 1/160) in serum, or any titer in CSF obtained by the RAT, Wright, or Coombs tests; and (5) response to specific antimicrobial therapy with a significant drop in the CSF lymphocyte count and protein concentration [2,7,9]. The role of imaging studies in the diagnosis of neurobrucellosis is limited because the findings can mimic other inflammatory or infectious conditions. MRI is superior to computerized tomography (CT) scanning and is indicated in cases of diagnostic doubt, neurologic deterioration, slow improvement, or unusual physical findings [12].
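To make the CSF thresholds in criterion 2 concrete, here is a toy encoding in Python. The class and field names are hypothetical and this is purely illustrative, not a clinical decision tool; notably, the reported patient's own CSF values fail the pleocytosis threshold, consistent with the diagnosis here resting on culture and molecular identification instead.

```python
# Toy encoding of the CSF portion (criterion 2) of the criteria above.
from dataclasses import dataclass

@dataclass
class CSF:
    lymphocytes_per_mm3: float
    protein_mg_dl: float
    csf_plasma_glucose_ratio: float

def meets_csf_criterion(csf: CSF) -> bool:
    """Lymphocytic pleocytosis >16/mm3, protein >45 mg/dL, ratio <0.50."""
    return (csf.lymphocytes_per_mm3 > 16
            and csf.protein_mg_dl > 45
            and csf.csf_plasma_glucose_ratio < 0.50)

# The case report's values: 3 cells/mm3, 56.53 mg/dL protein, ratio 0.35.
print(meets_csf_criterion(CSF(3, 56.53, 0.35)))  # False: no pleocytosis
```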
Neurobrucellosis can initially present as longitudinally extensive transverse myelitis [7]. The incidence of acute transverse myelitis is about 1-4 per million population; it can present at any age, with a peak between the second and fourth decades [4]. The first case in the literature, reported by Ozer et al., was of a 56-year-old woman with paraparesis and absent deep tendon reflexes [4]; MRI showed diffuse bulging of the L3-L4, L4-L5, and L5-S1 spinal cord segments. Nerve conduction studies were consistent with radiculoneuritis and serum Rose Bengal testing was positive; however, no microorganism was isolated from the CSF. Ten days after treatment with ceftriaxone, rifampin, and doxycycline was started, the patient showed improvement of the paraparesis [4].
Transverse myelitis (TM) is characterized by an interruption of axonal conduction due to an inflammatory process within the spinal cord; it can be part of a systemic disease or present as an isolated condition [13].
Krishnan et al. reported a case of a 65-year-old man with headache and hearing loss who developed progressive diplopia and required brain tissue biopsy, with an abscess as a complication of such a procedure, in whom B. melitensis was isolated [13].
Treatment of neurobrucellosis is challenging because of the need to reach therapeutic antibiotic concentrations in the CSF. Considering that tetracyclines and aminoglycosides do not adequately penetrate the blood-brain barrier, it is recommended to add rifampin and cephalosporins to the standard treatment with doxycycline. Duration of treatment is determined by the clinical and CSF response (normal CSF protein, cell count < 100/mm3) and should therefore be continued for three months to one year, usually six months [3,4,6,9,14].
Antimicrobial treatment shortens the natural history of the disease, decreases the incidence of complications, and avoids relapses. There is strong evidence that tetracyclines are the drugs of choice for Brucella infection. They cause rapid resolution of symptoms, with defervescence occurring around the first 2-7 days depending on clinical presentation [5,8,9,15].
In conclusion, patients with a prolonged febrile illness and neurological manifestations call for careful evaluation of both clinical and epidemiological antecedents. Neurobrucellosis must be considered in Mexico, once other pathogens, causing nonspecific febrile symptoms, have been ruled out. | 2021-10-21T15:11:20.532Z | 2021-09-30T00:00:00.000 | {
"year": 2021,
"sha1": "9fb95c8d23735a8f3f453d909bf999a50aa7a02d",
"oa_license": "CCBY",
"oa_url": "https://www.jidc.org/index.php/journal/article/download/34669609/2633",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f231a8eca9d986b30546eabdf0027730265ac463",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7915759 | pes2o/s2orc | v3-fos-license | A molecular defect in virally transformed muscle cells that cannot cluster acetylcholine receptors.
Muscle cells infected at the permissive temperature with temperature-sensitive mutants of Rous sarcoma virus and shifted to the non-permissive temperature form myotubes that are unable to cluster acetylcholine receptors (Anthony, D. T., S. M. Schuetze, and L. L. Rubin. 1984. Proc. Natl. Acad. Sci. USA. 81:2265-2269). Work described in this paper demonstrates that the virally-infected cells are missing a 37-kD peptide which reacts with an anti-tropomyosin antiserum. Using a monoclonal antibody specific for the missing peptide, we show that this tropomyosin is absent from fibroblasts and is distinct from smooth muscle tropomyosins. It is also different from the two previously identified striated muscle myofibrillar tropomyosins (alpha and beta). We suggest that, in normal muscle, this novel, non-myofibrillar, tropomyosin-like molecule is an important component of a cytoskeletal network necessary for cluster formation.
A characteristic feature of the vertebrate neuromuscular junction is its enormously high concentration of acetylcholine receptors (AChRs)1. AChRs are uniformly distributed along embryonic muscle fibers, but accumulate at high concentrations at junctional sites in response to innervation. This accumulation is probably initiated by a factor of neural origin (12, 28, 43). In mature muscle, clusters are maintained by a component of the muscle's basal lamina, which has been studied in some detail (11, 39). How the muscle cell responds to molecules that induce clustering is not well understood, although it might be anticipated that cytoskeletal elements are involved. A variety of cytoskeletal molecules are present in either the postsynaptic region of adult muscle or beneath AChR clusters on cultured muscle cells. These include alpha-actinin, filamin, vinculin, talin, an intermediate filament-like molecule, nonmuscle actin, a 300-kD protein, and a 58-kD protein (5-7, 19, 22, 23, 33, 47, 49, 56; reviewed in reference 20). In addition, a 43-kD AChR-associated protein, thought to be capable of binding to both the AChR (10) and actin (53), is concentrated near AChR clusters (8, 21, 42, 46, 50).
The mere presence of a cytoskeletal molecule, however, does not guarantee its participation in clustering. Other structural changes occur during muscle cell development that could also require cytoskeletal reorganization. For instance, the synaptic region of muscle is characterized by extensive membrane folds. In addition, we have shown that AChR clustering initiates the sub-membrane localization and immobilization of a set of myonuclei and Golgi apparatus. This process probably results from a cytoskeletal reorganization beneath the cluster (17). To establish that particular cytoskeletal molecules have a direct role in clustering, evidence of a more functional nature must be obtained.

1. Abbreviations used in this paper: AChR, acetylcholine receptor; ALD, anterior latissimus dorsi; HRP, horseradish peroxidase; PLD, posterior latissimus dorsi; RSV, Rous sarcoma virus.
Previously, we reported that chick myotubes that are infected at the permissive temperature with a temperaturesensitive mutant (tsNY68) of Rous sarcoma virus (RSV) and allowed to fuse at the nonpermissive temperature do not cluster their AChRs in response to a variety of factors that increase greatly the number of AChR clusters in normal muscle cells (3). This suggests that the transformed cells have a functional defect in clustering. We have tried to identify differences between normal and transformed cells that might account for this defect. In this paper, we demonstrate that transformed cells have greatly decreased levels of a nonmyofibrillar protein that is labeled by an anti-tropomyosin antibody. This molecule could function in AChR clustering by stabilizing actin filaments.
Cell Culture
Preparation of chick myotube cultures and viral infection were carried out as described previously (3), except that cultures were grown in 35-mm tissue culture dishes in DME (Gibco, Grand Island, NY) with 10% horse serum (Gibco) and 5% chick embryo extract in an atmosphere containing 10% CO2. Rat muscle cell cultures were prepared as described previously, and cells were maintained in DME with 10% horse serum, 2% chick embryo extract, and 33 mM additional glucose (45). For immunofluorescence experiments, cells were grown on acid-washed, collagen-coated glass coverslips. Cells were lysed in a buffer that included 5% 2-mercaptoethanol; lysates were microfuged and stored at -20°C until use. Protein was determined by the method of Schaffner and Weissman (48). One-dimensional SDS-PAGE was performed according to the method of Laemmli (29).
Western Blot Analysis
Proteins resolved by SDS-PAGE were transferred to nitrocellulose by electroelution for 18-20 h at 150 mA constant current at room temperature in 20 mM Tris-HCl, 150 mM glycine, 20% methanol (52). Gels and nitrocellulose sheets were pre-soaked in the transfer buffer for at least 20 min before electroelution. Proteins transferred to nitrocellulose were reversibly stained with 0.2% Ponceau S (Serva Fine Biochemicals Inc., Garden City Park, NY) in 3% trichloroacetic acid.
Before antibody incubation, nitrocellulose blots were blocked with 10% horse serum or FCS (Gibco) in PBS for 1 h. The blots were incubated with the appropriate primary antibody overnight at 4°C or for 2 h at room temperature in PBS containing 10% serum, 0.1% Triton X-100, and 0.02% SDS (buffer A). They were then washed at room temperature three times for 10 min each in buffer A minus serum. The appropriate second antibody-horseradish peroxidase (HRP) conjugate (Hyclone Laboratories, Logan, UT) was diluted in buffer A, incubated with the blots at room temperature for 2 h, and washed as above. Blots were rinsed in water or 10 mM sodium citrate, pH 4.5, and then reacted with 0.01% hydrogen peroxide and 0.5 mg/ml 4-chloro-1-naphthol in the citrate buffer (24, 30). The reaction was stopped by rinsing the blots in water.
Monoclonal Antibodies
To prepare antigen for immunization of BALB/c mice, proteins from noninfected myotubes were resolved by SDS-PAGE, and the gels were stained with Coomassie Blue. The 37-kD band that was found by Western blotting analysis to react with an anti-tropomyosin antibody (see Results) was excised and washed in alternate 10-min cycles of 1 M ammonium bicarbonate, pH 8.8, and water. The gel slice was minced, and protein was extracted by overnight incubation in 1% SDS, 10 mM ammonium bicarbonate, pH 8.8, and 1 mM EDTA. About 50 µg protein was extracted into 1 ml of this buffer. Next, 0.5 ml of this solution was mixed with 0.5 ml Alu-Gel-S adjuvant (Serva) and injected into the peritoneum of mice at 5 w, 1.5 w, and 4 d before the fusion procedure. Mice were chosen for fusions by bleeding from the orbital sinus and checking titers against low molecular mass tropomyosins on mini-Western blots.
One day before the fusion, macrophage feeder layers were prepared (40). Fusion of spleen cells with NS-1 mouse myeloma cells was done using slight modifications (30) of standard procedures (40) with polyethylene glycol in 75 mM Hepes buffer (Boehringer Mannheim, Indianapolis, IN). HAT selection was carried out according to standard protocols. Hybridoma cells were grown in RPMI medium (Gibco) containing 10% FCS (Hyclone) and 10 mM sodium pyruvate, and supernatants were assayed by mini-Western blots to detect wells that were producing antibodies predominantly against the desired molecular weight band. These cells were cloned in 0.5% Seaplaque soft agar (FMC BioProducts, Rockland, ME) over feeder layers of rat embryo fibroblasts in the above medium. After ~2 w, clones were picked and diluted into medium in 96-well microtiter plates. The supernatants were again analyzed by mini-Western blot analysis.
Purification of Smooth Muscle Tropomyosin
Chicken gizzard tropomyosin was prepared by the method of Ebashi (16) as modified by Dr. Fumio Matsumura of Cold Spring Harbor Laboratory (Cold Spring Harbor, NY) (35). Briefly, 100 g frozen chicken gizzard (Pel-Freez Biologicals, Rogers, AK) was homogenized in 400 ml of a buffer containing 0.1 M NaCl, 20 mM Tris, pH 7.4, 1 mM EDTA, and 5 mM 2-mercaptoethanol, and centrifuged at 13,800 g for 20 min. The supernatant was saved, and the pellet was re-extracted and homogenized in 100 ml of the same buffer, but with 1 M NaCl, and again centrifuged at 13,800 g for 20 min. Both supernatants were heated separately at 100°C for 10 min, cooled on ice for 30 min, and centrifuged as before. An additional 5 mM 2-mercaptoethanol was added to each supernatant, which was subjected to 28-36% ammonium sulfate fractionation. The pellets were redissolved to a final protein concentration of 5 mg/ml in 20 mM Tris, pH 7.4, containing 5 mM 2-mercaptoethanol.
Myofibril Preparation
Chicken pectoral myofibrils were generously provided by Dr. Jim Dennis (Department of Anatomy and Cell Biology, Cornell University Medical College, New York, NY). Myofibrils were prepared from fascicles of anterior latissimus dorsi (ALD) and posterior latissimus dorsi (PLD), provided by Dr. Dennis, that had been stored at -20°C in buffer with 0.5% Triton X-100. The fascicles were minced into 1-mm pieces in K-buffer, which consisted of 0.1 M KCl, 10 mM sodium phosphate, pH 7.0, 5 mM MgCl2, 1 mM EGTA, 1 mM dithiothreitol (DTT), and 0.1 mM phenylmethylsulfonyl fluoride (PMSF) (Sigma Chemical Co., St. Louis, MO). The mince was then homogenized before centrifugation at 3,000 rpm for 20 min in a Sorvall SS-34 rotor. The pellets were washed three times with 40 ml of the same buffer and centrifuged as above. The final myofibril pellets were stored at -20°C in glycerol:buffer, 1:1. ALD and PLD muscle do not yield clean myofibrils because of the large amount of connective tissue in these muscles compared with that in pectoral muscle.
Immunofluorescence
Cultured Cells. Muscle cell cultures grown on glass coverslips were fixed at room temperature for 20 min in 3% paraformaldehyde in PBS containing 1 mM CaCl2 and 1 mM MgCl2, washed three times with PBS, permeabilized for 3 min in 0.1% Triton in PBS, and again washed three times with PBS. Nonspecific binding sites were blocked with 10% horse serum in DME for 15 min at room temperature. Hybridoma supernatants or rabbit antisera diluted in 10% horse serum were added to the coverslips for 30-60 min. The cells were washed three times with PBS, blocked for an additional 15 min in 10% horse serum in DME, and then labeled with the appropriate FITC-conjugated second antibody (Cooper Biomedical, Malvern, PA; diluted 1:150 in 10% horse serum in PBS) for 30 min before being washed three times in PBS and once in water and mounted with UV-inert mountant (Atomergic Chemetals Corp., Farmingdale, NY). Photographs were taken using appropriate filter combinations for fluorescein and rhodamine on a Zeiss ICM 405 microscope equipped with epifluorescence illumination (Carl Zeiss, Inc., Thornwood, NY).
Myofibrils
Aliquots of the myofibril suspension were adsorbed to gelatin-coated slides and incubated in 1% BSA in K-buffer for 30 min to block nonspecific binding sites. Antibodies diluted in 1% BSA in K-buffer were incubated with the tissue for 1 h. The slides were washed with K-buffer three times for 5 min each. The myofibrils were again incubated in 1% BSA in K-buffer for 10 min before being incubated with the appropriate FITC-conjugated anti-mouse or anti-rabbit immunoglobulin (diluted 1:150 in 1% BSA in K-buffer) for 30 min. The slides were washed and rinsed as before and mounted in 90% glycerol in PBS.
Identification of Myotube Tropomyosin Isoforms
Our interest in muscle cells transformed with temperaturesensitive mutants of RSV arose from our observation that these cells are unable to cluster AChRs even when maintained at the nonpermissive temperature, 42°C (3). We, therefore, compared these cells with normal muscle cells grown at 42°C. SDS-PAGE revealed, as anticipated, that proteins of the two cell types were quite similar. There were some differences, though. The most striking change was that one protein with an apparent molecular weight of 37 kD was virtually absent in infected myotubes that had been shifted to 42°C for 3-4 d (Fig. 1). This protein was also absent from cells shifted back to 37°C (data not shown). It seemed unlikely that this difference between the two types of culture was due to vastly differing numbers of myotubes. For all experiments, tsNY68-infected and uninfected cultures were inspected to determine that comparable numbers of myotubes were present. The numbers of cell surface AChRs in both types of cultures were also determined in some cases to ensure that they were comparable (3). The identity of the 37-kD peptide was suggested by experiments in which Western blots of equal amounts of total protein from cultures of normal and transformed myotubes were probed with a rabbit polyclonal antibody, TM#6 (kindly provided by Dr. Fumio Matsumura, Cold Spring Harbor Laboratories), prepared against chicken gizzard tropomyosin which cross-reacts with many tropomyosin isoforms (35). The 37-kD peptide in normal cells is labeled with TM#6 (Fig. 2). This immunoreactive peptide, however, is missing from tsNY68-infected myotubes kept at 42°C (Fig. 2) as well as from those shifted back to 37°C (data not shown). These experiments suggest that a molecule of 37 kD, which bears some structural similarity to a tropomyosin, is present in normal muscle cells, but absent from transformed ones. We will refer to this molecule as tropomyosin 2 since it is the peptide with the second highest molecular weight labeled by the anti-tropomyosin antibody.
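The apparent molecular masses quoted throughout (34, 36, 37 kD) are read off SDS-PAGE gels by interpolating against marker proteins, since log molecular weight is roughly linear in relative migration. The sketch below illustrates that calculation; all marker and band positions are invented placeholders, not measurements from Fig. 1.

```python
# Estimating an apparent molecular mass from SDS-PAGE migration.
import numpy as np

marker_kd = np.array([97.0, 66.0, 45.0, 31.0, 21.5, 14.4])
marker_rf = np.array([0.12, 0.25, 0.40, 0.58, 0.74, 0.88])

# Least-squares fit of log10(MW) vs. relative migration (Rf).
slope, intercept = np.polyfit(marker_rf, np.log10(marker_kd), 1)

band_rf = 0.50  # hypothetical migration of the band of interest
print(f"apparent mass ~= {10 ** (slope * band_rf + intercept):.1f} kD")
```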
Generation of a Monoclonal Antibody Against Tropomyosin 2
To characterize tropomyosin 2 more completely, we isolated it from SDS gels prepared from cultures of normal chick myotubes. We immunized mice with the gel-eluted material for the production of monoclonal antibodies. From one of our fusions, we obtained an antibody, D3-16, which preferentially labeled this peptide on Western blots of total muscle culture protein. In fact, D3-16 labeled this peptide strongly on Western blots more consistently than did TM#6 (see, for instance, Fig. 8). D3-16 also labeled a peptide with an apparent molecular mass of 43 kD, which was labeled by the anti-tropomyosin antibody TM#6 as well, and which will be referred to as tropomyosin 1 (Fig. 4). Tropomyosin 1 was distinct from alpha actin, which was not labeled by D3-16 in Western blots containing striated muscle myofibrillar protein (Fig. 4, lane 1). Western blots using D3-16 showed clearly that tropomyosin 2 was greatly decreased in tsNY68-infected cells (Figs. 4 B and 8 C).
Characterization of Tropomyosin 2 Expression
Cellular Specificity. We were interested in determining the cellular specificity of tropomyosin 2, its relationship to other muscle tropomyosins, and its distribution within muscle cells. To accomplish these goals, we used Western blotting and immunofluorescence techniques to compare the reactivity of D3-16 with that of TM#6 and with that of a mouse monoclonal antibody, CH 1, that reacts only with alpha and beta skeletal tropomyosins (31; kindly provided by Dr. Jim J.-C. Lin, Department of Biology, University of Iowa, Iowa City, IA).
The first issue we addressed was whether tropomyosin 2 truly was present only in normal striated muscle cells in culture or whether it was additionally found in fibroblasts. This was important because even though both infected and noninfected myotube cultures were treated with cytosine arabinoside for 48 h to decrease the number of fibroblasts, tropomyosins from residual fibroblasts might still have contributed to the many isoforms found in myotube cultures. In that case, the difference between normal and transformed cultures in the level of tropomyosin 2 might have been due to differing numbers of fibroblasts. Fig. 3 is a representative Coomassie Blue-stained SDS-polyacrylamide gel of total cell protein from fibroblast cultures (lane 2), tsNY68-infected myotubes (lane 3) and noninfected myotubes (lane 4), both grown at 42°C. Pectoral muscle myofibrils, which contain alpha, but little beta, tropomyosin were included to mark the position of that peptide (lane 5). Alpha-actin is also a major component of the pectoral myofibrils. Chicken gizzard tropomyosins at "~35 and 43 kD are also presented for comparison. As expected, the cultured cell protein preparations contained numerous polypeptides and especially prominent actin bands. However, fibroblasts did not have either a major peptide that comigrated with tropomyosin 2 or a form that comigrated with tropomyosin 1 present in both noninfected and infected myotubes.
A Western blot prepared from a similar gel is shown in Fig. 4. The position of alpha-tropomyosin is specified by TM#6 reactivity with pectoral myofibrils, and the position of tropomyosin 2 by D3-16 reactivity with cultured muscle. As judged by reactivity with both TM#6 and D3-16, it is clear that fibroblasts did not contain any immunoreactive tropomyosin 2. They did contain, however, two tropomyosin isoforms migrating just above and below the 43-kD myotube form, but these did not stain well with TM#6 and did not stain at all with D3-16 (Fig. 4).
These results were consistent with those obtained from immunofluorescence studies. Normal myotubes from both chick and rat embryos stained diffusely with D3-16, but both types of fibroblast failed to stain (Figs. 5 and 6). Infected chick myotubes also did not stain (data not shown). Taken together, these experiments suggest that the 37-kD peptide missing from tsNY68-infected muscle cell cultures is not present in fibroblasts. This also demonstrates that differences between normal and tsNY68-infected myotube cultures could not have been related to the presence of residual fibroblasts in these cultures.
We also compared myotube tropomyosins to those purified from chicken gizzard. The major tropomyosins found in chicken gizzard had apparent molecular masses of 35 and 43 kD, clearly different from that of tropomyosin 2 (Fig. 3, lane 1). These peptides were labeled by TM#6 (Fig. 4 A, lane 5), but not by D3-16 (Fig. 4 B, lane 5). Thus, D3-16 recognizes an epitope that is not present in smooth muscle tropomyosins.
Differences between Tropomyosin 2 and Myofibrillar Tropomyosins
We were next interested in determining if tropomyosin 2 is one of the previously recognized striated muscle tropomyosin isoforms. Two forms of muscle tropomyosin have been well characterized. Myofibrillar beta tropomyosin has an apparent molecular mass of ~36 kD and is present in slow and mixed muscle types and in cultured myotubes (2). Myofibrillar alpha tropomyosin has an apparent molecular mass of 34 kD and is present in fast and slow muscle and in cultured myotubes. Montarras et al. (38) studied the expression of alpha and beta tropomyosins in myotubes formed from myoblasts transformed at 37°C with tsNY68 and shifted to 42°C. They found that expression of alpha-tropomyosin began soon after the shift to the nonpermissive temperature, but that beta-tropomyosin expression did not peak until 48 h later. West and Boettiger (54) reported that synthesis of alpha and beta tropomyosin in RSV tsLA24-infected myotubes decreased ~40% after a 29-h shift from nonpermissive to permissive temperatures.

Figure 6. Immunofluorescence of rat myotube culture labeled with D3-16. Fluorescence (A) and phase (B) images are shown. D3-16 labels these myotubes uniformly, but again does not label fibroblasts that are present throughout the culture. Bar, 10 µm.
Experiments with pectoral muscle myofibrils demonstrated, on the basis of molecular weights and reactivity with D3-16 (Figs. 3 and 4), that alpha tropomyosin and tropomyosin 2 are different. However, because of the similarity in molecular weights between beta tropomyosin and tropomyosin 2 and because of the previously reported effects of transformation on muscle tropomyosin expression, it was important to determine whether the deficient 37-kD polypeptide was beta tropomyosin.
Comparisons between myofibrillar tropomyosins and tropomyosin 2 were again made by Western blot analysis and by immunofluorescence using the different antibodies. Myofibrils from ALD and PLD muscles and total protein from noninfected and infected myotubes were first resolved by SDS-PAGE and stained with Coomassie Blue (Fig. 7). These preparations contained relatively high levels of contaminating proteins because of the difficulty in obtaining pure myofibrillar preparations (compared with pectoral myofibrils, as seen in Fig. 3). Myofibrils from both PLD and ALD had major bands at 34 and 36 kD (lanes 1 and 2). In addition, ALD myofibrils had a band at ,o37 kD, as did noninfected cultured muscle cells. The 37-kD band was clearly resolved from that migrating at 36 kD. Thus, on the basis of apparent molecular weights tropomyosin 2 is different from beta tropomyosin.
The difference between tropomyosin 2 and beta tropomyosin was confirmed by Western blotting; blots of proteins prepared from similar SDS gels are presented in Fig. 8. The 34- and 36-kD bands in ALD, PLD, and normal cultured muscle were labeled by both TM#6 (Fig. 8 A) and CH 1 (Fig. 8 B) and must, therefore, correspond to alpha and beta myofibrillar tropomyosin. Staining of beta tropomyosin appeared heavier in PLD than in ALD. CH 1 did not label tropomyosin 2 in normal cultured muscle cells, though (Fig. 8 C). Furthermore, D3-16 did not label any proteins present in PLD myofibrils, although it lightly labeled a 37-kD band in some preparations of ALD myofibrils.
Immunofluorescence of fixed and permeabilized cells also revealed differences in the staining patterns of the three antibodies and emphasized the difference between myofibrillar tropomyosins and tropomyosin 2. Both TM#6 and CH 1 stained cultured myotubes in a striated fashion. D3-16 stained chick and rat myotubes brightly, but uniformly (Figs. 6 and 7).
Differences were also observed when the antibodies were used to label myofibrils isolated from ALD muscle, which contained both alpha and beta tropomyosins, or from pectoral muscle. TM#6 and CH 1 labeled the myofibrils in a striated fashion. D3-16, on the other hand, labeled diffusely (in the case of ALD myofibrils) or not at all (Fig. 9). These results strongly support the idea that tropomyosin 2 is not myofibrillar.

Figure 7. Tropomyosin 2 (arrow) is missing from PLD myofibrils, but is present in ALD myofibrils. It is distinct from beta (β) and alpha (α) tropomyosins present in both types of myofibrils. Positions of molecular mass standards are given on the left.
Other Differences between Normal and Infected Muscle Cells
On occasion, there were decreased amounts of beta tropomyosin (Fig. 8 A, lanes 2 and 3), as observed previously (54), as well as of tropomyosin 2, in the infected muscle cells. In addition, tropomyosin 1 was sometimes present in increased amounts in the transformed cells (Fig. 8 A). CH 1 labeling (Fig. 8 B) suggested that alpha tropomyosin may have been decreased in transformed cells also, but this was not observed consistently. Also, a peptide with the approximate molecular mass of myosin was decreased in most experiments (Figs. 1 and 3).
Absence of Tropomyosin 2 in Cells Unable to Cluster AChRs
In a previous paper, we reported on properties of chick skeletal muscle cells infected with a temperature-sensitive mutant of RSV (3). In those experiments, myoblasts were infected at the permissive temperature and were unable to fuse as long as they were kept at this temperature. When the cells were shifted to the nonpermissive temperature, they formed myotubes that were then unable to cluster AChRs, although they resembled normal cells in many other respects. Muscle cells infected with tdl07A, a transformation-defective RSV that lacks the src gene, were still able to cluster their AChRs.
Thus, we hypothesized that the presence of pp60src somehow interfered with the clustering process.
We have identified a major deficit in the infected cells: greatly decreased amounts of a 37-kD protein that is labeled by a polyclonal anti-tropomyosin antiserum. Its reactivity with this antiserum suggests that it is a tropomyosin isoform, and we have referred to it tentatively as tropomyosin 2. The resemblance between the 37-kD peptide and the class of tropomyosins was further indicated by Western blot analysis of two-dimensional gels using TM#6. A highly acidic protein, similar in isoelectric point to the tropomyosins, was absent from lysates of the infected myotubes (data not shown).

Figure 8. … were resolved by SDS-PAGE. As before, TM#6 stains several tropomyosin isoforms including the predominant alpha (α) and beta (β) forms in the myofibrils. In this figure, TM#6 does not label tropomyosin 2 strongly. CH 1 stains mainly alpha and beta tropomyosins, but it stains alpha more intensely than it does beta, even though both are present in roughly equal amounts as judged by Coomassie Blue staining (Fig. 7). D3-16 labels tropomyosin 1 and tropomyosin 2, which is absent from PLD myofibrils but present in small amounts in the ALD myofibril preparations (probably due to contamination with cytoplasmic or cytoskeletal material). Positions of molecular weight standards are given on the left.
The possibility that tropomyosin 2 is a tropomyosin-like protein is also supported by the observation that RSV-transformed fibroblasts are known to be missing a tropomyosin isoform (26, 27, 32, 34). That particular form, however, must be different from the one we have studied, which is not present in fibroblasts. Amino acid analysis of the 37-kD peptide will assist in confirming its molecular identity. We do not yet know how the presence of pp60src controls the amount of tropomyosin 2 in tsNY68-infected myotubes. The presence of pp60src in other kinds of infected cells is also known to affect synthesis of particular proteins. These include fibronectin (1), collagen isoforms (1, 15), caldesmon (41), and alpha-actin (55). As mentioned above, tropomyosin isoforms have also been shown to change in transformed fibroblasts (see also reference 14). Thus, there is certainly precedent for the type of alteration we have observed. It is not clear how any of these effects are related to pp60src's activity as a tyrosine protein kinase.
We also do not know why the amounts of tropomyosin 2 remain low even when the cells are maintained at the nonpermissive temperature. In most cases, effects on particular proteins are temperature-sensitive when cells are infected with ts-mutants. It is possible that, in our experiments, pp60src had an effect at the permissive temperature that was not reversible when the temperature was elevated. This difference may reflect differences in the response to transformation between terminally differentiated nonreplicative cells, such as multinucleated muscle cells, and cells that remain able to multiply. This idea is supported by the observation of West and Boettiger (54) that tsLA24-RSV infected myotubes maintained for several days at 42°C still lack well-developed myofibrils. In our experiments as well, most noninfected myotubes were elongated and striated, with well-organized myofibrils and stress fibers. However, transformed myotubes seldom had striations and often appeared very broad and flattened. Also, Miskin et al. (36) reported elevated plasminogen activator levels in tsNY68-infected myotubes kept at 42°C compared with fibroblasts treated similarly. These results support the present observation that there are differences, even at 42°C, between normal and infected myotubes in the types or levels of proteins synthesized.
Properties of Tropomyosin 2
We used a monoclonal antibody generated against gel-eluted tropomyosin 2 to study its properties. In particular, tropomyosin 2 was shown to be absent from rat and chick fibroblasts and to be distinct from smooth muscle tropomyosins. Western blot analysis and immunocytochemical studies indicate that tropomyosin 2 is not predominantly myofibrillar. The small amount of labeling of ALD myofibrils probably reflects contamination from cytoplasmic or cytoskeletal tropomyosin 2. Our experiments distinguish especially between the 36-kD myofibrillar beta-tropomyosin recognized by TM#6 and CH 1 antibodies and the 37-kD non-myofibrillar tropomyosin 2 recognized by TM#6 and D3-16.
If tropomyosin 2 is non-myofibrillar, what might its location be within muscle cells? Preliminary experiments involving the immunoprecipitation of [32P]orthophosphate labeled infected and noninfected myotubes with TM#6 followed by SDS-PAGE and autoradiography revealed two phosphoproteins from both infected and noninfected cells at 34 and 36 kD that did not comigrate with tropomyosin 2 (data not shown). These are presumably alpha and beta tropomyosin, which are both known to be phosphorylated (37,38). This suggests that tropomyosin 2 resembles more closely the nonmyofibrillar tropomyosins, which are not phosphorylated (25,37). Thus, tropomyosin 2 may be a cytoskeletal molecule in muscle cells.
Tropomyosin 2 May Function as Part of a Sub-Cluster Cytoskeletal Network
We and others have found the 43-kD AChR-associated protein and a nonmuscle form of actin to be concentrated in the vicinity of AChR clusters and in the postsynaptic region of the adult neuromuscular junction (5,8,21,42,47). Our previous work showed that the initial formation of AChR clusters, but not the integrity of existing clusters, is disrupted by cytochalasin treatment (17). One interpretation of these results is that an essential part of AChR clustering is the formation of a network of actin filaments beneath clusters, which then is somehow stabilized against disruption by cytochalasins. In other cell types, tropomyosins are thought to form part of cytoskeletal networks, presumably by binding to actin filaments (14,51). Under some circumstances, tropomyosins also appear to stabilize actin networks (4,18). Thus, it might well be that tropomyosin 2 is an important component of the muscle's cytoskeleton, binding to actin filaments forming beneath nascent clusters. The role of the 43-kD protein would be to serve as a link between the AChR and actin. Evidence presented in this paper suggests that tropomyosin 2 is present throughout embryonic muscle fibers grown in cell culture. Recently, we have found that tropomyosin 2 is enriched in junctional regions of adult rat intercostal fibers. This would be consistent with a crucial role in AChR clustering. The presence of tropomyosin 2 in other regions of muscle cells may indicate that it is also involved in other processes in developing muscle that are dependent on cytoskeletal organization, such as the formation of contractile filaments (19), which, as was pointed out above, is defective in the transformed cells.
In conclusion, we have shown that RSV-infected muscle cells that are unable to cluster AChRs are missing a 37-kD peptide. The missing peptide is labeled by an anti-tropomyosin antibody and may prove to be a novel muscle cytoskeletal tropomyosin. Such a cytoskeletal component might mediate AChR clustering and may also play a role in other processes within muscle cells, such as the assembly of myofibrils. We are in the process of microinjecting our monoclonal anti-tropomyosin 2 antibody into normal myotubes to see if it has any effects on AChR clustering. We also intend to introduce tropomyosin 2 into tsNY68-infected myotubes to see if this allows them to cluster AChRs. These types of experiments may give us a more direct indication of the role of tropomyosin 2 in AChR clustering.
"year": 1988,
"sha1": "62c60bb463b196622f5a20e4e44ee72f674fab0b",
"oa_license": "CCBYNCSA",
"oa_url": "http://jcb.rupress.org/content/106/5/1713.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "6e1bd50311676fefca5d369ddad484d4a58fd06b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Face tracking with camshift algorithm for detecting student movement in a class
Face detection (face tracking) has been widely applied for various purposes, including in the fields of entertainment, education and security. Face detection can be performed with a camera in real time; for example, a laptop camera or a camera installed in a room can detect facial movements as they occur. In this work, face detection is implemented using the camshift algorithm. The camshift algorithm operates on a search window that locates facial movement in each frame, and it calculates the size and location of the search window to be used for the next frame. The distribution used is the hue channel of the HSV color space (Hue, Saturation, Value); working with the hue distribution helps to cope with differences in human skin color and in the background present when frames are captured.
Introduction
In information technology, biometrics usually refers to technology for measuring and analyzing characteristics of the human body, such as fingerprints, eyes, voice patterns and facial patterns, originally used for authentication [1]. Face tracking is a way of detecting changes in an image from one frame to the next in order to find the location of a face [2][4][24]. Face tracking plays an important role in computer vision and can be applied widely, for example in automatic surveillance, traffic monitoring and robot vision [2][21]. Tracking facial movement is complex because of flexible changes in movement, lighting and viewpoint [3][19]. In automatic surveillance, the goal is to find face objects on a person in a room. The camera captures the whole body of a person entering the room, so the face must be located within that body object in order to recognize the person [4][17]. Several factors affect face-tracking results, namely occlusion by other objects and backgrounds that share the characteristics and colors of the target [5][19][22]. A method is therefore needed that remains accurate and robust to such changes in the object [6].
Viola-Jones is a method widely used by other researchers and is quite good at handling face tracking in real time, but it requires a long processing time, works best with a frontal head position, and has difficulty detecting dark-skinned faces. The Local Binary Pattern (LBP) method is very effective for describing image features. LBP has advantages such as high computational speed and rotation invariance, facilitating its extensive use in image capture, texture analysis, facial recognition and image segmentation. However, LBP struggles to detect faces when there are small changes in the face, and it is not sufficiently accurate because it operates only on binary and grayscale images [7][13][27].
AdaBoost is a boosting method from the field of machine learning. It produces very accurate predictions by combining several relatively weak and inaccurate rules. The method is easy to apply and handles multiple faces in one image quite well. However, even though it produces results that match the searched object well, its computation also requires considerable time [8][15].
The SMQT Features and SNoW Classifier method proceeds in two steps. The first step handles face illumination, searching pixel information in the image in order to detect the face. The second step classifies objects with the aim of producing the face-detection result. However, the illumination step can misjudge skin color; for example, a gray region of an image may be detected as a face [9][21][26].
Several studies have found the camshift method to work very well for object tracking. Camshift algorithms are used in security, vehicle navigation, surveillance cameras, driver-assistance systems, biometrics, video games and industrial automation [10][11][15][20]. The Camshift (Continuously Adaptive Mean-Shift) algorithm, an improvement on the mean-shift method, offers good tracking quality on various objects based on their base color [9][23][27]. The CamShift method also combines well with other methods, as shown in various studies. This study therefore designs an application that is able to detect faces using the CamShift algorithm.
Related research
Zhang NaNa and Zhang Jin studied optimization of face tracking using camshift algorithms. Their approach first looks for a moving object and then decides whether that object is a person or something else: if a face is found on the object, it is determined to be a person caught on camera; if no face is found, it is classified as an ordinary moving object [12][23].
Muhammad Haris Khan and John McDonagh examined face tracking under the various expressions found in an image, aiming to capture the best possible face and reduce errors when tracking faces with the camshift algorithm. Their study achieves a slight improvement because it captures not only the face of the object but also its expression, and it could be developed further to infer the feeling being experienced by the tracked person [14][28].
Cihan H. Dagli developed research on tracking many faces in one image, in which the K-Means method determines how many faces the image contains. That study collects the number of faces and traces them one by one, using facial skin color as a reference and ignoring other colors to speed up correct face determination [13][29].
Camshift Algorithm
CamShift stands for Continuously Adaptive Mean Shift, which is the development of the Mean Shift algorithm applied continuously (repetitively) to adapt to the ever-changing color probability distribution at each frame change of the video sequence [18][23]. The steps of the CamShift algorithm are as follows. For the skin-color probability image, the mean location (centroid) within the search window is found from the image moments. The zeroth moment is

$$M_{00} = \sum_{x}\sum_{y} I(x, y) \tag{1}$$

the first moments for $x$ and $y$ are

$$M_{10} = \sum_{x}\sum_{y} x\, I(x, y) \tag{2}$$

$$M_{01} = \sum_{x}\sum_{y} y\, I(x, y) \tag{3}$$

and the location of the mean of the search window (the centroid) is

$$x_c = \frac{M_{10}}{M_{00}}, \qquad y_c = \frac{M_{01}}{M_{00}} \tag{4}$$

where $I(x, y)$ is the pixel (probability) value at position $(x, y)$ of the image, and $x$ and $y$ range over the search window.
The 2-dimensional (2D) orientation of the face object is obtained from the second moments

$$M_{20} = \sum_{x}\sum_{y} x^2 I(x, y), \qquad M_{02} = \sum_{x}\sum_{y} y^2 I(x, y), \qquad M_{11} = \sum_{x}\sum_{y} x\,y\, I(x, y) \tag{5}$$

Defining the intermediate quantities

$$a = \frac{M_{20}}{M_{00}} - x_c^2, \qquad b = 2\left(\frac{M_{11}}{M_{00}} - x_c y_c\right), \qquad c = \frac{M_{02}}{M_{00}} - y_c^2 \tag{6}$$

the object's orientation is

$$\theta = \frac{1}{2}\arctan\!\left(\frac{b}{a - c}\right) \tag{7}$$

and the length $l$ and width $w$ of the centroid distribution are

$$l = \sqrt{\frac{(a + c) + \sqrt{b^2 + (a - c)^2}}{2}}, \qquad w = \sqrt{\frac{(a + c) - \sqrt{b^2 + (a - c)^2}}{2}} \tag{8}$$

The use of these equations in the face tracking system yields $x$, $y$, face rotation, and length and width (area, or the $z$ value).
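As a concrete illustration of Eqs. (1)-(8), the following is a minimal NumPy sketch of the per-window moment calculations. The function name and window convention are our own illustrative assumptions, not code from the original work:

```python
import numpy as np

def camshift_window_stats(prob, x0, y0, w, h):
    """Centroid, orientation and extent of the colour-probability
    distribution inside a search window, following Eqs. (1)-(8).
    `prob` is a 2-D array of back-projected skin-colour probabilities."""
    roi = prob[y0:y0 + h, x0:x0 + w].astype(np.float64)
    ys, xs = np.mgrid[y0:y0 + h, x0:x0 + w]   # pixel coordinate grids

    m00 = roi.sum()                            # zeroth moment, Eq. (1)
    if m00 == 0:                               # empty window: nothing to track
        return None
    m10, m01 = (xs * roi).sum(), (ys * roi).sum()  # first moments, Eqs. (2)-(3)
    xc, yc = m10 / m00, m01 / m00                   # centroid, Eq. (4)

    # central second-moment quantities, Eqs. (5)-(6)
    a = (xs ** 2 * roi).sum() / m00 - xc ** 2
    b = 2 * ((xs * ys * roi).sum() / m00 - xc * yc)
    c = (ys ** 2 * roi).sum() / m00 - yc ** 2

    theta = 0.5 * np.arctan2(b, a - c)         # orientation, Eq. (7)
    root = np.sqrt(b ** 2 + (a - c) ** 2)
    length = np.sqrt(((a + c) + root) / 2)     # Eq. (8)
    width = np.sqrt(max((a + c) - root, 0) / 2)
    return (xc, yc), theta, (length, width)
```

In practice, CamShift iterates these statistics, re-centering the window on $(x_c, y_c)$ and rescaling it from $M_{00}$ until the window converges for the current frame.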
The Proposed Model
The process starts by displaying the application form; once the camera is ready to record, frame capture begins. Each captured frame is converted from RGB to HSV. RGB stands for red, green, blue: colors in this model are mixtures of the primary colors red, green and blue in certain proportions. The HSV image is then converted to grayscale and then to binary, and the camshift algorithm is run with the size and location of the search window and the color-probability image as input, storing the zeroth moment. For every change of video frame, the calculation is repeated over a region centered on the search window but slightly larger than it. When the camshift calculation succeeds, a detection box is drawn on the face that was found and the application displays the tested results (a code sketch of this loop follows below). To explain the processes that occur in the face tracking application using the camshift algorithm, the authors use a flowchart; the flowchart of the designed process is shown in Figure 1.
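The loop described above corresponds closely to the standard OpenCV CamShift recipe. The sketch below is a hedged reconstruction of that pipeline, not the authors' actual application code; the initial window coordinates, hue mask thresholds and termination criteria are illustrative assumptions:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)          # surveillance or laptop camera
ok, frame = cap.read()             # assumes a camera is attached

# initial search window (x, y, w, h); chosen by hand here for illustration
track_window = (200, 150, 100, 100)
x, y, w, h = track_window
roi = frame[y:y + h, x:x + w]

# hue histogram of the face region (HSV colour space)
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# stop after 10 iterations or when the window moves by < 1 px
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

    # CamShift adapts the window size and orientation every frame
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)

    pts = cv2.boxPoints(rot_rect).astype(np.int32)   # rotated detection box
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("face tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:                 # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

In a full application, a face detector (e.g. Viola-Jones) would typically seed the initial window instead of the hand-picked coordinates used here.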
Results and Discussion
The research requires a surveillance camera placed in the classroom at an angle that allows it to capture some of the faces in the room. This study uses a surveillance camera with a resolution of 1 megapixel. The camera was mounted at a height of 2 meters: this height allows the camera to capture faces, whereas mounting it higher would capture only the tops of heads. The discussion uses example images taken in the Faculty of Education of the English Literature Study Program at Universitas Prima Indonesia. The images serve as research material for finding faces in several photos, which are recordings from the surveillance camera installed in class. From the surveillance camera connected to the application, 21 image frames were taken over 21 seconds of video recording as test material for displaying face results; Figure 2 is an example image from the first second. In each image captured from the video camera, the size and initial location of the search window are first determined, and then the calculation is carried out over a region centered on the search window but larger than it. Twenty-one frames were used because they suffice to show different movements. Each detected face becomes a tracking sample. A total of 21 images were taken and sorted according to the facial poses they contain, as in Figure 3, so that the students in the class can be identified from the photos.
Figure 3. Image of Dataset
From the results obtained, Figure 4 contains 8 students, but only 6 of them were detected. This is caused by two things: first, some students' face poses are not visible in the photo; second, some faces are more than three meters from the camera, and application testing showed that students farther than 3 meters away are not detected. A frontal face caught on camera is the face the application processes to determine whose face it is. At a distance of around 3 meters, a face may or may not be captured, because greater distance makes it increasingly difficult for the camera to capture a face, and without a captured face no name tagging can be done. At distances under 3 meters, a suitably frontal face is marked and the application labels it with a name; the results can be seen in Table 1. Face tracking can be considered successful for faces in a frontal position. The name assigned to a face is a prediction computed by the camshift-based application rather than a verified identity: if the application predicts that a captured face is person A, it will label that face as A whenever it is caught on camera.
Conclusion
The conclusions that the researchers draw from the implementation results of this application are as follows. The face tracking used in this study takes the form of a system process that determines when to start and stop recording students entering the classroom. Data synchronization is performed only when the program matches student names with faces in the database against those in the room. The implementation of the camshift algorithm for real-time face tracking with this camera is fully successful when performed in an open or well-lit room with the face roughly perpendicular to the camera; conversely, faces often go undetected in closed rooms, in poor lighting, or when the distance between the object and the camera is not ideal (too close or too far away). This work can still be extended to other settings, such as offices or activities that require face tracking. In this way, the Camshift algorithm can be used to build face-tracking applications that detect the faces of students in a room by matching program data in the database with the people in the room.
"year": 2019,
"sha1": "11b271ba7d914b6e6f9a9e0369c7d40d3794f23a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1230/1/012018",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "03a5b65e8cee523ed667e39f393ecc96cb25caed",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
Understanding the differential impacts of COVID-19 among hospitalised patients in South Africa for equitable response
BACKGROUND
There are limited in-depth analyses of COVID-19 differential impacts, especially in resource-limited settings such as South Africa (SA).
OBJECTIVES
To explore context-specific sociodemographic heterogeneities in order to understand the differential impacts of COVID-19.
METHODS
Descriptive epidemiological COVID-19 hospitalisation and mortality data were drawn from daily hospital surveillance data, National Institute for Communicable Diseases (NICD) update reports (6 March 2020 - 24 January 2021) and the Eastern Cape Daily Epidemiological Report (as of 24 March 2021). We examined hospitalisations and mortality by sociodemographics (age using 10-year age bands, sex and race) using absolute numbers, proportions and ratios. The data are presented using tables received from the NICD, and charts were created to show trends and patterns. Mortality rates (per 100 000 population) were calculated using population estimates as a denominator for standardisation. Associations were determined through relative risks (RRs), 95% confidence intervals (CIs) and p-values <0.001.
RESULTS
Black African females had a significantly higher rate of hospitalisation (8.7% (95% CI 8.5 - 8.9)) compared with coloureds, Indians and whites (6.7% (95% CI 6.0 - 7.4), 6.3% (95% CI 5.5 - 7.2) and 4% (95% CI 3.5 - 4.5), respectively). Similarly, black African females had the highest hospitalisation rates at a younger age category of 30 - 39 years (16.1%) compared with other race groups. Whites were hospitalised at older ages than other races, with a median age of 63 years. Black Africans were hospitalised at younger ages than other race groups, with a median age of 52 years. Whites were significantly more likely to die at older ages compared with black Africans (RR 1.07; 95% CI 1.06 - 1.08) or coloureds (RR 1.44; 95% CI 1.33 - 1.54); a similar pattern was found between Indians and whites (RR 1.59; 95% CI 1.47 - 1.73). Women died at older ages than men, although they were admitted to hospital at younger ages. Among black Africans and coloureds, females (50.9 deaths per 100 000 and 37 per 100 000, respectively) had a higher COVID-19 death rate than males (41.2 per 100 000 and 41.5 per 100 000, respectively). However, among Indians and whites, males had higher rates of deaths than females. The ratio of deaths to hospitalisations by race and gender increased with increasing age. In each age group, this ratio was highest among black Africans and lowest among whites.
CONCLUSIONS
The study revealed the heterogeneous nature of COVID-19 impacts in SA. Existing socioeconomic inequalities appear to shape COVID-19 impacts, with a disproportionate effect on black Africans and marginalised and low socioeconomic groups. These differential impacts call for considered attention to mitigating the health disparities among black Africans.
The pandemic posed massive challenges in securing hospital beds. [7] In one instance, private sector hospitals could not accommodate additional patients and referred patients to public hospitals. [8,9] Mortuaries were also overwhelmed and could not cope with burials. [10,11] The pandemic has resulted in overstretched public funds, [12] economic disruption coupled with high rates of unemployment, [13] increased public discord on the best strategy to manage the pandemic, [14] and declining population mental health. [15,16] The SA government responses included a range of COVID-19 impact mitigation strategies such as social welfare support, social relief grants, public-private hospital partnerships, and job creation initiatives for vulnerable populations. [17,18] While these interventions brought temporary relief, they remain unsustainable.
SA is characterised by heterogeneities in socioeconomic status, exposure risk, poverty levels, and healthcare and healthcare access, among others. [19,20] These heterogeneities may result in differential impacts of COVID-19 across groups. Differential impacts of COVID-19 have been reported elsewhere. [21][22][23] Sociodemographic factors may increase the risk of COVID-19 diagnoses and deaths in black African communities. [24] There are limited in-depth analyses of COVID-19 differential impacts, especially in resource-limited settings such as SA. [25] It is critical to understand the differential impacts of COVID-19 severity and mortality to guide control measures, inform epidemiological models, ensure appropriate resource allocation, and ultimately attain an equitable response.
Objectives
To explore available information to strengthen the evidence base for context-specific heterogeneities in sociodemographics that may potentiate or reduce COVID-19 adverse outcomes.
Methods
This descriptive epidemiological study utilised data from the NICD Surveillance Update Reports, the Daily Hospital Surveillance (DATCOV) report and the Eastern Cape Daily Epidemiological Report (EC Report) for SARS-CoV-2. [6,26] We abstracted cumulative COVID-19 cases, recoveries and mortalities as well as sociodemographic characteristics (e.g. age, gender, race, type of facility) reported between 6 March 2020 and 24 January 2021 from DATCOV [6] and as of 24 March 2021 from the EC Report; [26] we could not obtain data for identical periods, but the epidemiological trends we observed did not differ substantially.
To assess differential impacts of COVID-19, we examined hospitalisations and mortality by sociodemographics such as age, sex and race using absolute numbers, proportions, ratios and rates per 100 000 people. In addition, for COVID-19 hospitalisation we requested descriptive epidemiological data from the NICD using 215 028 COVID-19 hospitalisations reported from 644 healthcare facilities, 393 in the public and 251 in the private sector, between 6 March 2020 and 24 January 2021. The data provided by the NICD were presented in the form of tables and stratified as follows using absolute numbers and proportions: COVID-19 hospitalisations by age, sex and race, and COVID-19 mortality by age, sex, and race. We also assessed descriptive epidemiological data from the EC Report using 31 498 COVID-19 hospitalisations reported from eight facilities, 22 101 in public and 9 397 in private sector facilities, as of 24 March 2021.
Data were reported in 10-year age bands. We assumed that actual ages were uniformly distributed within these bands. The race comparisons are based on the data for people whose race was known.
We constructed charts for each table that the NICD initially provided to assess trends and patterns of hospitalisations and deaths according to sociodemographics in a simple manner.
The data from the NICD DATCOV report that we utilised were based on available information at the time of reporting. There might therefore be slight changes if new information from the laboratory or districts was submitted late. [6] Using the cross-tabulations from NICD descriptive data, we manually computed relative risk (RR), 95% confidence intervals (CIs) and p-values <0.001 for hospitalisations and mortality.
Mortality rates were calculated using population estimates as a denominator for each race and age group for standardisation. 'COVID-19 death is defined for surveillance purposes as a death resulting from a clinically compatible illness in a probable or confirmed COVID-19 case; unless there is a clear alternative cause of death that cannot be related to COVID-19; there should be no period of complete recovery between the illness and death.' [27] The case fatality rate (CFR) used was taken from the NICD Epidemiological Report and the EC Report. It was calculated as COVID-19 deaths divided by COVID-19 deaths plus COVID-19 discharges, excluding individuals still in hospital. [6,26]

DATCOV is a hospital-based surveillance system that the SA government endorsed on 15 July 2020. This surveillance system monitors COVID-19 hospital admissions, related health outcomes, and the epidemiology of COVID-19 among hospitalised patients to guide control measures and resource allocation. [6] To date, all 644 SA hospitals (251 private and 393 public hospitals) report COVID-19 hospitalisations to DATCOV. The surveillance system consolidates records received from hospitals on age, sex, race, occupation, comorbid conditions, and outcomes of hospital admission among confirmed SARS-CoV-2 cases (positive reverse transcription-polymerase chain reaction assay and antigen tests) who stay in hospital for 1 full day or longer. [6,26] The NICD conducts quality checks on merged data by removing duplicate entries, adding missing data extracted from other sources, and removing patients found to be COVID-19-negative compared with the laboratory master list of all COVID-19 cases. [6,26] Furthermore, the NICD conducts validation checks to identify data errors, and routine checks for correctness, completeness and outliers; they then address these to ensure data quality. The DATCOV surveillance system is further enriched through the Notifiable Medical Conditions (NMC) database and the Master Line List (NMC-SS). Daily patient line lists and summary reports are produced from DATCOV and shared with respective provinces. The NICD has also created application programming interfaces to ensure direct access to the database by key stakeholders who need them for public health interventions. [6] The Ministerial Advisory Committee and the Inter-Ministerial Task Team use the data to inform modelling and guidelines on clinical and public health measures.

The DATCOV surveillance system received approval from the Human Research Ethics Committee (Medical), University of the Witwatersrand (ref. no. M160667). All personal information concerning patients and their health status, treatment, or stay in a health establishment is kept confidential on a secure server and de-linked to ensure anonymity. Similarly, NICD reports contain aggregated patient information without personal identifiers. The names of hospitals are also anonymised when reporting the findings. All data stored in DATCOV are only shared with individuals involved in the surveillance system with password encryption. The NICD surveillance update reports, which are publicly available, were used to determine cumulative cases, deaths and recoveries.
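In symbols, the surveillance CFR used here, with $D$ COVID-19 deaths and $R$ COVID-19 discharges (patients still in hospital excluded), and the standardised mortality rate for a stratum with population estimate $P$, can be written as:

$$\mathrm{CFR} = \frac{D}{D + R}, \qquad \text{mortality rate} = \frac{D}{P} \times 100\,000$$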
COVID-19 hospital admissions by race and sex in SA, 6 March 2020 - 24 January 2021
Based on the DATCOV database with all the hospital admissions for COVID-19 from 6 March 2020 to 24 January 2021, the cumulative data in Table 1 show the distribution of admissions (percentages) by race, age and gender. The 215 028 persons represent a national hospitalisation rate of 36 per 10 000 during this period. The highest hospitalisation rate was among persons aged 50 - 59 years and the lowest among those aged 0 - 9 years. Complete analysis of admissions data by race is hampered by the large number of hospitalised patients whose race was not reported (n=74 854). Overall, white people were hospitalised at older ages than other race groups, with a median age of 63 years (Fig. 1). Black Africans were hospitalised at younger ages than other race groups; the median age was 52 years. Table 2 and Fig. 2 show the number and proportion of deaths by race and gender. Analysis of mortality patterns was conducted by comparing those aged <50 years and those aged ≥50 years. There was a significant association in mortality patterns between age and race; whites were significantly more likely to die at older ages than black Africans (RR 1.07; 95% CI 1.06 - 1.08). A similar pattern of whites being significantly more likely to die at an older age compared with coloureds was seen (RR 1.44; 95% CI 1.33 - 1.54; p<0.001), and also between Indians and whites (RR 1.59; 95% CI 1.47 - 1.73; p<0.001). Fig. 3 shows the ratio of deaths to hospitalisations by race and gender, drawn from Tables 1 and 2. Overall, the ratio increased with older age. In each age group, this ratio was highest among black Africans and lowest among whites. This difference was more considerable for the older age groups, and among women compared with men.
Analysis of hospitalisation rates by race, age and sex showed variation. In the age group 20 - 29 years, black African females had the highest rate of hospitalisation (Table 1). Table 2 shows overall COVID-19-related mortality rates per 100 000 by race and sex as well as the number of deaths by race and sex. Males of Indian origin (123.9 deaths per 100 000) had a particularly high death rate due to COVID-19, while coloured females (37 per 100 000) had the lowest death rate. Among black Africans and coloureds, females (50.9 per 100 000 and 37 per 100 000, respectively) had a higher death rate than black African and coloured males (41.2 per 100 000 and 41.5 per 100 000, respectively). The opposite pattern was seen among Indians and whites, where males had higher rates of COVID-19 deaths than females.
EC COVID-19 admissions and outcomes in health facilities as of 24 March 2021
EC was chosen to demonstrate that the differential impacts observed at national level were also seen at provincial level.
The cumulative number of cases and deaths was 195 006 and 11 351, respectively, as of 24 March 2021 (Table 3). There was a recovery rate of 94.1%. There were slight gender differences in deaths; of the 5.8% of patients who died, more males (6.3%) than females (5.5%) died. The cumulative number of SARS-CoV-2 hospitalisations was 31 498. The majority of reported hospitalisations (n=22 101; 70.2%) were in public sector health facilities compared with private sector ones (n=9 397; 29.8%). Similarly, more SARS-CoV-2 tests were conducted in public sector health facilities (n=563 744; 62.1%) compared with private sector ones (n=344 020; 39.9%). When stratified by gender, more women than men had SARS-CoV-2 tests (Fig. 4).
At the provincial level, EC had a smaller number of patients discharged alive in the public care sector (n=7 035) than in the private sector (n=13 314) ( Table 3). Of the 29.9% of deaths of hospitalised patients due to SARS-CoV-2-related causes, 75.9% (n=7 143) occurred in public sector facilities. Of the total number of patients, 20 349 (64.6%) were discharged alive and 103 (0.33%) were currently admitted. Of those who were currently admitted, 81 (78.6%) were in general wards, 20 (20.4%) in intensive care units (ICUs) and 2 (1.9%) in high care; 37 hospitalised patients (35.9%) were on oxygen, and 8 (7.8%) were on ventilation.
The CFR differed by age group and sex (Fig. 5); specifically, it differed between the younger and older populations, with the latter having a higher CFR. The SARS-CoV-2-related CFR increased with advancing age. The CFR also differed by gender, with males having a higher CFR than females.
COVID-19 admissions
The findings show differential impacts of COVID-19 in admissions and mortality in population and age groups. Admissions and deaths occurred later in age for whites compared with black Africans. [6] Admission differences may be due to racial disparities in exposure and susceptibility due to disproportionately higher rates of non-communicable diseases and disease severity. [27] Differential impacts of SARS-CoV-2-related deaths were evident in EC. The vulnerability of black Africans to other pandemics in SA (e.g. tuberculosis (TB) and HIV) is well known, [28] and similar observations were made in the USA, where African Americans had disproportionately high infection and mortality rates due to COVID-19. [25,27,29,30] Considered attention to race and socioeconomic barriers is needed.

Proportions of COVID-19 hospitalisations were highest among persons aged 50 - 59 years (22.5%) and lowest among those aged 0 - 9 years (1.8%). The association between older age and hospital admission has been reported previously. [6] Factors that may explain higher admission rates among the elderly include deterioration of health with age, [31] with consequent weakening of immune function (immunosenescence) that hinders pathogen recognition, alert signalling and clearance, [32] and a heightened inflammatory environment (inflammaging) which is caused by an overexcited yet ineffective alert immune system. [32] Consequently, older people with infectious diseases such as COVID-19 have an increased tendency to develop opportunistic infections, [33] which may rapidly develop into severe illnesses. [32] Furthermore, age-related comorbidities are highly prevalent in the ageing population in SA. [34,35] The elderly may also have difficulties in accessing healthcare and in implementing hygiene behaviours. [19] SA has a growing ageing population, with 8.5% of the population being >60 years old. [19] In a study of 2 491 adults hospitalised with laboratory-confirmed COVID-19 in the USA, older age and male sex were associated with admission to ICUs. [36] Findings from China and the USA have also shown that older age was one of the determinants of COVID-19 hospitalisation. [37,38] There is a need to address health needs and access in this population.

The SA healthcare system services an estimated population of >59 million. It is composed of private and public sectors, with 82% of the population accessing healthcare from the public health sector, while 17% use medical aid to access private healthcare. The public health system is overstretched and under-resourced. However, access to private health services depends mainly on the ability to pay. [39] Public healthcare is characterised by challenges such as long waiting hours, rushed appointments, old facilities, poor disease control and prevention practices, and poor quality of care compared with private healthcare. [39,40] These differences may be magnified further by COVID-19. Admission to a public sector facility has been associated with an increased risk of COVID-19 mortality. [6] The present study revealed that higher proportions of black Africans than other racial groups were admitted to public health facilities. Of the majority (>80%) of South Africans who use the public healthcare system, most are black, while most of the <20% who use private healthcare are white. [41] Approximately 50% of health resources are enjoyed by 16% of South Africans (mainly whites).
[42] Approximately 73% of South Africans, mostly black Africans, did not have medical aid in 2018 and therefore cannot afford or access private medical care. [41] Black Africans have limited resources (assets, wealth and medical insurance), which may affect access to good healthcare or delay seeking healthcare. [40,43] Insurance coverage has been shown to be strongly associated with better health outcomes, [44] yet in 2018, only 10% of black Africans belonged to medical aid schemes that would afford them access to private healthcare. [45] This inequity implies that COVID-19 worsens existing preventable health differences that remain persistent in SA and increases national levels of poverty and malnutrition. [46,47] Gender differences in rates of COVID-19 hospital admissions were also observed. Although women were admitted at younger ages than men, they tended to die at older ages. The reasons for this finding have been alluded to in an earlier section of the discussion.
Mortality
The results show that COVID-19 in-hospital mortality was highest in black Africans, followed by coloureds, Indians and whites. The differences may be attributed to preexisting inequalities, as black Africans have a disproportionately high burden of unemployment and poverty (such as no education, no income, no shelter and no food) compared with other population groups in SA. [19] These conditions have consistently led to higher mortality among black Africans (68.5%) compared with whites (9.5%), coloureds (7.7%) and Indians (1.8%). [48] The high mortality of black Africans due to other pandemics in SA, such as TB and HIV, is well known. [19] Similar observations were made in the USA, where African Americans had disproportionately high infection and mortality rates due to COVID-19. [49,50] COVID-19 deaths were also higher among the elderly, as expected. In 2009, 151 700 - 575 400 deaths among H1N1 admissions were of older patients (≥50 years). [51,52]

The results of the present study show higher mortality among males than females. Similarly, men (6.3%) had a higher CFR than women (5.5%). Men may delay visiting health facilities because they generally have poor health-seeking behaviour, which may increase disease severity. They may also have underlying health conditions of which they are not aware. Men are also known to have more risky health behaviours, such as tobacco and alcohol use, than women. [53] These risky behaviours are associated with an increased risk of cardiovascular disease (CVD), and CVD is reported to be higher in men than women. [54] Several factors may explain the lower mortality in women. Generally, women are known to have better health-seeking behaviours than men, [55] which could increase their opportunity to be screened for COVID-19 and diagnosed early and not become severely ill. Specifically, women are traditionally responsible for the family's health and are likely to be knowledgeable about pathological symptoms. As such, they have been reported to be more likely to use healthcare services than men. It is important to note that although the mortality rate was higher for men than women, females' cumulative incidence risk has consistently remained high since the beginning of the pandemic. This risk may be attributed to gender inequalities such as women being in service occupations and vulnerabilities in domestic and local chores; many women are compelled to be part of super-spreader events as they need to cook and serve food at such events for financial security. [56] These findings provide insight into gender-related differences in healthcare or disease risk. Analysis and documentation of these differences are essential for improving service delivery and assessing the dual goals of improving health status and reducing health inequalities.
It should be noted that the in-hospital mortality rate does not represent total COVID-19 mortality, as many deaths occur outside healthcare facilities. For example, in EC, >40% of persons who died outside healthcare facilities tested positive for SARS-CoV-2.
Study strengths and limitations
The present study should be interpreted within the context of its strengths and potential limitations. It contributes to the understanding of COVID-19 impacts in SA, as few studies have highlighted the socioeconomic inequalities that shape COVID-19 impacts. Some patients who were not admitted to hospital may have been admitted to other institutions. Some discharged patients may have been readmitted elsewhere with critical illness or could have died after discharge, resulting in under-and/or overestimating the study's outcomes (hospitalisation and mortality). The period for the data used in this study differed (for DATCOV, we used data from 6 March 2020 to 24 January 2021, and for EC we used data as of 24 March 2021).
Conclusions
This review revealed the heterogeneous nature of COVID-19 impacts in SA and highlights population groups that are more vulnerable and prone to severe COVID-19 impacts. Existing socioeconomic inequalities seem to shape the effects of COVID-19, both directly and indirectly. COVID-19 may severely impact black Africans, marginalised groups and groups with low socioeconomic status (SES). These differential impacts call for considered attention to mitigating the health disparities among black African populations, including urgent implementation of the comprehensive National Health Insurance, which would expand health resources and investments to under-served communities and fulfil the constitutional right of every SA citizen to health and the National Development Plan key objective, 'Health care for all'. [57] These unique impacts are not often spoken about, yet they have major public health implications for the current and subsequent waves of COVID-19. Understanding the differential impacts of the COVID-19 pandemic on health services, health outcomes and equity is critical, not only for the attainment of equity and social justice but also for planning and adapting public health responses, identification and application of risks and evidence-based strategies, and reduction of the burden of disease, and for advocacy purposes. We call for deliberate efforts to collect SES data in the country's surveillance systems to design targeted, focused and feasible interventions.
"year": 2021,
"sha1": "8d194140e782d01a0de7fcadfabfde40beb2e342",
"oa_license": "CCBYNC",
"oa_url": "http://www.samj.org.za/index.php/samj/article/download/13428/9952",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1b07e57ec59d480c3ccf73ba42dfb92e7837ea3f",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Intrapulpal Thermal Changes during Setting Reaction of Glass Carbomer® Using Thermocure Lamp
Objectives. To measure the temperature increase induced during thermocure lamp setting reaction of glass carbomer and to compare it with those induced by visible light curing of a resin-modified glass ionomer and a polyacid-modified composite resin in primary and permanent teeth. Materials and Methods. Nonretentive class I cavities were prepared in extracted primary and permanent molars. Glass carbomer (GC) was placed in the cavity and set at 60°C for 60 s using a special thermocure lamp. Resin-modified glass ionomer (RMGIC) and polyacid-modified composite resin (PMCR) were placed in the cavities and polymerized with an LED curing unit. Temperature increases during setting reactions were measured with a J-type thermocouple wire connected to a data logger. Data were examined using two-way analysis of variance and Tukey's honestly significant difference tests. Results. The use of GC resulted in temperature changes of 5.17 ± 0.92°C and 5.32 ± 0.90°C in primary and permanent teeth, respectively (p > 0.05). Temperature increases were greatest in the GC group, differing significantly from those in the PMCR group (p < 0.05). Conclusion. Temperature increases during polymerization and setting reactions of the materials were below the critical value in all groups. No difference was observed between primary and permanent teeth, regardless of the material used.
Introduction
Heat production is the most severe stress generated in the pulp by various operative procedures [1]. Thermal trauma may be induced by cavity preparation, exothermic polymerization reactions of resin-based restorative materials [2], and exothermic acid-base setting reactions of glass ionomerbased restorative materials [3] or from various light sources used for curing restorative materials [4,5] and may eventually damage pulp tissue irreversibly if it is not controlled [6,7].
A classic animal study by Zach and Cohen [6] established a threshold temperature for irreversible pulpal damage caused by the application of external heat to a sound tooth; a 5.5°C increase in intrapulpal temperature induced necrosis in 15% of pulp samples tested. Several in vitro studies have shown that various light sources used during the polymerization of resin-based restorative materials cause such increases in pulp temperature [4,5]. Improvements in restorative materials and techniques, together with increased demand for aesthetic restorations, have led to the introduction of a wide range of dental materials, including compomer, resin-modified glass ionomer cements, and self-adhering composites. These materials contain variable proportions of resin matrix. As the exothermic reaction is proportional to the amount of resin available for polymerization and the degree of conversion of carbon-carbon double bonds, these materials may be expected to show different degrees of temperature increase when cured by the same light unit. In a 2005 study, Al-Qudah et al. [8] attempted to quantify the temperature increase caused by the light source alone. The variation in maximum temperature increases among these materials may be correlated with their resin content. The authors demonstrated exothermic temperature increases during the setting of resin-modified glass ionomer (RMGIC) and polyacid-modified composite resin (PMCR) [8]. Polymerization of photo-activated resin composites can result in an intrapulpal temperature increase due to the exothermic reaction process and the energy absorbed during irradiation [9,10].
Glass carbomer (GC), another newly developed material, is a glass ionomer-based restorative material. GC is distinguished from glass ionomer by its nanosized powder particles and fluorapatite crystals. The addition of fluorapatite was based on the belief that glass ionomers turn into fluorapatite-like material over time [11]. The advantages of GC over conventional glass ionomer cements include significantly better mechanical and chemical properties (e.g., strength, shear, and wear) [12][13][14]. The clinical application of GC is similar to that of conventional glass ionomer cement, except that heat application (60°C, 60 s) with a special thermocure lamp is recommended during the setting reaction. The beneficial effects of heat on glass ionomers have been documented in recent studies [14][15][16]. However, the effects of these materials on intrapulpal temperature increase during the setting reaction are not known.
The objectives of this study were (1) to measure the temperature increase induced during thermocure lamp setting reaction of GC and compare it with those induced by visible light curing of an RMGIC and a PMCR and (2) to compare temperature increases in primary and permanent teeth during setting and curing of these three materials. We hypothesized that temperature increases in pulp chambers during the setting of GC, RMGIC, and PMCR materials would be similar and that temperature increases in the pulp chambers of primary and permanent teeth would be similar.
Materials and Methods
In this study, the temperature increases induced during thermocure lamp setting reaction of a glass carbomer (Glass Fill) and induced by visible light curing of a resin-modified glass ionomer cement (Fuji II LC) and a polyacid-modified composite resin (Dyract AP) in primary and permanent teeth were investigated (Table 1).
Nonretentive class I cavities were prepared in extracted, caries-free human primary and permanent second molars. One mm dentine thickness, measured with a digital micrometer (Mitutoyo, Japan), was left between the pulp chamber and occlusal cavity floor. The roots of each tooth were ground away, and the remains of the pulpal tissue were removed. The pulp chamber was then cleaned of all organic remnants using 5.25% sodium hypochlorite solution.
The same procedure was repeated for all groups; three material groups (GC, RMGIC and PMCR) were prepared. All measurements were performed on the same primary and permanent teeth to limit the effects of differences in tooth structure. Each tooth was attached to a novel apparatus, designed originally by Sari et al. [17] and customized for this study, to simulate pulpal blood microcirculation (Figure 1). A standard infusion set (Gemed Medical Co., Istanbul, Turkey) with a 21-gauge (green) injector needle was attached to a distilled water bottle (1000 mL). The length of the injector needle was shortened to 5 mm, and the tip of the needle (1 mm in length) was placed on a stainless-steel metal base plate through a drilled hole and used for water inflow. Another needle tip, which was connected to a freestanding infusion tube, was placed adjacent to the first tip and used for water outflow. The fluid flow rate of the system was set and kept constant at 0.026 mL/min using a digital infusion flowmeter (SK-600II infusion pump; SK Medical, Shenzhen, China), which was attached to the system. Room temperature distilled water was used to simulate blood and blood pressure (15 cm H2O) in the pulp (Figure 1). Light curing glass ionomer cavity-liner cement (glass liner; WP Dental GmbH, Barmstedt, Germany) was used to fix the samples onto the stage of the apparatus. A narrow hole providing access to the pulp chamber was drilled into the distal surface of each crown using a diamond bur, and a J-type thermocouple wire (0.36 mm diameter; Omega Engineering, Stamford, CT, USA) was inserted through this aperture into the pulp chamber. A silicone heat-transfer compound (ILC P/N 213414; Wakefield Engineering, Beverly, MA, USA) was applied to the tip of the thermocouple wire, and the wire was fixed in a position that maintained contact with the pulp chamber using light curing calcium hydroxide cement (Calcimol LC; Voco GmbH, Cuxhaven, Germany). The same cement was used to seal the gap around the thermocouple wire, preventing leakage from the system. RMGIC and PMCR were placed into the cavities and polymerized with an LED curing unit (Ultradent, USA) according to the manufacturer's instructions (Table 2). GC was placed in the cavities and cured for 60 s at 60°C with a special thermocure lamp (CarboLED, 1400 mW/cm²; GCP Dental, Netherlands). All application procedures were performed according to the manufacturers' instructions. No acid etching or dentine bonding was performed to enable easy removal of the restorative materials, thereby maintaining constant cavity size during repeated removal procedures, as suggested by Hannig and Bott [10]. The procedures were applied to primary and permanent teeth. During polymerization and setting, temperature increases inside the pulp chambers were measured with a thermocouple connected to a data logger (XR440-M Pocket Logger; Pace Scientific, NC, USA) and a computer. The data logger was set to record one sample every 2 s for the duration of recording, which started with light application and continued until the temperature began to decrease. Data collection was monitored in real time, and data in tabular and graphic forms were transferred to a computer. Differences between initial and highest temperature readings (ΔT) were determined.
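Given the 2 s sampling interval, the peak rise ΔT can be extracted from a logged trace with a few lines of code. The following is a small illustrative Python sketch; the readings shown are hypothetical, not data from this study:

```python
def delta_t(samples):
    """Peak temperature rise (°C) over the baseline (first) reading.
    `samples` is the sequence of pulp-chamber readings taken every 2 s,
    starting at light/heat application."""
    return max(samples) - samples[0]

# hypothetical trace, one reading every 2 s
readings = [36.1, 36.4, 37.2, 38.6, 40.3, 41.3, 41.0]
print(f"Delta T = {delta_t(readings):.2f} °C")  # 5.20 °C
```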
Statistical Analysis.
Values from all groups were examined using two-way analysis of variance, after the results of Levene and Shapiro-Wilk tests had confirmed equality of variance and the assumption of normality, respectively (p > 0.05). Then, Tukey's honestly significant difference test for multiple comparisons was applied to determine further differences among groups. Results are presented as means, minimums, maximums, and standard deviations. The significance level was set to p < 0.05 for all tests. All computations were performed using the SPSS program for Windows (version 20; SPSS, Inc., Chicago, IL, USA).
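The same analysis can be reproduced outside SPSS. The sketch below uses Python with statsmodels and synthetic data generated around the group means reported in Table 3; the sample sizes, noise level and random seed are our own assumptions, not the study's raw data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
means = {"GC": 5.2, "RMGIC": 4.1, "PMCR": 3.2}   # rough group means from Table 3

# hypothetical long-format data: one temperature rise per specimen
rows = []
for mat in means:
    for tooth in ["primary", "permanent"]:
        for _ in range(10):                      # assumed sample size per cell
            rows.append((mat, tooth, rng.normal(means[mat], 0.9)))
df = pd.DataFrame(rows, columns=["material", "tooth", "dT"])

# two-way ANOVA: material, tooth type, and their interaction
model = ols("dT ~ C(material) * C(tooth)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD for pairwise comparisons between materials
print(pairwise_tukeyhsd(df["dT"], df["material"], alpha=0.05))
```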
Results
The means and standard deviations of the temperature rise in the primary and permanent teeth for all tested materials are shown in Table 3. Temperature changes in permanent and primary teeth with PMCR were 3.04 ± 0.64°C and 3.26 ± 0.77°C, respectively (p > 0.05). Temperature changes in permanent and primary teeth with Fuji II LC were 3.90 ± 0.96°C and 4.22 ± 1.29°C, respectively (p > 0.05). The use of GC and the CarboLED lamp resulted in temperature changes in permanent and primary teeth of 5.17 ± 0.92°C and 5.32 ± 0.90°C, respectively (p > 0.05).
Temperature increases were the greatest in the GC group. Two-way analysis of variance revealed highly significant differences between the GC and PMCR groups in both permanent and primary teeth (p < 0.001). Results from the PMCR and RMGIC groups were similar (p > 0.05).
The smallest temperature increases were observed in the PMCR group. No difference was observed between primary and permanent teeth, regardless of the material used (p > 0.05).
Discussion
In the present study, temperature increases induced during thermocure lamp setting reaction of GC and visible light curing of an RMGIC and a PMCR material were evaluated in the pulp chambers of primary and permanent teeth. The first study hypothesis was partially supported, as increases in the GC group differed from those in the PMCR group in primary and permanent teeth. The second study hypothesis was supported because temperature increases in the pulp chambers of primary and permanent teeth were similar.
Pediatric dental clinics provide restorative treatments for primary and permanent teeth. Thus, clear definition of all structural differences between tooth types is important, especially when restorative materials are used to improve the quality of primary teeth. For this reason, we used primary and permanent teeth in this study. Pulp microcirculation is an important factor in the regulation of intrapulpal temperature when heat is transferred from an external thermal stimulus to the dentine-pulp complex [18,19]. Lack of microcirculation has been shown to cause greater changes in intrapulpal temperature [20]. Sari et al. [17] designed a novel pulp-blood microcirculation apparatus and used water circulation in the pulp chamber to simulate in vivo conditions. We also used a pulp-blood microcirculation apparatus in the present study. Dental pulp is a highly vascularized tissue, and its viability may be compromised during cavity preparation and restorative procedures [21]. These procedures can increase the intrapulpal temperature and damage the pulp tissue [22]. Zach and Cohen [6] studied the effects of heat on pulp tissue and found that a 5.5°C increase in intrapulpal temperature was associated with irreversible pulpitis in 15% of teeth tested in rhesus macaques. When the intrapulpal temperature rose by 11.1°C, 60% of teeth became necrotic [6]. In the present study, temperature increases in all groups were less than 5.5°C, the estimated critical temperature for pulp damage. To protect vital pulp from thermal damage, excess heat must be distributed or removed from the area. The major limitation of in vitro studies is the lack of pulp-blood microcirculation, which acts as a coolant by transferring excess heat away from the pulp chamber. In this study, we used a pulp-blood microcirculation apparatus to simulate the cooling effect on pulp tissue under clinical conditions.
In restorative dentistry, thermal changes have been evaluated using several approaches, such as cavity preparation, light curing, laser application, bonding, and debonding [4,5,23]. The thermal effect on pulp tissue depends on variations in the thickness of enamel and dentine on the pulp chamber wall [24], the dentine type [25,26], and the choice of resin-based restorative material and light curing unit [4,5]. The type and duration of light application during curing seem to be the most crucial factors. Familiarity with the characteristics and advantages of light sources used for curing is thus needed to gain a suitable perspective in aesthetic dentistry [27]. According to Lloyd et al. [28], the most important factor causing a temperature increase during composite photoactivation is the heat developed by the light curing unit. Yazici et al. [5] suggested that LED units reduce the risk of pulp injury because they increase the temperature less than halogen units do. The results of that study suggest that plasma-arc and LED curing units cause less temperature increase in the pulp chamber; however, assessment of the physical and mechanical properties of cured resin composites is also important [5]. For these reasons, we used an LED curing unit for the photo-polymerization of two aesthetic restorative materials and a thermocure (CarboLED) lamp during the setting reaction of GC. The CarboLED lamp was developed for thermal curing to optimally enhance the qualities of GCP glass carbomer products. The clinical application of GC is similar to that of conventional glass ionomer cements, except that heat application is recommended during the setting reaction. Heat can be provided by a special light curing device during the setting reaction of Glass Fill. The manufacturers of GC recommend the use of the CarboLED lamp for light curing this product and claim that this device achieves the best results. The beneficial effects of heat on glass ionomers have been documented in recent studies [14][15][16]. Higher temperatures during setting have been found to shorten the setting and working times [15,16]. However, our results indicate that the use of the CarboLED lamp produces an exothermic setting reaction that raises the temperature of the pulp tissue, thereby increasing the risk of pulpal damage. In our study, the temperature rise with GC was closest to the threshold for irreversible pulpal damage.
Resin-modified glass ionomer cements and polyacid-modified resin composites were developed to overcome the problems of traditional restorative materials, such as moisture sensitivity and reduced early strength, while maintaining the clinical advantages of command setting, adhesion to tooth structures, adequate strength under occlusal loading, fluoride release, and aesthetics [3]. Taking into account the advantages and clinical characteristics of GC, it appears to be an extremely suitable alternative to conventional restorative materials [14]. It may also have a particular role in the restoration of primary teeth.
The setting reaction of RMGIC has a dual mechanism. The usual glass ionomer acid-base reaction begins when the material is mixed, and this is followed by a free radical polymerization reaction, which may be generated by photoinitiators and/or chemical initiators [29].
Restorative materials such as PMCR can be hardened only through photo-polymerization. This setting reaction has two stages. The first stage is dominant free radical polymerization, identical to that occurring in resin composite. Upon light curing, the polymerizable molecules are interconnected in a three-dimensional network that is reinforced by the filler particles included in the material. After initial setting, with the addition of water, Dyract AP contains all ingredients needed to initiate an ionic acid-base reaction, as occurs with glass ionomers [30]. Setting reactions of all recently marketed compomers are also based on dominant light-initiated free radical polymerization, followed by an acid-base reaction [3].
Al-Qudah et al. [8] suggested that the resin content of dental materials is an important factor affecting the temperature increase. Greater filler content leaves a smaller proportion of resin available for polymerization and is therefore associated with a smaller temperature increase; the fillers themselves are chemically inert and do not contribute to the heat of reaction. In our study, we used PMCR and RMGIC. According to the manufacturers, PMCR has 73% filler content and RMGIC has 66%. Temperatures were higher in specimens prepared with RMGIC than in those prepared with the compomer, owing to its dual-cure setting mechanism and lower filler content.
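As a rough arithmetic illustration of this point (a simplification, since it assumes the filler and resin fractions sum to 100% of the material), the resin fraction available for the exothermic polymerization can be estimated from the manufacturers' stated filler contents:

\[ \text{PMCR: } 100\% - 73\% = 27\% \text{ resin} \qquad\quad \text{RMGIC: } 100\% - 66\% = 34\% \text{ resin} \]

The larger resin fraction of RMGIC leaves more material to polymerize exothermically, consistent with the higher temperatures measured in that group.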
Light curing units for dental applications were developed to initiate photo-polymerization of resin composites, adhesives, sealants, and resin cements [31,32]. The temperature rise that accompanies visible light curing of resin materials is caused both by the exothermic reaction itself and by the radiant heat from the light source. In addition, various factors can affect the extent of the temperature increase during polymerization: the light intensity of the curing unit, the remaining dentine thickness, the composition of the restorative material, the distance between the curing unit and the material surface, the position of the curing unit, and the exposure time [33-35]. Among these, the light intensity of the curing unit emerges as an important determinant of the intrapulpal temperature rise during polymerization. In the current study, we used two different light curing units according to the manufacturers' instructions. The highest temperature increase was observed in the GCP Glass Fill group, which is cured with a dedicated thermocure lamp. The reason for this large intrapulpal temperature rise is probably the greater power output of the CarboLED lamp, which at 1400 mW/cm² applied for 60 s is considerably greater than that of the other lamp (Table 2).
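For context, the radiant exposure (energy dose) delivered by the CarboLED lamp follows directly from the output and exposure time stated above (simple arithmetic, no additional assumptions):

\[ H = E \cdot t = 1400\ \mathrm{mW/cm^2} \times 60\ \mathrm{s} = 84\,000\ \mathrm{mJ/cm^2} = 84\ \mathrm{J/cm^2} \]

This comparatively large energy dose is consistent with the greater intrapulpal temperature rise observed in the GCP Glass Fill group.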
In this study, temperature increases in primary and permanent teeth prepared with the three tested materials were compared. Some chemical and morphological properties of the dentine differ between primary and permanent teeth. Primary teeth have fewer dentinal tubules, which are smaller in diameter and located 0.4-0.5 mm from the pulpal surface, and their peritubular dentine is two to five times thicker than in permanent teeth [36,37]. In the present study, temperature increases were greatest in primary teeth in all groups, but the differences from permanent teeth were not significant. However, the primary teeth used in the present study were nearing exfoliation: they had been in occlusion for about 8-9 years, which may have reduced the permeability of the primary dentine through the apposition of additional peritubular dentinal matrix [36]. Dentinal tubules may become partly or completely obturated by growth of the peritubular dentine [38]. These structural changes may have affected the measured temperature increases.
Under clinical conditions, the thickness of the remaining dentine may be reduced. The potential risk of pulp damage is expected to be greater in deep cavities with thin layers of residual dentine, especially in primary and young permanent teeth. In such cases, a simple and highly effective way to protect the pulp is to apply a cement base or lining material.
Although the actual critical temperature that causes pulp damage remains controversial, increases in pulp temperature should be minimized during the polymerization of resin-based dental restorative materials to avoid the risk of pulp damage.
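A commonly cited reference point in the dental literature, from the classic in vivo study by Zach and Cohen, is an intrapulpal temperature rise of approximately

\[ \Delta T_{\mathrm{crit}} \approx 5.5\,^{\circ}\mathrm{C} \]

above baseline, beyond which irreversible pulpal damage becomes likely.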
Conclusions
Within the limitations of this study, the following conclusions can be drawn: (1) The use of glass carbomer in combination with the CarboLED lamp resulted in the greatest intrapulpal temperature increases in primary and permanent teeth.
(2) The smallest temperature increases were observed in teeth treated with polyacid-modified composite resin.
(3) No significant difference in temperature increase was observed between primary and permanent teeth, regardless of the material used.
(4) Temperature increases during polymerization and setting of the materials were below the critical value in all groups.
"year": 2016,
"sha1": "9c66b63255fdaf52b97fae8f902866e684a5e613",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2016/5173805",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6b34810067186ae0b5bcf95947cca9a4663b0d2f",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |