| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
251263940
|
pes2o/s2orc
|
v3-fos-license
|
On the Freeze-Thaw Instability of an Open Pit Slope Using Three-Dimensional Laser Scanning and Numerical Simulation
In order to reveal the instability law of open-pit mine slopes in high-cold and high-altitude areas, the slope structural planes are first scanned by three-dimensional (3D) laser scanning technology, and the point cloud data are obtained to realize intelligent identification of rock mass structural planes. The geometric parameters of the rock mass are statistically analyzed, and a physical structure model is established. Then, we carry out freeze-thaw cyclic tests on granite to obtain the corresponding mechanical parameters. Finally, according to the obtained mechanical parameters, we use RS2 finite element software to calculate the shear strength of the structural planes and joints by the generalized Hoek-Brown criterion and the Barton-Bandis criterion, respectively, establish the geomechanical model, and use the finite element strength reduction method to calculate the safety factor of the slope and judge its instability. The results show that the physical and mechanical properties of granite deteriorate with an increasing number of freeze-thaw cycles. Under the action of the freeze-thaw cycle, the pore water in the rock mass freezes and forms frost heaving force, and the volume expansion leads to further development of joint fissures. The strength of the rock slope decreases gradually with the number of freeze-thaw cycles, and the safety factor of the slope decreases continuously, showing that repeated freeze-thaw alternation makes the stability of the mine slope progressively worse. The research results are helpful to prevent the occurrence of slope disasters in advance and are of great significance for effectively and safely managing slope stability, open-pit treatment, and environmental treatment.
Introduction
With the development of science and technology, 3D laser scanning technology and finite element software are widely used in the analysis of rock slope stability [1][2][3]. At the same time, in order to satisfy the growing production demand, China's mineral resources development has gradually shifted to the northwest region, which is rich in mineral resources but has a poor natural environment, with weak infrastructure, a fragile ecological environment, and extremely cold winters. The temperature variation caused by the alternation of day and night and by seasonal change has a great impact on the mechanical properties of rock mass. Repeated freeze-thaw cycles will expand and multiply rock fissures, deteriorate the physical and mechanical properties of the rock, and reduce the strength of the slope rock mass, resulting in sliding of the slope rock mass [4][5][6]. These issues have become increasingly prominent. Therefore, this paper puts forward research on the safety and stability evaluation of open pit slopes based on 3D laser scanning and numerical simulation.
3D laser scanning is the latest development in geological mapping [7] and has been proved to be an effective noncontact tool. It is used to collect rock mass information. Its application fields include surface geological data collection [8,9], slope stability and displacement monitoring [10,11], and 3D rock mass model creation [12,13]. It has the advantages of high density, maneuverability, high precision, and noncontact operation. At present, many experts and scholars have carried out a series of scanning analyses on rock slope engineering with the help of this technology. Chen et al. [14] compared the difference between 3D laser scanning technology and traditional window mapping technology. The results show that the average dip/dip direction discrepancy between the two methods is 1.5°/16°, which is attributed to the large amount of data collected by 3D laser scanning technology. It is proved that 3D laser scanning technology has the advantages of higher efficiency and accuracy in underground mines. Wang et al. [15] used 3D laser scanning technology to scan the fragmentation of blast muck piles and provided high-precision point cloud data for calculating BFMP. They proposed an improved VCCS algorithm based on discrete characteristics. The results show that when the size of a crushing block is 0.1 m~0.5 m, the accuracy of the calculation results is about 80%, and when the size of a broken block exceeds 0.5 m, almost all BFMP can be calculated correctly. Kromer et al. [16] used a ground-based 3D laser scanning technology to monitor the Séchilienne landslide in France for six weeks to detect the flow, displacement, and prefailure deformation of discrete collapse events. They then proposed an automatic ground laser scanning system with near real-time automatic change detection and processing functions. Ma et al. [17] introduced 3D laser scanning technology into the overall deformation monitoring of slope surfaces in landslide physical model tests and comprehensively analyzed the deformation characteristics of landslides in different evolution stages through an example landslide physical model test. The results show that, on the premise of ensuring high-precision feature point monitoring, the overall deformation and displacement of the model slope can be obtained with the help of 3D laser scanning technology.
Similarly, RS2 software can carry out fluid-structure coupling analysis and dynamic analysis, can automatically generate finite element meshes such as triangles, and offers a diverse range of material models. It is also widely used in numerical simulation under cyclic loading [18]. In order to analyze the yield failure of an open pit mine in Minas Gerais, Brazil, Pereira and Lana [19] established many representative slope hypothesis models using RS2 finite element software, carried out elastic and plastic simulations, and evaluated the yield failure mechanism. Liu et al. [20] used numerical simulation software to study the main causes of coal mine roof accidents. The results show that as the coal seam continues to advance, the maximum settlement displacement remains basically unchanged, the settlement displacement curve presents an asymmetric flat-bottom distribution, and the stress concentration in front of the coal wall is the source of the abutment pressure. Silva and Lana [21] used RS2 software to study the failure mechanism of flexural buckling that occurred in the Pau Branco Mine, of the Vallourec and Mannesman Group, in 2002. Through back analysis of the failure mechanism, they obtained representative values of the in situ stress state, normal stiffness modulus, and shear stiffness modulus of the foliation structure. The results are of great significance for further analysis of the stability of the phyllite slopes of the Pau Branco Mine. Arslan et al. [22] used RS2 finite element software to analyze the stability of a marble stope under static and dynamic conditions, used the shear strength reduction (SSR) technique built into the software to determine the failure mechanism, and put forward suggestions and necessary controls to ensure the stability of the slope. Adach-Pawelus [23] used RS2 finite element software to conduct numerical simulation in the plane deformation state, combining seismic activity analysis and numerical simulation methods to illustrate the impact of mine residues on the possibility of seismic events. The results show that undisturbed rock remnants may have a negative impact on earthquake and rock burst disasters in the mining area.
Even though 3D laser scanning technology and RS2 finite element software have made great progress in application, there are few studies on high-cold and high-altitude areas that combine these two technologies. In view of this, this paper uses an Optech Polaris LR 3D laser scanner to scan an open pit slope in a cold area, obtains the point cloud data, realizes the intelligent identification and information extraction of rock mass structural planes, obtains the rock mechanical parameters through freeze-thaw cycle tests, and establishes the slope mechanical model combined with RS2 finite element software to calculate the safety factor of the slope under different numbers of freeze-thaw cycles. This has important guiding significance for an in-depth understanding of the law of slope instability of open pit mines in high-cold and high-altitude areas and for preventing slope disasters and accidents in advance [24].
Identification and Extraction of Rock Structure Planes
Study Area. The research object of this paper is located in the Beizhan iron mine in Hejing County, Korla, Xinjiang, Northwest China. Permanent piedmont glaciers are located to the south and west of the mining area, and the altitude of the ore body is 3450-3723 m. The mining area lies in an alpine area with perennial snow, and the climate is extremely cold. The monthly average temperature from January to April and from September to December is below zero, and the minimum temperature can reach -40°C. The temperature rises from May to August, generally to 5-15°C, with a maximum of about 20°C. The temperature at night usually falls to about -3°C, so the temperature difference between day and night in the mining area is large.
The mining area has frequent rain and snow throughout the year. It is the local rainy season from July to August, and it begins to snow in early October. Therefore, it is only suitable to carry out appropriate field operations from May to September every year. The deposit is located in the river valley, and the terrain of the mining area is conducive to natural drainage. The overall strike of the ore body is 97°, and the dip angle is 47°-74°. According to the prediction of landform, meteorology and hydrology, geological structure, human activities, and other conditions in the mining area, the risk of geological disasters such as landslide and debris flow is small, but the risk of collapse geological disasters caused by local high and steep mountains is high. The study area is shown in Figure 1.
3D Laser Scanning Technique. The Optech Polaris LR 3D laser scanner is used to conduct overall scanning and fine scanning of the research area, respectively. The instrument meets the long-distance fine survey requirements of the project, as shown in Figure 2. Its parameters are shown in Table 1.
Rock Structural Plane Geometric Characteristics.
In order to gain a preliminary understanding of the whole mining area, 11 3D laser scans were carried out, covering the whole mining pit, as shown in Figure 3(a). 3D laser scanning was carried out at each point to obtain the point cloud data, and after splicing and merging, a complete 3D geometric model was obtained, as shown in Figure 3(b). At the same time, fine scanning was carried out in the mining area. Scanning point S8 is located on the east side of the pit and includes 399020 points. The scanning range is 2044.0957 m², and the maximum size of the scanning area is 51.2365 × 61.8522 × 29.0181 m. By combining the 3D laser scanning technology with a digital camera, the pixels of the picture are matched with the point cloud, and the gray or color information of the point cloud data can be obtained. The color information is helpful in displaying the scanning results, which are shown in Figure 3(c).
Processing of Point Cloud Data. The amount of point cloud data obtained by 3D laser scanning of the structural planes is large and dense, which would seriously affect the computation speed. Therefore, in order to implement the algorithm programmatically, it is necessary to spatially partition the point cloud data and grid them. In this study, for point cloud data with high scanning accuracy, the 3D difference method is adopted; its impact on accuracy is negligible. The results are shown in Figure 4. In order to obtain the optimal threshold, it is necessary to select a discrimination index for intelligent structural plane recognition. In this paper, the point normal vector is used as the discrimination index. The next step is to determine the flatness detection threshold of the point cloud.
According to these rules, the smaller the value of ξ1, the more points are included in the BORDER matrix, which means that more points are regarded as boundary points and do not participate in the subsequent image segmentation algorithm. Conversely, when the value of ξ1 is too large, the number of points in the BORDER matrix decreases, which means that the edge recognition effect is weakened and some boundary points will not be recognized effectively. Therefore, according to the actual situation of the mining area, the flatness detection threshold ξ1 is set to 20°.
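The flatness test itself is not spelled out above. A minimal sketch of one common way to realize it is given below: each point's normal is estimated by PCA over its k nearest neighbours, and a point is flagged as a boundary point when its normal deviates from the mean neighbour normal by more than ξ1. The neighbourhood size k and this particular deviation test are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_boundary_points(points, k=15, xi1_deg=20.0):
    """Estimate a normal for every point by PCA over its k nearest neighbours,
    then flag the point as a boundary point when its normal deviates from the
    mean neighbour normal by more than xi1 degrees (assumed flatness test)."""
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)        # first neighbour is the point itself
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)              # 3x3 covariance of the neighbourhood
        _, v = np.linalg.eigh(cov)
        normals[i] = v[:, 0]                    # eigenvector of the smallest eigenvalue
    normals[normals[:, 2] < 0] *= -1            # orient all normals upward
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    border = np.zeros(len(points), dtype=bool)
    cos_xi1 = np.cos(np.radians(xi1_deg))
    for i, nb in enumerate(idx):
        mean_n = normals[nb[1:]].mean(axis=0)
        mean_n /= np.linalg.norm(mean_n)
        border[i] = np.dot(normals[i], mean_n) < cos_xi1
    return normals, border
```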
Selection of Optimal Threshold. The regional growth threshold ξ2 is varied between 5° and approximately 40° in intervals of 5°, while the area threshold is kept constant at 0.5 m². When the regional growth threshold is too small, the growth criterion is too severe, many point cloud data are not identified as structural planes, the regional division becomes too fragmented, and few structural planes are identified. When the regional growth threshold is too large, the growth criterion is too loose, part of the point cloud data located in uneven areas is also classified as structural planes, too many structural planes are identified, and adjacent structural planes cannot be distinguished. According to visual judgment, when the regional growth threshold is between 20° and 30°, the recognition effect is most realistic. The results of structural plane recognition are shown in Figure 5. Similarly, the area threshold W1 is varied between 0.1 m² and approximately 10 m² in intervals of 0.1 m², with the regional growth threshold kept constant at 20°. When the area threshold W1 is greater than 0.1 m², the recognition effect is more practical. When the area threshold is too large, more structural planes are eliminated, too few structural planes remain, and later extraction of structural plane information is hindered. The result of structural plane recognition is shown in Figure 5(d). In summary, the growth threshold is set to 20° and the area threshold to 0.1 m².
Acquisition of Structure Plane Occurrence Information.
The least squares method is used to fit all nodes of each rock mass structural plane obtained above, from which the equation of the best-fitting plane can be obtained. Assuming the spatial coordinates of the n points on the structural plane are (x1, y1, z1), (x2, y2, z2), ..., (xn, yn, zn), the fit can be written as a matrix equation, and we need to find the coefficient vector D that minimizes φ(D) = ‖XD − Z‖, where X collects the point coordinates and Z the corresponding elevation values. Finally, the normal vector and plane equation of the structural plane are obtained to complete the fitting of the structural plane.
This study mainly calculates the dip direction and dip angle in the occurrence information of the structural planes. Let the normal vector of a structural plane be (x0, y0, z0). According to the working principle of laser emission in 3D laser scanning, only structural planes that are well exposed on the slope can be scanned, so z0 > 0. In the geodetic coordinate system, due east and due north are defined as the positive directions of the X and Y axes, respectively, and the Z axis points in the elevation direction. The dip direction θ and dip angle δ of a rock mass structural plane can therefore be expressed in terms of the normal vector. The occurrence information of the point cloud data of the rock mass structural planes is shown in Figure 6.
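The explicit expressions for θ and δ are not reproduced above. The following minimal sketch illustrates the computation described: a least-squares plane fit to the points of one structural plane, followed by conversion of the upward unit normal into dip direction and dip angle, assuming east/north/up axes as stated in the text. Function and variable names are illustrative only.

```python
import numpy as np

def fit_plane_and_orientation(points):
    """Least-squares fit z = a*x + b*y + c to the points of one structural plane,
    then derive dip direction and dip angle from the plane normal.
    points: (n, 3) array of (x, y, z) with x = east, y = north, z = up."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    X = np.column_stack([x, y, np.ones_like(x)])     # design matrix
    D, *_ = np.linalg.lstsq(X, z, rcond=None)        # minimises ||X D - z||
    a, b, _ = D
    n = np.array([-a, -b, 1.0])                      # upward-pointing normal (z > 0)
    n /= np.linalg.norm(n)
    # Dip angle: angle between the plane and the horizontal
    dip_angle = np.degrees(np.arccos(n[2]))
    # Dip direction: azimuth (clockwise from north) of the down-dip direction,
    # i.e. of the horizontal projection of the upward normal
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip_dir, dip_angle, n
```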
Grouping of Rock Mass Structure Planes Based on Occurrence Information. The K-means cluster analysis method is used to group the occurrence information, combined with the field investigation of geological information. It is found that the structural planes of the slope in the mining area can be divided into three groups: a group of gently inclined planes and two groups of steeply inclined joints. Intelligent identification mainly finds the two groups of steep joints. Finally, the cluster centers and average occurrences are calculated. The average occurrences of the two groups of rock mass structural planes are 261°∠75° and 307°∠77°, respectively. The results are shown in Figure 7.
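As an illustration of the grouping step, the sketch below clusters structural-plane orientations with K-means. To avoid the 0°/360° wrap-around of the dip direction, orientations are first converted to unit normal vectors before clustering; this conversion is an implementation choice assumed here, not a detail given in the text.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_orientations(dip_dirs_deg, dip_angles_deg, n_groups=3):
    """Cluster structural-plane orientations into joint sets.
    Orientations are mapped to unit normals so that the angular wrap-around
    of the dip direction is handled implicitly."""
    t = np.radians(dip_dirs_deg)    # dip direction (azimuth from north)
    d = np.radians(dip_angles_deg)  # dip angle
    normals = np.column_stack([np.sin(d) * np.sin(t),
                               np.sin(d) * np.cos(t),
                               np.cos(d)])
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(normals)
    # km.cluster_centers_ can be renormalized and converted back to
    # dip direction / dip angle to report the average occurrence of each set.
    return km.labels_, km.cluster_centers_
```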
In this study, the structural plane occurrence modeling method based on empirical probability distribution is adopted to truly reflect the actual structural plane occurrence distribution and then obtain the relative frequency of two groups of rock mass structural plane occurrence. The results are shown in Figure 8.
Calculation of Spacing between Rock Structural Planes.
In order to obtain the spacing between adjacent structural planes, it is calculated according to the method shown in Figure 9. The dotted line in the figure is the initial state of the same group of structural planes, the solid line is the ideal state of the structural planes converted according to the above method, and the structural planes are parallel to each other in the same group. The vertical distance calculation equation is used to calculate the distance between adjacent structural planes.
For two adjacent parallel planes l1: ax + by + cz + d1 = 0 and l2: ax + by + cz + d2 = 0, the vertical (perpendicular) distance is d_l1l2 = |d1 − d2| / √(a² + b² + c²), where d_l1l2 is the spacing between the two adjacent structural planes.
According to the equation, the distribution characteristics of the spacing information of the two groups of rock mass structural planes are calculated, as shown in Figure 10.
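A minimal sketch of the spacing computation for two parallel structural planes, using the perpendicular-distance formula above; the numeric values are illustrative only.

```python
import numpy as np

def plane_spacing(n, d1, d2):
    """Perpendicular spacing between two parallel planes n·p + d1 = 0 and
    n·p + d2 = 0, where n = (a, b, c) is their shared normal."""
    return abs(d1 - d2) / np.linalg.norm(n)

print(plane_spacing(np.array([0.3, -0.4, 0.87]), 12.0, 14.5))
```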
Calculation of Equivalent Trace Length of Rock Mass Structural Planes. Because the data obtained by the image segmentation method are point cloud data, a projection method is used to calculate the area of a structural plane: S = S_xoy / cos γ, where γ is the angle between the xoy plane and the structural plane, S is the area of the structural plane sought, and S_xoy is the area of the structural plane's nodes projected onto the xoy plane. The exposed area of the rock mass structural plane is thus calculated. For convenience, the structural plane can be replaced by an equivalent circle of equal area, whose radius is r = √(S/π). The equivalent trace length of the two groups of rock mass structural planes in this paper is characterized by this equivalent radius (equation (11)). The results are shown in Figure 11.
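The projection relation and the equivalent-circle radius combine into a few lines; the sketch below assumes the projected area S_xoy and the inclination γ are already known for a given structural plane.

```python
import numpy as np

def equivalent_radius(projected_area_xoy, gamma_deg):
    """True area of a structural plane from its xoy projection, then the
    radius of an equal-area circle used as the equivalent trace length."""
    S = projected_area_xoy / np.cos(np.radians(gamma_deg))  # S = S_xoy / cos(gamma)
    return np.sqrt(S / np.pi)                               # r = sqrt(S / pi)

print(equivalent_radius(2.0, 60.0))  # a 2 m^2 projection of a plane inclined at 60 degrees
```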
Statistics of Structural Plane Information of Rock Mass.
Based on the theory of mathematical statistics, according to the point cloud data of S8 rock slope, the probability distribution types and statistical parameters of geometric parameters such as the occurrence of rock mass structural plane calculated above are counted, as shown in Table 2.
Numerical Simulation Analysis
In this paper, RS2 elastic-plastic finite element software is used for numerical simulation analysis. An important function of RS2 is to calculate the safety factor of slope stability based on the finite element strength reduction method. By using the Hoek-Brown strength criterion, the system can automatically reduce the strength and obtain the safety factor of the slope. In this software, the constitutive model of rock mass includes the generalized Hoek-Brown model, Mohr-Coulomb model, and Cam-Clay model. At the same time, based on the statistical model, users can input relevant joint parameters according to the actual situation when building the slope model, and the system will automatically generate the joint fracture network.
Establishment of Model and Selection of Parameters.
In order to obtain the relevant mechanical parameters of the rock slope and establish a complete geomechanical model, samples were taken from the site and divided into 6 groups with 2 samples in each group. The samples of each group were subjected to 0, 20, 40, 60, 80, and 100 freeze-thaw cycles, respectively, and physical and mechanical tests were carried out. Finally, the parameters shown in Table 3 were obtained.
We intercepted a section line on the west slope of the mining area to automatically generate the cross-section of the west slope, imported it into the RS2 software, and generated the model boundary. The structural planes of the rock mass are modeled with the generalized Hoek-Brown criterion, and the joints with a constitutive model based on the Barton-Bandis criterion. According to the previous research, the slope is mainly affected by two sets of joint surfaces. The above physical and mechanical parameters were input into the RS2 software, and three-node triangular elements were used to generate the finite element mesh; the model contained 17207 nodes and 29427 elements. The actual boundary conditions on site were considered to obtain the final model, as shown in Figure 12. Finally, five monitoring points were arranged for subsequent research.
Analysis of Maximum Principal Stress and Maximum Shear Strain of Slope Model. According to the definition of principal stress, under the same external force the principal stress increases with buried depth. In order to study the variation of the maximum principal stress under different numbers of freeze-thaw cycles, the contour plot of the maximum principal stress is obtained as shown in Figure 13. It can be found from the figure that the maximum principal stress at the bottom of the slope model is greater than that in other areas, while that at the top is the smallest. In order to reflect the influence of different numbers of freeze-thaw cycles on slope failure and instability, the corresponding contour plot of the maximum shear strain is obtained by numerical simulation, as shown in Figure 14. It can be seen from the figure that the safety factor decreases with an increasing number of freeze-thaw cycles. At the same time, the maximum shear strain reflects the relative deformation at slope failure, and the figure also shows that this relative deformation gradually increases with the number of freeze-thaw cycles. Because this paper studies a rock slope, a complete sliding zone like that of a soil slope will not form; instead, local rock mass spalling and instability failure occur within the freeze-thaw shear zone of the slope. The results show that the number of freeze-thaw cycles has a great impact on the strength and stability of the slope rock mass: the more freeze-thaw cycles, the more serious the deterioration of the internal properties of the rock mass and the lower the stability of the slope.
Figure 15 shows the change in the total displacement contour plot of the slope rock mass under different numbers of freeze-thaw cycles. It can also be found from the figure that the safety factor decreases gradually with an increasing number of freeze-thaw cycles. At the same time, due to the self-weight of the overlying rock mass and mechanical excavation, the total displacement reaches its maximum in the first two steps. According to the numerical simulation results, the strength reduction factor decreases with an increasing number of freeze-thaw cycles. In order to further reflect the relationship between the strength reduction factor and the total displacement of the slope, the curves shown in Figure 16 are drawn. It can be clearly seen from the figure that, for a given number of freeze-thaw cycles, the maximum total displacement of the slope initially increases only slightly; then, as the strength reduction factor continues to increase, the maximum total displacement changes abruptly and increases significantly, showing that the slope is clearly damaged when the strength reduction factor reaches this value. As the number of freeze-thaw cycles increases, the strength reduction factor decreases gradually, and the decrease becomes larger and larger, showing that the freeze-thaw cycle greatly weakens the mechanical properties of the slope rock mass.
Analysis of Yield Elements and Yield Joints of Slope Model. The number of yield elements and yield joints and their distribution can well show the specific failure degree and failure area of the rock. According to the previous test results, the physical and mechanical properties of the slope rock mass deteriorate due to freeze-thaw damage, and the strength of the rock mass decreases. From the numerical simulation results, it can be found that with an increasing number of freeze-thaw cycles, the yield elements of the slope increase, and their distribution spreads from the weathered steps above to the steps below. The distribution of yield joints becomes denser and denser, and, like the yield elements, they are mainly distributed on the slope surface. In order to clearly reflect the changes in the numbers of yield elements and yield joints, curves of the number of yield elements and yield joints versus the number of freeze-thaw cycles are drawn, as shown in Figure 17. Both increase gradually with the number of freeze-thaw cycles, and the rate of increase first rises and then falls.
Analysis of Horizontal Displacement and Strength Reduction Steps of Nodes. Figure 12 shows the positions of the five selected nodes in the slope model. By analyzing the relationship between the horizontal displacement and the number of reduction steps of each node under different numbers of freeze-thaw cycles, the instability and failure of the slope can be further understood. It can be seen from Figure 18 that the horizontal displacement of a node changes only slightly at the beginning and then changes abruptly, indicating that the slope has undergone obvious instability and failure. From node 5 to node 1, the height of the node decreases, and the horizontal displacement of the node also decreases gradually. In addition, the total displacement change of each node and the number of stages corresponding to the abrupt change decrease gradually with an increasing number of freeze-thaw cycles. This result shows that failure occurs progressively earlier and the slope stability becomes worse.
Figure 16: Relationship between the strength reduction factor and the maximum total displacement of the slope under different numbers of freeze-thaw cycles.
Analysis of Safety Factor of Slope Model. According to the above analysis, when the number of freeze-thaw cycles is 0, 20, 40, 60, 80, and 100, the safety factors of the slope are 1.86, 1.79, 1.70, 1.57, 1.42, and 1.18, respectively; the safety factor becomes smaller and smaller. At the same time, the successive reductions in the safety factor are 3.76%, 5.03%, 7.65%, 9.55%, and 16.9%, respectively. The gradually increasing reduction further shows that the more freeze-thaw cycles, the worse the slope stability, a trend that can also be seen from the curve in Figure 19. This is because the pore water in the rock mass freezes and forms frost heaving force, and the volume expansion leads to further development of joint fissures. With an increasing number of freeze-thaw cycles, the strength of the rock slope decreases gradually. Freeze-thaw fatigue damage degrades the physical and mechanical properties of the rock, and the rock and joints become more prone to yield instability. Therefore, the freeze-thaw mechanism of a rock slope can be understood as a cumulative process of rock freeze-thaw damage.
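As a quick arithmetic check of the percentages quoted above, the step-by-step reductions can be recomputed from the listed safety factors (illustrative Python, values taken from the text):

```python
safety_factors = [1.86, 1.79, 1.70, 1.57, 1.42, 1.18]   # 0, 20, 40, 60, 80, 100 cycles
reductions = [100.0 * (a - b) / a for a, b in zip(safety_factors, safety_factors[1:])]
print([f"{r:.2f}%" for r in reductions])
# ['3.76%', '5.03%', '7.65%', '9.55%', '16.90%']
```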
Conclusions
In this paper, the influence of the freeze-thaw cycle on the slope stability of an open pit mine is studied with the help of 3D laser scanning technology and RS2 finite element software. The following three conclusions are obtained:
(1) For the point cloud data obtained by 3D laser scanning, the 3D difference method is first used for grid processing. After selecting the discrimination index, all nodes are scanned for flatness detection. After the data are simplified, the improved image segmentation algorithm is used to complete the regional division of the structural planes, and reasonable flatness detection, regional growth, and area thresholds are selected to complete the intelligent recognition of rock mass structural planes.
(2) The geometric parameters and other information of the rock mass discontinuities are extracted, the plane equations of the discontinuities are fitted by the least squares method, and the dip direction and dip angle of the discontinuities are calculated. The structural planes are divided into groups by the K-means cluster analysis method based on occurrence information, and the spacing and equivalent trace length of the structural planes are calculated. The calculation results are basically consistent with the actual investigation of the mining area.
(3) According to the mechanical parameters obtained from the freeze-thaw cyclic tests and the distribution of joint surfaces measured by 3D laser scanning, the slope mechanical model is established. The finite element strength reduction method is used to numerically simulate and analyze the slope structural planes, and the safety factor of the slope under different numbers of freeze-thaw cycles is calculated. The results show that with an increasing number of freeze-thaw cycles, the principal stress, volume strain, and displacement gradually increase, the numbers of yield elements and yield joints increase, the safety factor decreases continuously, and the stability of the slope becomes worse. This shows that the rocks in high-altitude mining areas are subject to freeze-thaw cycles all year round; freeze-thaw fatigue damage degrades the physical and mechanical properties of the rocks, and the rocks and joints are more prone to yield instability.
Data Availability
The experimental data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare no conflict of interest.
|
2022-08-03T15:22:15.740Z
|
2022-07-31T00:00:00.000
|
{
"year": 2022,
"sha1": "f63baef80f3ec63f95e28a2555208f9175039934",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/geofluids/2022/1705985.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cb36708fde71e21792bf981310fe7615698a6a1f",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": []
}
|
54170288
|
pes2o/s2orc
|
v3-fos-license
|
Sugammadex in the Management of Sinus Tachycardia after Rocuronium Administration : A Case Report
In rare cases, rocuronium has been associated with dose-related tachycardia, probably by a cardiac muscarinic M2 receptor blockade mechanism. We report the case of a 30-year-old female who underwent excision of a branchial cyst under general anesthesia. This patient presented an episode of sinus tachycardia (130 bpm) shortly after anesthesia induction with propofol, sufentanil, and rocuronium. The tachycardia could not be explained by any cause other than the use of rocuronium, which was reversed with sugammadex. Two minutes after sugammadex administration, the heart rate normalized, corroborating our hypothesis that rocuronium induced the sinus tachycardia observed in our patient. The patient recovered well from the anesthetic-surgical procedure and showed no further cardiovascular, ventilatory, or neurological changes, being transferred to the post-anesthesia care unit and finally discharged to the ward.
Introduction
The introduction of neuromuscular blocking agents (NBA) to anesthetic practice allowed great advances in airway management and helped to optimize the surgical field by inhibiting spontaneous ventilation and causing relaxation of skeletal muscles. Rocuronium is an aminosteroid non-depolarizing NBA with rapid onset and intermediate duration of action. Its use in anesthesia is increasing due to the possibility of its application in rapid sequence endotracheal intubation and due to the existence of an effective reversal agent. Despite being quite safe, in rare cases rocuronium has been associated with dose-related tachycardia, which may be related to interactions of this steroidal NBA with cardiac muscarinic M2 receptors [1] [2]. We report a case of sinus tachycardia following anesthesia induction which could not be explained by any cause other than the use of rocuronium. Interestingly, the heart rate normalized after administration of sugammadex, a modified gamma-cyclodextrin indicated to reverse neuromuscular blockade produced by rocuronium.
Case Report
A 30-year-old female (height 1.55 m, weight 58 kg, ASA I) was admitted to our hospital for excision of a branchial cyst under general anesthesia. During the preanesthetic visit the patient denied previous anesthesia, allergies, and regular use of medications. Her preoperative tests were within normal limits.
The patient was not premedicated, but on admission to the operating room she had no signs of anxiety. She was monitored with standard monitors (5-lead electrocardiogram, non-invasive blood pressure, pulse oximetry, and expiratory capnography; Infinity Vista XL monitor, Drager, Lubeck, Germany), bispectral index (BIS; A-2000 BIS Monitoring System, Aspect Medical Systems, Newton, USA), and a neuromuscular transmission monitor (TOF-Watch SX, Organon Ireland, Dublin, Ireland). Baseline heart rate, arterial blood pressure, and pulse oximetry were within normal limits. Two peripheral venous accesses (20 G) were established in the upper limbs, one exclusively for propofol infusion.
After preoxygenation with 100% oxygen, anesthesia was induced and maintained by target-controlled infusion of propofol PFS 1% (plasma concentration 3.0 mcg·mL−1; Diprifusor, AstraZeneca, Alderley Park, UK). After loss of consciousness, verified by no response to verbal stimulation and a BIS value of 32, sufentanil 35 mcg and rocuronium bromide 35 mg were administered as bolus injections.
Heart rate increased (130 bpm; sinus rhythm) shortly after rocuronium administration, while arterial blood pressure remained stable (mean arterial pressure around 60 mmHg). Considering the possibility of an awake patient, a bolus dose of 3 mg midazolam was injected, despite a BIS value of 28. As the patient maintained sinus tachycardia with a heart rate around 125 bpm, we proceeded to successful and uneventful orotracheal intubation and tried to rule out possible causes of the sinus tachycardia. After initiation of controlled mechanical ventilation (Fabius GS, Drager, Lubeck, Germany), pulmonary auscultation, end-tidal carbon dioxide (ETCO2) value, and capnography waveform were normal, and the patient had no cutaneous rash or any other sign or symptom of anaphylaxis. Additionally, the patient had BIS values within the range for general anesthesia (<45 throughout the episode) and no signs or symptoms of awareness. At that moment, besides an elevated heart rate, only the train-of-four (TOF) monitoring was presenting an inappropriate value: we observed an abnormal TOF response after rocuronium administration, with maintenance of one or two counts in response to stimulation.
After ruling out anaphylactic reaction, pain, awareness, and other common causes of tachycardia in this scenario, we thought that an atypical and rare response to rocuronium could be related to the sinus tachycardia, and proceeded to reversal of neuromuscular blockade with 200 mg of sugammadex. About two minutes after the sugammadex bolus, we observed a decrease in heart rate from 120 to 76 bpm (sinus rhythm) and a TOF value of 108%.
As the patient had neither arterial blood pressure instability nor ventilatory changes during the tachycardia episode, we chose to continue with the anesthesia. After injection of a bolus dose of 4 mg of cisatracurium, TOF monitoring presented a fatigue pattern followed by no response to stimulation. The surgery was performed without any further complications. At the end of the procedure, the patient was extubated and transferred to the post-anesthesia care unit (PACU), where she maintained stability of both cardiovascular and respiratory functions and had no signs of cognitive dysfunction. After one hour in the PACU, she was discharged to the ward with an Aldrete score of 10.
Discussion
In the past, important hemodynamic changes were commonly associated with D-tubocurarine and gallamine, some of the earliest NBAs used in clinical practice. The synthesis of the contemporary NBAs, such as the aminosteroids pancuronium, vecuronium, and rocuronium, and the benzylisoquinolines atracurium and cisatracurium, decreased the incidence of adverse cardiovascular events but did not eliminate them. Anaphylactic reactions and non-immune histamine release are commonly involved in hemodynamic changes. Additionally, interactions with receptors other than the nicotinic receptor, such as muscarinic receptors (especially the cardiac M2 subtype), may cause undesirable cardiovascular changes. NBAs have different degrees of affinity for muscarinic receptors. Gallamine is among those with the highest affinity, having tachycardia due to cardiac muscarinic receptor blockade as one of its known side effects [3]. A similar response is observed with the use of pancuronium. Conversely, Appadu et al. [2] have demonstrated that rocuronium has the lowest affinity for cardiac muscarinic receptors among the aminosteroids. Thus, cases of tachycardia due to cardiac muscarinic receptor blockade are rare with rocuronium and usually associated with higher doses (a dose-related effect) [1]. However, with the increasing use of this NBA in clinical practice, rare events will be increasingly observed. Indeed, in a recent study by Sorensen et al., 10% of patients in the rocuronium group (1.0 mg·kg−1) showed tachycardia (heart rate >100 bpm) after anesthesia induction [4].
In our case, tachycardia could not be explained by any cause other than the use of rocuronium. It has already been demonstrated that rocuronium does not induce histamine release [5], eliminating non-immune histamine release as an explanation for the tachycardia episode. A high incidence of rocuronium-induced anaphylactic reactions has been reported by French and Norwegian studies [6]. However, that was not corroborated by further studies from other parts of the world. Nowadays, it is accepted that the incidence of anaphylactic reactions associated with rocuronium is similar to that of other NBAs [6]. Our patient had no cutaneous, respiratory, or arterial blood pressure changes throughout the anesthetic and surgical procedures, or later in the PACU. Thus, the patient presented only one of the diagnostic criteria for anaphylactic reaction (tachycardia) [7], which makes a diagnosis of anaphylactic reaction improbable. It is important to note that we did not proceed to further laboratory investigations, such as quantification of tryptase levels or cutaneous provocation tests, because these tests frequently give false results in situations like ours [7]. Neuromuscular blocking in an awake patient is very uncomfortable and provokes autonomic alterations, notably tachycardia. This possibility was ruled out after the administration of a bolus dose of midazolam (with no change in heart rate). Additionally, the patient had BIS values within the range for general anesthesia throughout the tachycardia episode and no signs or symptoms of awareness.
Based on recent papers that showed successful resolution of rocuronium-induced anaphylactic reactions with sugammadex administration [8] [9], and considering that this drug encapsulates rocuronium molecules and prevents their action on nicotinic receptors, we administered sugammadex expecting reversal of the suspected muscarinic receptor blockade. Interestingly, the tachycardia resolved within two minutes of sugammadex administration, which is the same time necessary for reversal of neuromuscular blockade, supporting the hypothesis of rocuronium-induced tachycardia. Thus, after ruling out common differential diagnoses and considering the close temporal correlation between the onset of the tachycardia episode and the rocuronium injection, as well as the immediate decrease in heart rate after sugammadex administration, we hypothesized that rocuronium induced the sinus tachycardia observed in our patient, probably by a cardiac muscarinic M2 receptor blockade mechanism. Of note, rocuronium was not used in high doses, which makes our case even more uncommon.
|
2018-11-24T04:52:07.292Z
|
2014-09-12T00:00:00.000
|
{
"year": 2014,
"sha1": "04bdfe9ac5ebd843e05588dc6cb6a2366f3867b2",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=49662",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "04bdfe9ac5ebd843e05588dc6cb6a2366f3867b2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
5715768
|
pes2o/s2orc
|
v3-fos-license
|
Fuzzy Rules and Evidence Theory for Satellite Image Analysis
Design of a fuzzy rule based classifier is proposed. The performance of the classifier for multispectral satellite image classification is improved using the Dempster-Shafer theory of evidence, which exploits information from the neighboring pixels. The classifiers are tested rigorously on two known images, and their performance is found to be better than the results available in the literature. We also demonstrate the improvement in performance obtained by using D-S theory along with the fuzzy rule based classifiers over the basic fuzzy rule based classifiers for all the test cases.
Introduction
Analysis of satellite images has many important applications such as prediction of storm and rainfall, estimation of natural resources, estimation of crop yields, assessment of damage caused by natural disasters, and land cover classification. In this paper we focus on land cover classification from multi-spectral satellite images.
The most widely used techniques for this problem employ discriminant analysis, maximum likelihood classification, and neural networks [6], [3]. Such classifiers cannot handle the fact that for land cover a pixel may correspond to more than one type of object. For example, the area covered by a pixel may correspond to 30% land and 70% water. Note that the uncertainty involved in classifying such a pixel is not probabilistic but fuzzy in nature, and thereby it demands "soft" classifiers. In developing soft classifiers for land cover analysis, two approaches have gained popularity. These are based on (1) fuzzy set theory and (2) Dempster and Shafer's (DS) evidence theory [7].
Numerous fuzzy classification techniques have been developed by many researchers to solve problems in diverse fields. A comprehensive account of such works can be found in [2]. Fuzzy rules are attractive because they are interpretable and provide an analyst a deeper insight into the problem. The use of fuzzy rule based systems for land cover analysis is relatively new. In a recent paper, Bárdossy and Samaniego [1] have proposed a scheme for developing a fuzzy rule-based classifier for the analysis of multispectral images.
The other approach for designing soft classifiers is to use the evidence theory developed by Dempster and Shafer [7]. Since the theory of evidence allows one to combine evidence obtained from diverse sources of information in support of a hypothesis, it seems a natural candidate for analyzing multispectral images for land cover classification.
Here we propose a scheme for designing fuzzy rule-based classifiers for land cover types that uses evidence theory for decision making. This is a two stage process. First we find a good set of fuzzy rules using information from all channels. In the next stage, the responses of the fuzzy rules over a 3 × 3 neighborhood are used to define eight basic probability assignments, which are then combined by the DS rule to exploit contextual information and make a better decision. The problem of high variation in the variances of different features, which often degrades the performance of a distance based classifier substantially, is handled in a natural manner by fuzzy rules due to the atomic nature of the antecedent clauses.
Designing the Fuzzy Rule base
The proposed scheme has several stages. First a set of labeled prototypes is generated. Then the prototypes are converted into fuzzy rules, which are further tuned to improve their performance. Labeled prototypes can be generated using any clustering algorithm followed by labeling of the cluster centers. However, for most such algorithms the number of clusters is a predefined parameter. Here we use the prototype generation scheme described in [5]. It is a two stage algorithm involving unsupervised and supervised learning that dynamically decides the number of prototypes and extracts them from the training data. For details the readers are referred to [5].
Designing the fuzzy rulebase
A prototype v_i (representing a cluster of points) for class k can be translated into a fuzzy rule of the form: IF x_1 is CLOSE TO v_i1 AND ... AND x_p is CLOSE TO v_ip THEN the class is k. The fuzzy set CLOSE TO v_ij is modeled by a Gaussian membership function μ_ij(x_j) = exp(−(x_j − v_ij)² / (2σ_ij²)). Given a data point x with unknown class, we first find the firing strength of each rule. Let α_i(x) denote the firing strength of the i-th rule on a data point x. We assign the point x to class k if α_r = max_i(α_i(x)) and the r-th rule represents class k. Each fuzzy set is characterized by two parameters, v_ij and σ_ij. The v_ij of the rules can be initialized with the components of the final set of prototypes, V_final, generated by our SOFM based algorithm; the notation V_0 is used to indicate that it corresponds to the initial centers of the membership functions. The initial estimates of the σ_ij are computed as follows: for each prototype, the spread of the training data assigned to it along each feature is computed and associated with the prototype. We use k_w σ_ij as the spread of the membership function whose center is at v_ij; k_w > 0 is a constant parameter, and its value can have a significant impact on the classification performance for complex data sets.
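A minimal sketch of rule evaluation as described above, assuming Gaussian membership functions and, for simplicity at this point, the min T-norm as the conjunction operator (the soft-min variant is introduced in the next subsection). The array names V, S, and rule_classes are illustrative only.

```python
import numpy as np

def firing_strengths(x, V, S):
    """Firing strength of every fuzzy rule for a sample x.
    V[i, j], S[i, j]: centre and spread of the Gaussian fuzzy set CLOSE TO v_ij
    of rule i; the min T-norm is used as the conjunction operator here."""
    x = np.asarray(x, dtype=float)
    mu = np.exp(-((x[None, :] - V) ** 2) / (2.0 * S ** 2))  # (n_rules, n_features)
    return mu.min(axis=1)

def classify(x, V, S, rule_classes):
    """Assign x to the class of the rule that fires most strongly."""
    alpha = firing_strengths(x, V, S)
    return rule_classes[int(np.argmax(alpha))]
```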
Tuning the rulebase
The initial rulebase R 0 thus obtained is further refined to achieve better performance. The exact tuning algorithm depends on the conjunction operator used for computation of the firing strengths. The firing strength can be calculated using any T-norm [2]. Use of different T-norms results in different classifiers. The minimum and the product are among the most popular T-norms used as conjunction operators. It is much easier to formulate a calculus based tuning algorithm if product is used. However, if there are many clauses in the antecedent, the firing strength of a rule tends to have low numerical values even when the membership value of each individual clause is quite high. Though computationally this does not pose any problem (we are interested in relative firing strengths of the rules), it is conceptually somewhat unattractive -especially from the interpretability viewpoint. Thus to avoid the use of the product and at the same time to be able to derive update rules easily we use a soft-min operator.
The soft-match of n positive numbers x_1, x_2, ..., x_n is defined by SM(x_1, x_2, ..., x_n, q) = ((x_1^q + x_2^q + ... + x_n^q) / n)^(1/q), where q is any real number.
SM is an aggregation operator bounded above by 1 when all x_i lie in (0, 1]. It is easy to see that lim_{q→∞} SM(x_1, x_2, ..., x_n, q) = max(x_1, x_2, ..., x_n) and lim_{q→−∞} SM(x_1, x_2, ..., x_n, q) = min(x_1, x_2, ..., x_n). Thus we define the soft-min operator as the soft-match operator with a sufficiently negative value of the parameter q. The firing strength of the r-th rule is computed using the soft-min of its antecedent clause memberships, α_r(x) = SM(μ_r1(x_1), ..., μ_rp(x_p), q). In the present study we use q = −10.0. Let x ∈ X be from class c and R_c be the rule from class c giving the maximum firing strength α_c for x. Also let R_¬c be the rule from the incorrect classes having the highest firing strength α_¬c for x.
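A small sketch of the soft-min aggregation defined above; the inputs are assumed to be strictly positive membership degrees, and q = −10 as in the study.

```python
import numpy as np

def soft_min(values, q=-10.0):
    """Soft-match SM(x_1, ..., x_n, q) = ((x_1^q + ... + x_n^q) / n)^(1/q);
    for strongly negative q it approaches min(values)."""
    v = np.asarray(values, dtype=float)   # memberships assumed strictly positive
    return float(np.mean(v ** q) ** (1.0 / q))

print(soft_min([0.9, 0.7, 0.95]))   # close to, and slightly above, 0.7
```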
We use an error function E and minimize it with respect to v_cj, v_¬cj and σ_cj, σ_¬cj of the two rules R_c and R_¬c using gradient descent. Here the index j corresponds to the clause number in the corresponding rule. Minimizing E refines the rules with respect to their contexts in the feature space. Note that the context referred to here is different from the context of a pixel defined in terms of its spatial neighborhood. The tuning process is repeated until the rate of decrease of E becomes negligible, resulting in the final rulebase R_final.
Using the theory of evidence for Rule aggregation
For the sake of completeness, we briefly introduce the Dempster-Shafer theory of evidence. Let Θ be the universal set and P (Θ) be its power set.
A belief measure is a function Bel : P(Θ) → [0, 1] that satisfies the axioms [7]: Bel(∅) = 0, Bel(Θ) = 1, and Bel(A_1 ∪ ... ∪ A_n) ≥ Σ_i Bel(A_i) − Σ_{i<j} Bel(A_i ∩ A_j) + ... + (−1)^{n+1} Bel(A_1 ∩ ... ∩ A_n), for every n and for every collection of subsets A_1, ..., A_n of Θ.
Associated with each belief measure there is a plausibility measure defined by Pl(A) = 1 − Bel(A^c) for all A ∈ P(Θ).
Every belief measure and its dual plausibility measure can be expressed in terms of a basic probability assignment (BPA) function m. A function m : P(Θ) → [0, 1] is called a BPA iff m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1. A belief measure and a plausibility measure are uniquely determined by m through the formulas Bel(A) = Σ_{B⊆A} m(B) and Pl(A) = Σ_{B∩A≠∅} m(B). Every set A ∈ P(Θ) for which m(A) > 0 is called a focal element of m. Evidence obtained in the same context from two distinct sources and expressed by two BPAs m_1 and m_2 on the same power set P(Θ) can be combined by Dempster's rule of combination to obtain a joint BPA m_{1,2}: for A ≠ ∅, m_{1,2}(A) = (Σ_{B∩C=A} m_1(B) m_2(C)) / (1 − K), with m_{1,2}(∅) = 0, where K = Σ_{B∩C=∅} m_1(B) m_2(C) measures the conflict between the two sources. This combination is often written m_{1,2} = m_1 ⊕ m_2. The rule is commutative and associative, so evidence from any number (say k) of distinct sources can be combined by repeated application of the rule as m = m_1 ⊕ m_2 ⊕ ... ⊕ m_k.
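A compact implementation sketch of Dempster's rule for BPAs represented as dictionaries mapping focal elements (frozensets of class labels) to masses; this representation is an assumption made for illustration, not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two BPAs given as dicts {frozenset(focal element): mass}."""
    combined, conflict = {}, 0.0
    for (B, mB), (C, mC) in product(m1.items(), m2.items()):
        A = B & C
        if A:
            combined[A] = combined.get(A, 0.0) + mB * mC
        else:
            conflict += mB * mC            # mass that would fall on the empty set
    norm = 1.0 - conflict
    return {A: v / norm for A, v in combined.items()}

# Example: two sources of evidence over classes C1 and C2
m1 = {frozenset({"C1"}): 0.6, frozenset({"C1", "C2"}): 0.4}
m2 = {frozenset({"C2"}): 0.3, frozenset({"C1", "C2"}): 0.7}
print(dempster_combine(m1, m2))
```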
Pignistic probability
Given a belief measure, we are often required to make decisions based on the available evidence. In such cases Θ becomes the set of decision alternatives, and the function Bel denotes our belief about the choice of the optimal decision θ_0 ∈ Θ. However, in general it is not possible to select the optimal decision directly from the evidence embodied in the function Bel. In such cases, we use the pignistic transformation, Γ_Θ, to construct a probability function for selecting the optimal decision [8]. Thus P_Θ = Γ_Θ(Bel).
P_Θ is called a pignistic probability, which can be used for decision making. The pignistic probability for θ ∈ Θ can be expressed in terms of BPAs as P_Θ(θ) = Σ_{A⊆Θ, θ∈A} m(A)/|A|, where |A| is the number of elements of A. The optimal decision can now be chosen in favor of θ_0 if θ_0 has the highest pignistic probability.
Scheme for decision making
In our problem the frame of discernment is the set of classes, C = {C_1, C_2, ..., C_c}, where c is the number of classes. The propositions take the form "the true class label of the pixel of interest is in A ⊂ C".
Let us denote the pixel of interest as p_0 and its eight spatial neighbors as p_1, p_2, ..., p_8. We use the firing strengths produced by the rulebase in support of different classes for p_0 and one of its neighbors, say p_i, as the i-th source of evidence. Let r be the number of rules in the fuzzy rulebase. Since c ≤ r, there can be multiple rules corresponding to a class. Let α_k^0 be the highest firing strength produced by the rules corresponding to class C_k for p_0. We treat this value as the confidence measure of the rulebase pertaining to the membership of p_0 in class C_k. Thus the set of values CM_0 = {α_k^0 : k = 1, 2, ..., c} contains the confidence measures for all the classes for p_0 (if a confidence measure is less than a threshold, say 0.01, it is set to 0). A similar set of confidence measures CM_i can be constructed for every p_i, i = 1, ..., 8. Now we use CM_0 and CM_i to define the i-th BPA m_i over the subsets of C. There are 2^c possible subsets of C, i.e., members of the power set of C. Each subset corresponds to the proposition that the "true" class of p_0 is contained in that subset. We consider only the subsets containing one or two elements. The subsets containing one element correspond to propositions of the form "the class contained in the subset is the true class for p_0", and the subsets containing two elements correspond to propositions of the form "the true class label of p_0 is one of the two classes contained in the subset". Assigning a BPA to a subset essentially commits some portion of belief in favor of the proposition represented by the subset, so the scheme followed for assigning BPAs must reflect a realistic assessment of the information available in favor of the proposition. The numerators on the right-hand side of the defining formulae for m_i are each a product of two terms: the first term is the average of the confidence measures of p_0 and p_i for the class C_k, while the second is an exponential term that reflects the degree of closeness of the two confidence measures. Thus a high value of the numerator reflects two facts: (1) both p_0 and p_i have a high confidence value for class C_k, and (2) the confidence values are close to each other. The definition for a pair of classes is a straightforward extension of the same concept.
Thus, for the eight neighboring pixels we obtain eight combinable sources of evidence, and the combined global BPA is computed by applying Dempster's rule repeatedly: m_G = m_1 ⊕ m_2 ⊕ ... ⊕ m_8. Once m_G is obtained, the pignistic probability of each class C_k is computed as P(C_k) = Σ_{A: C_k∈A} m_G(A)/|A|, and the pixel p_0 is assigned to the class C_k with the highest pignistic probability.
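A sketch of the full decision scheme is given below. The construction of the neighbour BPAs follows the verbal description above (an average confidence weighted by a closeness term), but the exact closeness function exp(−|a − b|) and the normalization to unit total mass are assumptions, since equations (5) and (6) are not reproduced in the text; dempster_combine refers to the earlier sketch.

```python
import numpy as np
from itertools import combinations
# dempster_combine as defined in the earlier sketch

def bpa_from_neighbor(cm0, cmi, classes):
    """BPA built from the confidence measures of the centre pixel (cm0) and one
    neighbour (cmi), both dicts {class: confidence}. The closeness term
    exp(-|a - b|) and the final normalization are assumptions."""
    scores = {}
    for k in classes:                                   # single-class propositions
        a, b = cm0[k], cmi[k]
        scores[frozenset([k])] = 0.5 * (a + b) * np.exp(-abs(a - b))
    for k, l in combinations(classes, 2):               # two-class propositions
        a = 0.5 * (cm0[k] + cmi[k])
        b = 0.5 * (cm0[l] + cmi[l])
        scores[frozenset([k, l])] = 0.5 * (a + b) * np.exp(-abs(a - b))
    total = sum(scores.values())
    return {A: v / total for A, v in scores.items() if v > 0.0}

def classify_pixel(cm0, neighbour_cms, classes):
    """Combine the eight neighbour BPAs with Dempster's rule and pick the class
    with the largest pignistic probability P(C_k) = sum_{A: C_k in A} m_G(A)/|A|."""
    m_G = None
    for cmi in neighbour_cms:
        m_i = bpa_from_neighbor(cm0, cmi, classes)
        m_G = m_i if m_G is None else dempster_combine(m_G, m_i)
    betp = {k: sum(v / len(A) for A, v in m_G.items() if k in A) for k in classes}
    return max(betp, key=betp.get)
```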
Experimental results and discussions
We report the performances of the proposed classifiers for two multispectral satellite images. We call them Satimage1 and Satimage2. The Satimage1 is a 256-level Landsat-TM image of size 512 × 512 pixels captured by seven sensors operating in different spectral bands. Each sensor generates an image with pixel values varying from 0 to 255. The 512 × 512 ground truth data provide the actual distribution of classes of objects captured in the image. From this data we produce the labeled data set with each pixel represented by a 7-dimensional feature vector and a class label. Satimage2 also is a seven channel 256-level Landsat-TM image of size 512 × 512. However due to some characteristic of the hardware used in capturing the images the first row and the last column of the images contain gray value 0. So we did not include those pixels in our study and effectively worked with 511 × 511 images. The ground truth containing four classes is used for labeling the data.
In our study we generated 4 training sets of samples for each of the images. For Satimage1, each training set contains 200 data points randomly chosen from each of the eight classes. This choice is made to conform to the protocol followed in [4]. For Satimage2 we include in each training set 800 randomly chosen data points from each of the four classes. Bischof et al. [3] used more training points per class than we did.
First we report the performances of the fuzzy rule-based classifiers using firing strengths directly for decision making and compare the results with the published results. Then we report the performances of the fuzzy classifiers using the evidence theoretic approach for decision making. The performances of the fuzzy rule-based classifiers using firing strengths directly for decision making are summarized in Table 1.
For Satimage1 the best result reported in [4] uses a fuzzy integral based method and gives the classification rate 78.15%. In our case, even the worst result is about 5% better than that.
For Satimage2 the result reported in [3] shows 84.7% accuracy with the maximum likelihood classifier (MLC) and 85.9% accuracy with a neural network based classifier. In our case, for all training-test partitions the fuzzy rule-based classifiers outperform the MLC. Table 2 summarizes the performances of the fuzzy rule-based classifiers using the evidence theoretic approach. We used the same set of fuzzy rules as before, but the rule outputs are aggregated using evidence theory.
Comparison of Table 2 with Table 1 clearly shows that in every case there is a consistent improvement in the classification performance. In the case of Satimage1 the improvements varied between 1.1% and 1.5%, and the best performing classifier (for training set 4) achieves an error rate as low as 11.03%. For Satimage2 the improvement varied between 1.4% and 1.7%. So the overall improvement for Satimage1 over the existing methods is more than 7%. For Satimage2 also we achieved consistent improvements using training sets of smaller size. For applications like crop yield estimation even a small improvement will have a significant impact on the overall estimate.
Conclusion
We proposed two classifiers: one is fuzzy rule based and the other integrates the outputs of fuzzy rules using the theory of evidence. Fuzzy rules are extracted with the help of the SOFM. The system automatically decides on the number of rules.
The fuzzy rule-based classifier is of a general nature and can be applied to any classification problem, while the evidence theoretic classifier exploits the spatial information available in an image to make the classification decision.
In the evidence theoretic framework we use the pixel under consideration and one of its neighbors to provide a body of evidence in support of different propositions regarding the class membership (to a particular class as well as to a pair of classes) of the pixel. The BPAs for the propositions are calculated from the mutual confidences of the pixels in support of the respective propositions. Eight bodies of evidence are obtained for the eight neighbors of the pixel. These are then combined to obtain a global body of evidence. The pignistic probability for each class is then computed and the pixel is assigned to the class with the highest pignistic probability. The proposed system demonstrates a consistent improvement in performance.
Effects of Stoichiometry on Structural, Morphological and Nanomechanical Properties of Bi2Se3 Thin Films Deposited on InP(111) Substrates by Pulsed Laser Deposition
In the present study, the structural, morphological, compositional, nanomechanical, and surface wetting properties of Bi2Se3 thin films prepared using a stoichiometric Bi2Se3 target and a Se-rich Bi2Se5 target are investigated. The Bi2Se3 films were grown on InP(111) substrates by using pulsed laser deposition. X-ray diffraction results revealed that all the as-grown thin films exhibited a highly c-axis-oriented Bi2Se3 phase with a slight shift in diffraction angles, presumably due to slight stoichiometry changes. The energy dispersive X-ray spectroscopy analyses indicated that the Se-rich target gives rise to nearly stoichiometric Bi2Se3 films, while the stoichiometric target only resulted in Se-deficient and Bi-rich films. Atomic force microscopy images showed that the films' surfaces mainly consist of triangular pyramids with step-and-terrace structures, with the average roughness Ra being ~2.41 nm and ~1.65 nm for films grown with the Bi2Se3 and Bi2Se5 targets, respectively. The hardness (Young's modulus) of the Bi2Se3 thin films grown from the Bi2Se3 and Bi2Se5 targets were 5.4 GPa (110.2 GPa) and 10.3 GPa (186.5 GPa), respectively. The contact angle measurements of water droplets gave the result that the contact angle (surface energy) of the Bi2Se3 films obtained from the Bi2Se3 and Bi2Se5 targets were 80° (21.4 mJ/m²) and 110° (11.9 mJ/m²), respectively.
Nanoindentation is a versatile technique ubiquitously used to obtain the basic mechanical parameters, such as the hardness and elastic modulus, as well as to delineate the deformation mechanisms, creep, and fracture behaviors of various nanostructured materials [15][16][17][18] and thin films [19][20][21][22][23] with very high sensitivity and excellent resolution. On the other hand, wettability is an important property of a solid surface, which is intimately related to the chemical composition and morphology of the surface [24]. The peculiar wetting behaviors exhibited on the surface of two-dimensional and van der Waals layered materials have been receiving dramatically increased interest in recent years [25][26][27]. This implies that specific water-substrate interaction features are relevant to the atomic and electronic structures of the layered materials. In particular, hydrophobic surfaces (water contact angle θCA > 90°) can be used in many applications of self-cleaning surfaces and antifogging [28,29]. Consequently, how to control the hydrophobic or hydrophilic behavior of films' surfaces is also of great importance in realizing the designed functionality for device applications.
Because of the high volatility of selenium (Se), Bi 2 Se 3 tends to form Se vacancies or antisites that serve as donors to result in a sufficiently high carrier concentration and low carrier mobility [30,31]. When severe loss of Se-atoms occurs during the thin-film growth at elevated substrate temperatures, pure phase Bi 2 Se 3 film is usually not achieved, and the obtained films may present impurity phases or even turn into another phase [32]. Thus, to overcome this problem and obtain high-quality stoichiometric Bi 2 Se 3 thin-films, a Se-rich environment is necessary during films' growth. Indeed, this strategy has been employed to grow high-quality Bi 2 Se 3 thin films by creating a Se-rich environment with a Se:Bi flux ratio ranging from 10:1 to 20:1 using molecular beam epitaxy (MBE) [33,34]. Pulsed laser deposition (PLD) offers a high instantaneous deposition rate, relatively high reproducibility, and low costs. The PLD has been used for growing epitaxial and polycrystalline Bi 2 Se 3 thin films [9,30,[35][36][37]. In 2011, Onose et al. [35] successfully grew epitaxial Bi 2 Se 3 thin-films on InP(111) substrates using a designed target with an atomic ratio of Bi:Se of 2:8. Yet, systematic investigations on the effects of target composition, and hence the resultant films' stoichiometry, on the properties of Bi 2 Se 3 thin films have been relatively scarce.
Herein, we conducted comprehensive characterizations of the structural, compositional, morphological, nanomechanical, and wetting properties of Bi 2 Se 3 thin films grown on InP(111) substrates by PLD. In particular, two different targets (i.e., a stoichiometric target of Bi 2 Se 3 and a Se-rich target of Bi 2 Se 5 ) were deliberately used to tune the stoichiometry of the resultant Bi 2 Se 3 films and to unveil its effects on the surface wettability and nanomechanical properties, since both characteristics are of pivotal importance for their practical applications in Bi 2 Se 3 thin film-based microelectronic and spintronic devices.
Materials and Methods
In order to study the effects of film stoichiometry, two targets with different compositions were used: one is stoichiometric Bi2Se3 and the other is a Se-rich target with a nominal composition of Bi2Se5. The targets were purchased from Ultimate Materials Technology Co., Ltd. (Ping-Tung City, Taiwan). Noticeably, although differing in Se/Bi atomic ratio (3/2 versus 5/2), both the Bi2Se3 and Bi2Se5 targets were polycrystalline with the right Bi2Se3 phase. Bi2Se3 thin films were deposited on InP(111) substrates using PLD at a substrate temperature of 350 °C in vacuum at a base pressure of 4 × 10⁻⁶ Torr (~0.53 mPa). For the PLD process, ultraviolet (UV) pulses (20-ns duration) from a KrF excimer laser (λ = 248 nm, repetition rate: 1 Hz) were focused on the polycrystalline Bi2Se3 or Bi2Se5 target at a fluence of 5.5 J/cm². The target-to-substrate distance was 40 mm. The target was ablated for approximately 5 min in order to clean its surface before every deposition. The deposition time was 25 min, which resulted in an average Bi2Se3 film thickness of approximately 191 nm (a growth rate of approximately 1.27 Å/pulse).
The crystal structure and surface morphology of the Bi2Se3 thin films were characterized by X-ray diffraction (XRD; Bruker D8, CuKα radiation, λ = 1.5406 Å, Bruker, Billerica, MA, USA) and field emission scanning electron microscopy (SEM, JEOL JSM-6500, JEOL, Pleasanton, CA, USA) operated at an accelerating voltage of 15 kV, respectively. Film compositions were analyzed through Oxford energy-dispersive X-ray spectroscopy (EDS, Inca X-sight 7558, Oxford Instruments plc., Oxfordshire, UK) equipped with the SEM instrument at an accelerating voltage of 15 kV, a dead time of 22-30%, and a collection time of 60 s. The atomic percentage of each film was determined by averaging the values measured in 5 or more distinct 14 × 20 µm² areas on the surface of the films. Moreover, the surface morphology and roughness of the thin films were examined using atomic force microscopy (AFM; Veeco Escope, Veeco, New York, USA).
The nanoindentation was performed on a Nanoindenter MTS NanoXP ® system (MTS Cooperation, Nano Instruments Innovation Center, Oak Ridge, TN, USA) with a pyramid-shaped Berkovich diamond tip. The nanomechanical properties of the Bi 2 Se 3 thin films were measured by nanoindentation with a continuous contact stiffness mode (CSM) [38]. At least 20 indentations were performed on each sample and the distance between the adjacent indents was kept at least 10 µm apart to avoid mutual interferences. We also followed the analytic method proposed by Oliver and Pharr [39] to determine the hardness and Young's modulus of measured materials from the load-displacement results. Thus, the hardness (H) and Young's modulus (E) of the Bi 2 Se 3 thin films are obtained and the results are listed in Table 1. Moreover, the surface wettability of the Bi 2 Se 3 thin films under ambient conditions was monitored using a Ramehart Model 200 contact angle goniometer (Ramé-hart, Succasunna, NJ, USA) with deionized water as the liquid.
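For orientation only (this is not the authors' analysis code), the Oliver-Pharr extraction of hardness and modulus from an unloading curve can be sketched as below. The ideal Berkovich area function, the diamond-indenter constants, and the numerical inputs are common textbook values and placeholders; only the film Poisson's ratio of 0.25 is taken from the text.

```python
import math

def oliver_pharr(p_max, h_max, stiffness, nu_film=0.25,
                 e_indenter=1141e9, nu_indenter=0.07):
    """Hardness and Young's modulus from peak load (N), peak depth (m) and
    unloading stiffness S = dP/dh (N/m), following the Oliver-Pharr analysis."""
    eps, beta = 0.75, 1.034                      # Berkovich geometry constants
    h_c = h_max - eps * p_max / stiffness        # contact depth
    area = 24.56 * h_c**2                        # ideal Berkovich area function
    hardness = p_max / area
    e_reduced = math.sqrt(math.pi) * stiffness / (2.0 * beta * math.sqrt(area))
    # 1/E_r = (1 - nu^2)/E_film + (1 - nu_i^2)/E_i  ->  solve for E_film
    e_film = (1.0 - nu_film**2) / (1.0 / e_reduced - (1.0 - nu_indenter**2) / e_indenter)
    return hardness, e_film

# Placeholder data point, e.g. one sample from a CSM record (illustrative only)
H, E = oliver_pharr(p_max=0.6e-3, h_max=60e-9, stiffness=4.0e4)
print(f"H = {H/1e9:.1f} GPa, E = {E/1e9:.0f} GPa")   # roughly 10 GPa and 150 GPa here
```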
Structural and Morphological Properties
Bi2Se3 has a rhombohedral structure with the space group D5 3d (R-3m) that can be described by a hexagonal primitive cell containing three five-atomic-layer-thick lamellae of -(Se(1)-Bi-Se(2)-Bi-Se(1))-, in which the atomic layers are stacked in sequence along the c-axis [9]. The XRD patterns of the Bi2Se3 thin films obtained from the Bi2Se3 and Bi2Se5 targets are shown in Figure 1. As is evident from Figure 1, besides the diffraction peaks of the InP substrate at 26.3° and 54.1° (JCPDS PDF#00-032-0452), the films exhibited a highly c-axis-preferred orientation with the (006), (0015), and (0021) diffraction peaks of the Bi2Se3 phase (JCPDS PDF#33-0214). However, minor diffraction peaks belonging to the BiSe phase (PDF#29-0246) can be identified. It is noticed that, although both of the as-grown films exhibit a highly c-axis-preferred orientation of the Bi2Se3 phase, a slight relative shift in diffraction angles, indicative of a modification of the c-axis parameter, is observed. Indeed, by using the dominant Bi2Se3 (006) and Bi2Se3 (0015) peaks and the hexagonal unit cell relationship [32], the average c-axis lattice constants of the Bi2Se3 thin films prepared using the Bi2Se3 and Bi2Se5 targets were determined to be 28.39 Å and 28.25 Å, respectively, which are slightly smaller than the c-axis lattice constant of 28.63 Å from the database for Bi2Se3 powder (JCPDS PDF#33-0214). This could be due to the difference in the internal stress built up during the deposition.
The grain sizes (D) of the Bi2Se3 films were estimated using the Scherrer equation D = 0.9λ/(βcosθ), where λ, β, and θ are the X-ray wavelength, the full width at half maximum of the Bi2Se3 (006)-oriented peak, and the Bragg diffraction angle, respectively. The estimated D values of the Bi2Se3 thin films prepared using the Bi2Se3 target and the Bi2Se5 target were 29.7 nm and 26.0 nm, respectively. Figure 2 shows the AFM and SEM-EDS results of the Bi2Se3 thin films prepared using the Bi2Se3 and Bi2Se5 targets, respectively. As shown in Figure 2a,b, the films mainly consist of triangular pyramids with step-and-terrace features. This is a clear indication that the films are growing along the [0001] direction, which is consistent with the XRD results displayed in Figure 1. The films also exhibit highly smooth surfaces, with the centerline average roughness Ra being ~2.41 nm and ~1.65 nm for films grown from the Bi2Se3 target and from the Bi2Se5 target, respectively. In addition, the films grown from the Bi2Se5 target show clearer step-and-terrace structures with fewer large particle-like outgrowth defects on the surface compared to the film grown from the Bi2Se3 target (see 3D images), indicating that these films are closer to the stoichiometric composition and, thus, less defective.
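As a quick numerical illustration of the two XRD-derived quantities above (not part of the original analysis), a minimal Python sketch is given below. Only the CuKα wavelength of 1.5406 Å is taken from the text; the peak position and FWHM values are hypothetical placeholders chosen to land in the same range as the reported numbers.

```python
import numpy as np

WAVELENGTH = 1.5406  # CuKα wavelength in Å (from the text)

def c_axis_from_00l(two_theta_deg, l):
    """c lattice constant from a (00l) reflection via Bragg's law: c = l*λ / (2 sinθ)."""
    theta = np.radians(two_theta_deg / 2.0)
    return l * WAVELENGTH / (2.0 * np.sin(theta))

def scherrer_size(fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size D = kλ / (β cosθ), with the FWHM β in radians; result in Å."""
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return k * WAVELENGTH / (beta * np.cos(theta))

# Hypothetical parameters for the (006) reflection
two_theta_006 = 18.6   # degrees (placeholder)
fwhm_006 = 0.30        # degrees (placeholder)

c_006 = c_axis_from_00l(two_theta_006, l=6)
d_grain_nm = scherrer_size(fwhm_006, two_theta_006) / 10.0   # Å -> nm
print(f"c from (006): {c_006:.2f} Å, grain size: {d_grain_nm:.1f} nm")
```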
The top-view SEM images displayed in Figure 3a,b further confirm the aforementioned surface morphology. The cross-sectional view images shown at the bottom of Figure 3a,b indicate that the films are rather uniform, with their thickness being in the range of 185~197 nm. Furthermore, as is evident from the EDS results displayed in the insets of Figure 3a,b and the typical EDS spectra of the corresponding thin films shown in Figure 3c, the composition of the film prepared from the Bi2Se3 target clearly showed a substantial Se deficiency of about 4.4 at.%, while the film prepared from the Bi2Se5 target is nearly stoichiometric, which is consistent with the conjectures discussed above. Intuitively, it is rather straightforward to explain why the Bi2Se3 target would lead to a Bi-rich (or Se-deficient) film by recognizing that the re-evaporation of Se from the heated substrate (~350 °C) is much faster than that of Bi, owing to the much higher vapor pressure of Se [9,41]. The present results also suggest that, to obtain stoichiometric Bi2Se3 films, a Se-excessive target is essential. We note that stoichiometric Bi2Se3 and Bi2Te3 films have been shown to exhibit reduced carrier concentration and increased carrier mobility, which led to enhanced thermoelectric properties and provided suitable conditions for investigating the topological surface states [9,30,42].
Nanomechanical Properties
The typical nanoindentation load-displacement curves of the Bi2Se3 thin films deposited on InP(111) substrates are shown in Figure 4a. The hardness and Young's modulus of the Bi2Se3 thin films were calculated from the load-displacement curves [39]; the Poisson's ratio of the Bi2Se3 films is set to 0.25 in this study. Figure 4b,c present the penetration depth dependence of the hardness and Young's modulus obtained using the CSM method. In 2004, Li et al. [15] indicated that the nanoindentation depth should never exceed 30% of the film's thickness. In this work, the CSM technique is applied to record stiffness data along with load and displacement data dynamically, making it possible to calculate the hardness and Young's modulus at every data point and obtain their average values during the indentation experiment [15,39]. The mechanical properties obtained under nanoindentation converge and remain steady within a reasonable tolerance at penetration depths of around 40~60 nm, reflecting that the material properties obtained are intrinsic and that the substrate effect on the hardness and modulus tests of the present thin films is negligible. The obtained values of hardness (H) and Young's modulus (E) are listed in Table 1, together with those reported in the literature for Bi2Se3 single crystals and thin films deposited on sapphire substrates.
From Table 1, it is somewhat surprising to observe that the values of hardness and Young's modulus of the Bi2Se3 thin films are much larger than those of single crystals. The reason for this peculiar observation, especially the very low values for single crystals, is not clear at present. However, by comparing the results for the films, the two prominent mechanical property parameters appear to have intimate correlations with the grain size (D) and surface roughness (Ra). For films grown on an InP(111) substrate, as in the present case, the lattice mismatch between the Bi2Se3 thin films and the substrate is about 0.2% [35], which, in turn, consistently resulted in films with better crystallinity, as indicated by the narrower full width at half maximum of the diffraction peaks, namely ~0.3° for films grown on InP(111) as compared to ~0.5° for the films grown on sapphire substrates [14]. Moreover, comparing the results for the films grown with different targets further indicates that the stoichiometry of the film can play an even more prominent role in determining the mechanical properties. Namely, the hardness and Young's modulus of the stoichiometric Bi2Se3 thin films are both about two times larger than those of the Se-deficient films, which in turn are about two times larger than those of films grown on sapphire substrates. The enhancement of the H and E values can be explained by considering the film crystallinity and surface roughness. It has been reported that the crystallinity of Bi2Se3 thin films deposited on InP(111) substrates was better than that of films deposited on Al2O3 and Si substrates [35]. In general, better film crystallinity often results in superior nanomechanical properties [43,44]. Therefore, compared with those reported in [14], the larger values of hardness and Young's modulus of the present Bi2Se3 thin films could be attributed to their better crystallinity. Furthermore, the film surface roughness can also be an important factor. Jian et al. [45] reported that the nanomechanical properties of ZnO thin films were significantly enhanced as the film surfaces became smoother. Even for AISI 316L stainless steel, the mechanical properties were found to decrease with increasing surface roughness [46]. Since the surface roughness values of the present films are all below 2.41 nm, it is reasonable that this accounts, at least partially, for the enhanced H and E values.
Turning to the deformation behaviors during nanoindentation, it is evident that several pop-ins occur along the loading segment of both load-displacement curves shown in Figure 4a. It is noted that similar phenomena were found in previous studies [13,14], where pop-ins were also observed in nanoindented Bi2Se3 single crystals and thin films, despite the fact that the loads at which the pop-ins took place varied in each individual measurement. Moreover, there is no sign of a reverse discontinuity in the unloading portion of the load-displacement curves (the so-called "pop-out" event). The reverse discontinuity is commonly ascribed to the pressure-induced phase transformation that has been observed in Si or Ge single crystals [47,48]. The absence of such incidences indicates that a pressure-induced phase transition did not occur for the Bi2Se3 films in the pressure range applied in this study. In fact, Yu et al. [49] have reported that the pressure-induced phase transitions in Bi2Se3 occurred at pressures of 35.6 and 81.2 GPa, as revealed, respectively, by Raman spectroscopy and synchrotron XRD experiments conducted in a diamond anvil cell. These values are much higher than the room-temperature hardness of the present hexagonal Bi2Se3 thin films. On the other hand, pop-in behaviors during nanoindentation have been reported previously in other hexagonal structured materials, such as sapphire [50] and ZnO single crystals [51], as well as GaN thin films [52][53][54], using the Berkovich indenter tip. It is generally conceived that the nanoindentation-induced deformation mechanism in these hexagonal-structured materials was primarily dominated by the nucleation and/or propagation of dislocations. Thus, it is plausible to believe that similar mechanisms have been prevailing in the present Bi2Se3 thin films. Reasonably, it can be seen from Table 1 that the hardness of the Bi2Se3 thin films increases as the D value decreases, partially due to grain boundary hardening.
Within the context of the dislocation-mediated deformation scenarios, the first pop-in event may reflect the transition from perfectly elastic to plastic deformation; namely, it marks the onset of plasticity in the Bi2Se3 thin films. Under this circumstance, the corresponding critical shear stress (τmax) under the Berkovich indenter at the indentation load Pc, where the load-displacement discontinuity occurs, can be determined by using the relation given in [55], in which R is the radius of the nanoindenter tip. The obtained τmax values are 1.8 and 3.4 GPa for the Bi2Se3 thin films grown using the Bi2Se3 and Bi2Se5 targets, respectively. The τmax is responsible for the homogeneous dislocation nucleation within the deformation region underneath the indenter tip.
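The relation itself did not survive extraction. A commonly used Hertzian estimate for the maximum shear stress at the pop-in load, which may or may not be the exact expression of [55], is sketched below; the pop-in load, reduced modulus, and tip radius are illustrative assumptions, not values taken from the paper.

```python
import math

def tau_max_hertz(p_c, e_r, r):
    """Maximum shear stress under a spherical tip at the pop-in load p_c
    (Hertzian contact): tau_max = 0.31 * (6 * p_c * E_r**2 / (pi**3 * R**2))**(1/3).
    p_c in N, e_r (reduced modulus) in Pa, r (tip radius) in m; returns Pa."""
    return 0.31 * (6.0 * p_c * e_r**2 / (math.pi**3 * r**2)) ** (1.0 / 3.0)

# Illustrative inputs only (not taken from the paper)
p_c = 0.1e-3          # 0.1 mN pop-in load
e_r = 120e9           # assumed reduced modulus, Pa
r = 1.0e-6            # assumed effective tip radius, m

# -> about 2 GPa with these assumed inputs, the same order as the values quoted above
print(f"tau_max ~ {tau_max_hertz(p_c, e_r, r) / 1e9:.1f} GPa")
```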
Wettability Behavior
The surface wettability of the Bi2Se3 thin films was examined by water contact angle measurements. If the contact angle (θCA) is greater than 90°, the surface is said to be hydrophobic; otherwise it is hydrophilic. In Figure 5, the values of θCA are 80° and 110° for films grown using the Bi2Se3 target and the Bi2Se5 target, respectively.
As described above, the surface roughness measured by AFM indicated that the Bi2Se3 thin film grown using the Bi2Se5 target has a smaller surface roughness, suggesting that the wettability behavior of the surface was significantly affected by the surface morphology of the films [56]. Alternatively, the atomic arrangements and the existence of surface defects might also play a role in the eventual surface energy. In general, the surface wettability is a measure of surface energy and is most commonly quantified by θCA [57]. The surface energy of the Bi2Se3 thin films was calculated by means of the Fowkes-Girifalco-Good (FGG) theory [58]. In the FGG analysis, the critical interaction considered is the dispersive force, or the van der Waals force, across the interface between the water droplet and the solid surface; the FGG equation relates θCA to γs^d and γl^d, the dispersive portions of the surface tension of the solid and liquid surfaces, respectively. By combining Young's equation [56] with the FGG equation, taking deionized water as the testing liquid, and setting γl^d = γl, the Girifalco-Good-Fowkes-Young equation becomes γs^d = γl(cosθCA + 1)/4, where γs^d is the surface energy of the measured material. Using γl = 72.8 mJ/m², the values of surface energy obtained were 21.4 mJ/m² and 11.9 mJ/m² for the films grown with the Bi2Se3 target and the Bi2Se5 target, respectively. The lower surface energy gives rise to higher hydrophobicity. It is noted that the θCA of 110° for the present stoichiometric Bi2Se3 thin films deposited on InP(111) substrates using PLD is even larger than that (θCA ~ 98.4°) of Bi2Se3 thin films deposited on a SrTiO3(111) substrate by MBE [59]. In any case, the present study suggests that both the hydrophobic/hydrophilic transition behavior and the nanomechanical properties of the Bi2Se3 thin films can be manipulated by controlling the target composition.
Figure 5. Contact angle test: images of water droplets on the Bi2Se3 thin film surfaces.
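As a small check (not the authors' code), the surface-energy values quoted above can be reproduced from the contact angles with the equation as given in the text; γl = 72.8 mJ/m² is taken from the text and the rest is straightforward arithmetic.

```python
import math

GAMMA_L = 72.8  # surface tension of water, mJ/m^2 (from the text)

def dispersive_surface_energy(theta_ca_deg):
    """gamma_s^d = gamma_l * (cos(theta_CA) + 1) / 4, as used in the text."""
    return GAMMA_L * (math.cos(math.radians(theta_ca_deg)) + 1.0) / 4.0

for target, theta in [("Bi2Se3 target", 80.0), ("Bi2Se5 target", 110.0)]:
    print(f"{target}: {dispersive_surface_energy(theta):.1f} mJ/m^2")
# -> ~21.4 and ~12.0 mJ/m^2, matching the reported 21.4 and 11.9 mJ/m^2
```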
Conclusions
The present study evidently illustrated that stoichiometry, which can be manipulated by tuning the target composition, can give rise to significant effects on the microstructural, morphological, compositional, nanomechanical, and surface wetting properties of Bi2Se3/InP(111) thin films. The Bi2Se3 thin films were grown using PLD from a stoichiometric Bi2Se3 target and a Se-rich Bi2Se5 target at a substrate temperature of 350 °C in a vacuum with a base pressure of ~4 × 10⁻⁶ Torr. The films were highly (00l)-oriented with smooth surfaces consisting mainly of triangular step-and-terrace structures, which is the common feature of epitaxial Bi2Se3 thin films. Compared to the films grown from the Bi2Se3 target, using the Bi2Se5 target is more favorable for obtaining stoichiometric films with larger hardness and Young's modulus. In addition, the contact angle (surface energy) of the Bi2Se3 films deposited from the Bi2Se3 and Bi2Se5 targets were 80° (21.4 mJ/m²) and 110° (11.9 mJ/m²), respectively. These results suggest that, in addition to the usual factors such as surface roughness and grain morphology, stoichiometry as well as defect chemistry originating from Se deficiency may also play important roles in determining the eventual nanomechanical and wettability properties of Bi2Se3 thin films.
The German EMPATHIC-30 Questionnaire Showed Reliability and Convergent Validity for Use in an Intermediary/General Pediatric Cardiology Unit: A Psychometric Evaluation
Background Family-Centered Care is a useful framework for improving care for hospitalized children with congenital heart disease. The EMpowerment of PArents in THe Intensive Care-30 (EMPATHIC-30) questionnaire is a widely accepted tool to measure parental satisfaction with Family-Centered Care. Psychometric properties of the EMPATHIC-30 have been evaluated in neonatal and pediatric intensive care units, but not in pediatric cardiac care units. Therefore, our aim was to assess the psychometric properties of the German EMPATHIC-30 in an intermediary/general pediatric cardiology unit. Methods We used data from a quality management survey comprising the German EMPATHIC-30, a sociodemographic questionnaire and four general satisfaction items. Data were collected at the intermediary/general pediatric cardiology unit of a specialized heart center in Germany (n = 366). We split the data randomly into two subsets. In the first subset, we assessed internal consistency reliability with McDonald's omega and Cronbach's alpha, and convergent validity using Spearman's rank correlation. Furthermore, we explored the internal structure with Principal Component Analysis (PCA). In the second subset, we validated the resulting structure using Confirmatory Factor Analysis (CFA). Results The reliability estimates exceeded 0.70 for all five domain scores and 0.90 for the full-scale score. Convergent validity between EMPATHIC-30 domain scores/ the full-scale score and the four general satisfaction items was adequate (rs = 0.40–0.74). The PCA suggested three components, accounting for 56.8% of the total variance. Cross-validation via CFA showed poor model fit (χ2 = 1545.78, χ2/df = 3.85, CFI = 0.70, TLI = 0.66, RMSEA = 0.13), indicating that the EMPATHIC-30 shows no clear and generalizable factor structure in this sample. Discussion The German version of the EMPATHIC-30 exhibited reasonable psychometric properties in an intermediary/general pediatric cardiology unit. Follow-up studies should investigate the factor structure of the EMPATHIC-30 in other pediatric inpatient care settings.
INTRODUCTION
Congenital Heart Disease (CHD) is defined as a structural defect of the heart or intrathoracic vessels (1). With a global prevalence of 9.41 per 1,000 births, it represents the most common birth defect worldwide (2,3). In Europe, ~36,000 children are born with a CHD each year and around 28% of them have moderate to complex heart defects requiring interventional or surgical treatment (4). During hospitalization, they are exposed to a myriad of stressors, such as separation from their parents, a stressful environment with bright lights and loud noises, restricted mobility, and disrupted sleep. Research shows that children with CHD are at risk for neurodevelopmental impairment, as well as emotional, social, and behavioral difficulties (5)(6)(7). Distress during hospitalization may contribute to these challenges (7,8). Hence, optimizing the hospital environment is potentially an effective strategy to improve neurodevelopmental and psychosocial outcomes of children with CHD, for which Family-Centered Care (FCC) provides a useful framework (9).
Family-Centered Care is an international standard of healthcare provision based on a mutually beneficial partnership among the healthcare providers, patients, and their families (10,11). In pediatrics, FCC emphasizes the parents as their child's primary source of emotional, social, and developmental support and acknowledges them as integral part of the healthcare team (12). Specific FCC interventions either target the parents (e.g., educational programs, participation of parents in medical rounds), the parent-child dyad (e.g., promoting skin-to-skin contact), or the health-care ecosystem as a whole (e.g., structural implementation of a primary nursing model) (13). Most studies investigating the effects of FCC interventions on child and parent wellbeing have been conducted in Neonatal Intensive Care Units (NICUs), with positive effects reported for physical wellbeing, stress regulation, sleep, and neurodevelopmental outcomes of the child, parent-child attachment, and parental mental wellbeing (14)(15)(16)(17). A meta-analysis of randomized controlled trials showed that FCC interventions improve physical health outcomes in premature infants (e.g., weight gain), while their parents experience less anxiety, depression, and stress (18). Despite the positive effects of FCC interventions in neonatology, studies investigating FCC in children with CHD are scarce. However, several authors argue that FCC practices may be similarly beneficial in this population (19)(20)(21).
Measuring the subjective experience of provided care is crucial for advances in this area of research, especially when FCC principles are not structurally implemented yet (22). In order to measure parent satisfaction with FCC, the EMpowerment of PArents in THe Intensive Care (EMPATHIC) questionnaire is frequently used (23). Latour et al. (24,25) originally developed the questionnaire for Pediatric Intensive Care Units (PICUs), based on expert opinions from over 300 PICU nurses and physicians, as well as over 600 parents of children discharged from a PICU. The original scale comprises 65 items, with each item reflecting care aspects from one of the following five domains: Information, Organization, Parental Participation, Care and Cure, and Professional Attitude (23). The domains were identified in qualitative analyses and evaluated quantitatively, by using Confirmatory Factor Analysis (CFA), with separate models for each domain. The authors subsequently developed a shortened version of the questionnaire, the EMPATHIC-30, to improve user friendliness (26). The number of items was reduced by means of multiple regression analysis, resulting in 30 items. In the past years, the EMPATHIC-30 gained international popularity and has been translated from Dutch into various languages, including English, Spanish, Turkish, and German (27)(28)(29)(30).
In the original publication of the EMPATHIC-30, Latour et al. (26) found high internal consistency reliability estimates for the five domain scores and the full-scale score. Gill et al. (27) tested the questionnaire's psychometric properties in Australian PICUs, NICUs, and general pediatric wards and reported similar values for the internal consistency reliability (27). Above that, the questionnaire showed adequate convergent validity, as assessed by moderate to strong correlations between each of the domain scores and four general satisfaction items, pointing toward applicability of the questionnaire in these care settings. Orive et al. (28) investigated internal consistency reliability and convergent validity of the questionnaire in Spanish PICUs, with similar results. Only few studies have investigated the construct validity of the questionnaire by using factor analysis. Factor analysis is a statistical method to identify latent variables, which explain covariation amongst a set of measured variables (31). It is therefore an essential approach to generate and evaluate hypotheses about the underlying construct an instrument aims to measure (32). Tiryaki et al. (29) investigated psychometric properties of the EMPATHIC-30 in Turkish NICUs and conducted a CFA in a sample of 238 parents. The authors found a moderate model fit of the final factor solution. However, the factor structure was not reported and thus remains unclear. The German version of the EMPATHIC-30 has not been evaluated psychometrically (30). Furthermore, while the EMPATHIC-30 has been extensively evaluated in different care settings, it has not been psychometrically tested for use in pediatric cardiology units. Therefore, our aim was to evaluate the psychometric properties, specifically internal consistency reliability, convergent validity, and factor structure of the German EMPATHIC-30 at an intermediary/general pediatric cardiology unit. In order to assess internal consistency reliability, we used McDonald's omega. Although controversially discussed in the literature, we additionally present the classical Cronbach's alpha, to allow for direct comparison to other studies (33,34). To assess convergent validity, we investigated the relationship between the domain scores and the full-scale score with four general satisfaction items, comparable to the methodology of above-mentioned studies. Furthermore, we investigated the factor structure of the questionnaire, following a two-step procedure. In the first step, we explored the internal structure of the questionnaire on half of the data using Principal Component Analysis (PCA) rather than Exploratory Factor Analysis (EFA). While both PCA and EFA are variable reduction techniques, EFA assumes an underlying construct, which is not measured directly, and PCA reflects a linear combination of variables. We used PCA to explore the internal structure of the questionnaire, because our focus was to explore the structure in total item variance including error, without making assumptions on latent constructs, as these were unknown for the current context (35). In a second step, we used three separate CFA on the other half of the data: The first CFA was conducted to validate the structure resulting from the PCA. The second CFA was conducted to investigate a one-component solution, motivated by potential unidimensionality of the scale. The third CFA was conducted to investigate a five-component solution motivated by the five domains of the EMPATHIC-30.
Study Design and Setting
For the psychometric evaluation of the EMPATHIC-30 questionnaire, we used data from a quality management survey comprising the German EMPATHIC-30, a sociodemographic questionnaire, four general satisfaction items and open commentary fields. Data were collected at the intermediary/general pediatric cardiology unit of the German Heart Center Berlin. With its 24 monitored beds and 1,200 yearly admissions, the unit provides specialized care to patients of all ages, ranging from infants to adults, with varying degrees of CHD. This study was approved by the Medical Ethics Committee Charité Virchow (Nr EA2/032/20).
Procedures
All parents of children with CHD hospitalized at the ward were invited to participate in the quality management survey. Participation was voluntary and anonymous. At discharge, doctors handed out a paper and pencil version of the survey together with a return envelope. After completing the survey, parents returned it in a mailbox on the ward. Data collection took place between August 2019 and June 2021.
Materials
The German EMPATHIC-30 questionnaire comprises 30 statements spanning five domains: Information (5 items), Organization (5 items), Parental Participation (6 items), Care and Cure (8 items), and Professional Attitude (6 items). Every statement is rated on a six-point scale ranging from 1 "certainly no" to 6 "certainly yes," or rated 0 for the answer alternative "not applicable." Sociodemographic information was obtained through a purpose-designed questionnaire. It contains one item to specify the respondent (with the options "mother," "father," "both mother and father," and "other relatives" with the option of open-ended specification), as well as items relating to the age of the child, place of birth and mother tongue of the parents, length of hospital stay, type of and reason for admission, and undertaken medical procedures.
Four general satisfaction items were included in the survey. Two items are rated on the same six-point scale as the EMPATHIC-30 questionnaire: "We would recommend this unit or ward" and "We would be happy to return to this unit or ward". Two more items are rated on a ten-point scale, ranging from "very bad" to "excellent": "Overall performance of doctors" and "Overall performance of nurses" (23). Furthermore, commentary fields were included in the survey about general experiences made during admission, hospital stay, and discharge.
Statistical Analyses
Statistical analyses were carried out using SPSS 27 (SPSS Inc, Chicago, Illinois). Non-linear and linear PCA were conducted in SPSS. AMOS, an SPSS extension module, was used for the CFA.
Handling of Answer Alternative "Not Applicable"
Non-linear Principal Component Analysis (CATPCA) was performed to determine the best linear replacement values for observed scores in each item individually, for the scores 0 up to 6 (0 corresponding to the answer alternative "not applicable") (36). Based on transformation plots from nominal optimal scaling, the scores 0 and 6 got assigned a similar quantification; both answer categories had an equivalent interpretation by participants. This was consistent with previous findings by Latour et al. (23).
Scores on the answer category "not applicable" were therefore recoded to the highest value of the scale (i.e., 6). In addition, the transformation plots revealed that the answer categories functioned as near-equally spaced linear scale; models with nominal transformation and with numerical transformation after recoding yielded 0.8% difference in explained variance. All subsequent linear analyses were performed using the recoded scores.
Handling of Missing Data
Returned questionnaires with ≥75% of items missing were excluded from analysis. One third of respondents presented at least one missing value and the total percentage of missing data points was 2.3%. Missing data can affect the estimation and interpretation of PCA (37). Little's Missing Completely at Random (MCAR) test was significant, indicating that the missing values are not missing completely at random and thus that there is a potentially systematic difference between missing and observed values (38). Therefore, multiple imputation, a proven statistical method to estimate missing values, was used on the recoded scores. Missing scores were estimated in 25 sets, applying Markov Chain Monte Carlo sampling and predictive mean matching (39). Results of the statistical analyses were pooled over the imputed data sets whenever possible.
Data Split for Separate Estimation and Validation
The data set was randomly split in half, creating two subsets (A, B) to perform 2-fold cross-validation. All statistical structure and content analyses were performed on set A. Set B was used only as validation set for the confirmatory evaluation of the internal structure via CFA.
Descriptive Statistics
Descriptive statistics of the EMPATHIC-30 scores as well as sociodemographic characteristics of the sample are reported (means and standard deviations for quantitative variables, absolute frequencies and percentages for categorical variables).
To check for successful randomization, descriptive statistics for the full set, analysis set A, and validation set B, as well as test statistics for the comparison between set A and B are provided.
Internal Consistency Reliability
The internal consistency reliability of the German EMPATHIC-30 questionnaire on domain and full-scale level was assessed with McDonald's omega. Cronbach's alpha was computed additionally. Values greater than 0.70, 0.80, and 0.90 reflect acceptable, good, and excellent reliability, respectively (40).
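For readers who want to reproduce this step, a minimal sketch of the classical Cronbach's alpha computation is given below (McDonald's omega additionally requires the loadings of a one-factor model and is omitted here). The array shape and values are hypothetical, not the study data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores for one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses to a five-item domain (e.g., Organization) from 8 parents
rng = np.random.default_rng(0)
base = rng.integers(4, 7, size=(8, 1))                         # common satisfaction level
scores = np.clip(base + rng.integers(-1, 2, size=(8, 5)), 1, 6)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```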
Convergent Validity
To examine the convergent validity of the questionnaire, we used Spearman's rank correlation test for non-normally distributed data, as assessed visually and through significant Shapiro-Wilk tests (p < 0.01). We assessed the relationship between the domain scores/the full-scale score and the four overall satisfaction statements. Based on findings from other validation studies, we expected moderate to strong correlation coefficients, ranging from 0.40 up to 0.79, indicating adequate convergent validity (41).
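A minimal illustration of this check (not the authors' code) using SciPy is shown below; the two vectors are hypothetical stand-ins for a domain score and one satisfaction item.

```python
import numpy as np
from scipy.stats import spearmanr, shapiro

domain_score = np.array([5.2, 4.8, 6.0, 5.6, 3.9, 5.0, 5.8, 4.4])   # hypothetical
satisfaction = np.array([6, 5, 6, 6, 4, 5, 6, 4])                    # hypothetical

w, p_norm = shapiro(domain_score)          # normality check motivating Spearman
rho, p = spearmanr(domain_score, satisfaction)
print(f"Shapiro-Wilk p = {p_norm:.3f}, Spearman rho = {rho:.2f}, p = {p:.3f}")
```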
Internal Structure

Principal Component Analysis
We conducted a PCA to explore the internal structure of the questionnaire. An oblique rotation should be applied, which reorients the components in order to simplify the mathematical model and interpretation by allowing for intercorrelations between the components. However, this rotation is not implemented for multiply imputed data. Therefore, we conducted a two-step procedure. First, we performed a PCA on the unimputed data set A to determine the number of components. Pairwise deletion was selected to handle missing values. The suitability of the data was assessed with the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity. In this exploratory stage, the KMO value is interpreted as an approximation of the ratio of potential common variance compared to the total variance in the data and thus provides information if subsequent factor analysis is suitable. Final component extraction was based on the combined Monte Carlo Parallel Analysis and examination of the scree plot (42). Oblique rotation allowing for intercorrelations between the components was applied in this step. For items with cross-loadings, the component on which the item loaded higher was selected. Loadings under 0.30 (<10% shared variance between item and component) were considered as negligible and therefore not considered for inclusion in the component structure. Second, we used the results of this PCA to motivate the number of components in a second PCA on the imputed data set A by using Generalized Procrustes Analyses in the subroutine by Wingerde et al. (43). This subroutine imposed a pre-specified number of components and orthogonal rotation of the component loadings, ignoring intercorrelations between the components.
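To make the component-retention step concrete, a bare-bones Horn's parallel analysis can be sketched as follows (this is not the SPSS routine used in the study); the data matrix is a hypothetical placeholder with three built-in components.

```python
import numpy as np

def parallel_analysis(data, n_sim=1000, percentile=95, seed=0):
    """Retain components whose observed eigenvalue exceeds the chosen percentile
    of eigenvalues obtained from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eig = np.empty((n_sim, p))
    for i in range(n_sim):
        sim = rng.normal(size=(n, p))
        sim_eig[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    threshold = np.percentile(sim_eig, percentile, axis=0)
    return int(np.sum(obs_eig > threshold)), obs_eig, threshold

# Hypothetical item responses: 183 respondents x 30 items with 3 latent components
rng = np.random.default_rng(1)
latent = rng.normal(size=(183, 3))
loadings = rng.normal(size=(3, 30))
items = latent @ loadings + rng.normal(scale=2.0, size=(183, 30))

n_components, _, _ = parallel_analysis(items)
print("Components to retain:", n_components)
```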
Confirmatory Factor Analysis
We conducted three separate factor analyses on set B of the sample. First, we conducted a CFA to validate the component structure resulting from the two-step PCA. Second, we conducted a CFA based on a one-component model to investigate potential unidimensionality of the questionnaire. Third, we conducted a CFA based on a five-component model to investigate the validity of the five domains of the EMPATHIC-30 (Information, Organization, Parental Participation, Care and Cure, and Professional Attitude). In the CFA measurement models, correlation between the components is allowed. As combining the results of multiply imputed data is not possible in AMOS, we conducted the analyses on the data with missing values using Full Information Maximum Likelihood Estimation and compared model estimates for robustness. To assess model fit, we used the following fit indices: model-Chi-squared test divided by the degrees of freedom (χ²/df), Comparative Fit Index (CFI), Tucker Lewis Index (TLI), and Root Mean Square Error of Approximation (RMSEA). Cut-off values were: χ²/df < 3, CFI of at least 0.90, TLI of at least 0.95, and RMSEA < 0.08 (44,45). A second evaluation of robustness of findings was performed by repeating the same analyses on the other half of the data (set A).
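Since these indices are simple functions of the model and baseline chi-square statistics, a small helper (ours, not AMOS output) can illustrate how they are obtained. The model chi-square below is the value reported for this study; the model degrees of freedom, baseline chi-square, and baseline degrees of freedom are assumed for illustration.

```python
def fit_indices(chi2_m, df_m, chi2_0, df_0, n):
    """CFI, TLI and RMSEA from model (m) and null/baseline (0) chi-square values."""
    cfi = 1.0 - max(chi2_m - df_m, 0.0) / max(chi2_0 - df_0, chi2_m - df_m, 1e-12)
    tli = ((chi2_0 / df_0) - (chi2_m / df_m)) / ((chi2_0 / df_0) - 1.0)
    rmsea = (max(chi2_m - df_m, 0.0) / (df_m * (n - 1))) ** 0.5
    return cfi, tli, rmsea

# chi2_m = 1545.78 is the reported value; df_m = 402 (30-item, 3-factor model),
# chi2_0 = 4200 and df_0 = 435 are assumed baseline values for illustration only.
cfi, tli, rmsea = fit_indices(chi2_m=1545.78, df_m=402, chi2_0=4200.0, df_0=435, n=183)
print(f"chi2/df = {1545.78 / 402:.2f}, CFI = {cfi:.2f}, TLI = {tli:.2f}, RMSEA = {rmsea:.3f}")
```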
RESULTS
A total of 475 questionnaires were returned between August 2019 and June 2021. The response rate was 68% (percentage of returned questionnaires vs. distributed copies). To ensure homogeneity of the data set, we only included questionnaires filled out by parents. As a result, we excluded 91 questionnaires filled out by adult patients, as well as nine questionnaires filled out by relatives other than parents. Upon first exploration of data, we excluded three more questionnaires with comments in the commentary fields reflecting very high satisfaction, but with lowest possible scores on EMPATHIC-30 items, potentially indicating a mix up between highest and lowest scores. Above that, we excluded six questionnaires with ≥75% missing items. The final number of questionnaires included in the analysis was 366, resulting in 183 questionnaires each for analysis set A and validation set B.
Descriptive Statistics
The child and parent characteristics are presented in Table 1. No significant differences between set A and set B were observed for any of the characteristics, except for the item specifying the respondent, for which a significant shift from mother-only to both parents was seen (χ²(2, N = 356) = 8.17, p = 0.017). As the proportion of mothers giving their input does not differ between the two sets, we view this difference as negligible. Therefore, we consider the reported characteristics of each set representative for the whole group. Below, we present the characteristics of set A, as this set drives the main psychometric analysis. Most children of participating families were either infants (n = 53, 29.6%), toddlers (n = 30, 16.8%) or preschoolers (n = 39, 21.8%), and the mean age was 5.32 years (SD = 6.63). Seventy-six percent of the questionnaires were completed by mothers. The majority of participants were born in Germany (n = 166, 91.2%) and native German speakers (n = 148, 83.1%). Only 7% of hospital admissions were unexpected and the mean length of hospital stay was 6.32 days (SD = 8.86), ranging from 1 to 105 days.
Parents gave high ratings on the EMPATHIC-30 and all except four items showed mean scores above 5 ( Table 2). On the domain level, mean scores ranged from 5.19 (SD = 0.84) for the domain Organization up to 5.45 (SD = 0.76) for the domain Professional Attitude. The "not applicable" response was given most frequently for the item "The unit could easily be reached by telephone" (n = 42, 23%).
Internal Consistency Reliability
McDonald's omega on the domain level ranged from 0.75 (Organization) to 0.87 (Professional Attitude; Care and Cure) and reached 0.95 for the full-scale. Cronbach's alpha on the domain level was only slightly lower and ranged from 0.73 (Organization) to 0.85 (Professional Attitude). The findings are presented in Table 3.
Convergent Validity
As shown in Table 4, the correlations between the EMPATHIC-30 domain scores and scores on the four overall satisfaction statements ranged from rs(183) = 0.40, p < 0.01, between the domain Organization and the satisfaction statement "Readmission to ward," to rs(183) = 0.68, p < 0.01, between the domain Care and Cure and the satisfaction statement "Overall rating doctors." The lowest correlations were found for the domain Organization, with correlations under 0.50 for all satisfaction statements. Similarly, the correlations between the full-scale score and scores on the four overall satisfaction statements ranged from rs(183) = 0.62, p < 0.01, for the statement "Readmission to ward" to rs(183) = 0.74, p < 0.01, for the statement "Overall rating doctors." All correlations were significant and moderate to high, according to expectation. For an overview of correlations between the domain scores, see Table 5. All correlations in Tables 4 and 5 are significant (p < 0.01), two-tailed.
Principal Component Analysis
For the first PCA on unimputed data, sampling adequacy was ascertained by a KMO value of 0.89 and a significant Bartlett's test of sphericity (χ² = 3734.43, p < 0.01). The comparison of empirical data to simulated random data through Monte Carlo Parallel Analysis suggested a three-component solution, accounting for 56.8% of the total variance.
Confirmatory Factor Analysis
The first CFA was conducted to validate the fit of the three-component solution. As the results of both alternative PCAs showed a comparable three-component solution, we chose to start with a CFA model based on the results from the second PCA (on imputed data with orthogonal rotation). However, this model showed a poor fit to the data. Model fit statistics for the respective CFA models are summarized in Table 6. The variance explained by the factors for each CFA model is presented in Table 7. To eliminate lack of power or collateral bias between sets A and B as a potential cause of the current results, we repeated the same analyses on the other half of the data set. These analyses yielded equivalent results, supporting the robustness of our findings.
DISCUSSION
In this study, we evaluated the psychometric characteristics of the German version of the EMPATHIC-30 for use in intermediary/general pediatric cardiology units. Furthermore, we extended the psychometric assessment in comparison to previous studies by evaluating the internal structure of the questionnaire in this care setting.
On average, parents gave high ratings for their satisfaction with FCC. The McDonald's omega values in our study indicated acceptable to good reliability for the items within the five domains and excellent reliability for the full-scale score. These values are consistent with the findings of other EMPATHIC-30 studies (26)(27)(28)(29). We found adequate convergent validity, as shown by moderate to strong correlations between the five domain scores/the full-scale score and the four general satisfaction items. Our results fall in line with previous publications reporting correlation coefficients of the same order of magnitude (23,27,28). Future studies should extend these findings by investigating convergent validity with more elaborate methodology, such as the use of other standardized instruments measuring parent satisfaction with care, as well as by incorporating assessments of discriminant validity.
We used PCA to assess the internal structure of the German version of the EMPATHIC-30. The analyses from the first PCA revealed a three-component structure with an explained variance of over 50%. The first component explains over 40%, which supports the unidimensionality of the scale and may indicate that the questionnaire adequately measures the construct of interest (satisfaction with FCC) in our population. The three-component structure resulting from the first PCA (conducted on complete case data and allowing for intercorrelations between components) is very similar to the three-component structure resulting from the second PCA (conducted on imputed data, ignoring intercorrelations between components): only two out of 30 items load differently. Considering that the correlations among the components were close to negligible in the first PCA, rotation seems to have a minor impact on the interpretation of the internal structure, which may not be true for missing data (37). Therefore, we are inclined to view the three-component structure resulting from the second PCA as the best approximation of the questionnaire's internal structure in our sample. Although the three-component solution differs from the expected five-component structure, it is plausible and interpretable. Based on the semantic content of the respective items, we label the first component "Perception and respect of the family's needs," the second component "Involvement of and collaboration with the parents," and the third "Communication and organization." However, despite the interpretability of the three components, the cross-validation of the three-component solution via CFA resulted in poor fit indices. Model revisions did not significantly improve the model fit. A one-component solution to test for unidimensionality also showed a poor fit to the real data. Although the first component captures over 40% of the total variance in the PCA, the true score variance seems to be relatively small compared to the random error variance. Additionally, we validated the five-component solution based on the original domains of the EMPATHIC-30, which indicated a poor fit to the real data. Given the poor model fit indices, all tested component models seem to be an oversimplification of the true structure of the questionnaire.
Our findings suggest that the EMPATHIC-30 has no clear and generalizable factor structure in our population. The ambiguous internal structure found in our study needs to be interpreted in light of the construction of the EMPATHIC questionnaires. In the original publication of the EMPATHIC-65, the five domains were defined during expert group sessions, and item groupings into the respective domains were performed on a consensus basis (24).
While the authors used CFA to evaluate the unidimensionality of each domain (assessing whether the items within every domain measured the same construct), they did not evaluate the underlying factor structure of the questionnaire (23). For the development of the shortened EMPATHIC-30 questionnaire, multiple regression analysis was used to evaluate the statistical performance of the items, which might explain the divergence between the conceptual and the data-driven structure of the questionnaire (26). Furthermore, scores on the EMPATHIC-30 were high on average, with relatively small standard deviations. Accordingly, the parents in our sample were highly satisfied, and the limited variation may contribute to the unclear factor structure. Still, our data showed sufficient true score variation to find three interpretable dimensions. The non-zero but not very high correlations between domain scores support this claim rather than a truly unidimensional structure. Replication studies may shed light on the question of whether the unclear factor structure is sample specific. For instance, individual characteristics may influence interpretation of the items and, subsequently, the way items divide into latent factors. Investigating the data-driven internal structure versus the theoretically postulated structure by conducting studies in different cultural settings and (sub)populations may therefore be an interesting avenue for follow-up research. While we did not find strong support for the five-factor structure, we consider the domains informative, especially as they were thoroughly developed through expert panels. Nevertheless, FCC is a multi-faceted construct, and more conceptual work is needed to reconcile expert consensus on the one hand with the unclear factorial structure on the other, especially given that the questionnaire assesses subjective experience, as opposed to objective criteria for FCC.
Our study has some limitations. This is an analysis of quality management data from a single intermediary/general pediatric cardiology unit. Participation of other pediatric cardiology centers would allow for a more robust interpretation of the results, and in a prospective study design, additional measurements should be included for psychometric evaluation, specifically allowing for an assessment of discriminant validity. Furthermore, based on our results, differential analyses considering population characteristics like age range, duration of stay, and complexity of disease may be important to further increase our insight into the internal structure of the questionnaire.
To sum up, the German EMPATHIC-30 has no clear and simple factor structure in our population, while showing adequate reliability and convergent validity as assessed with four general satisfaction items. Accordingly, the EMPATHIC-30 is a suitable instrument to measure FCC in intermediary/general pediatric cardiology wards. However, follow-up studies are needed to further investigate the factor structure of the questionnaire. To our knowledge, this is the first study to assess psychometric properties of a standardized assessment of satisfaction with FCC in this population. Identifying care aspects that need to be improved during hospitalization is crucial in order to meet the developmental needs of children with CHD.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Medical Ethics Committee Charité Virchow (Nr EA2/032/20). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
High-Throughput Stool Metaproteomics: Method and Application to Human Specimens
Widely available technologies based on DNA sequencing have been used to describe the kinds of microbes that might correlate with health and disease. However, mechanistic insights might be best achieved through careful study of the dynamic proteins at the interface between the foods we eat, our microbes, and ourselves. Mass spectrometry-based proteomics has the potential to revolutionize our understanding of this complex system, but its application to clinical studies has been hampered by low-throughput and laborious experimentation pipelines. In response, we developed SHT-Pro, the first high-throughput pipeline designed to rapidly handle large stool sample sets. With it, a single researcher can process over one hundred stool samples per week for mass spectrometry analysis, conservatively approximately 10× to 100× faster than previous methods, depending on whether isobaric labeling is used or not. Since SHT-Pro is fairly simple to implement using commercially available reagents, it should be easily adaptable to large-scale clinical studies.
Stool is a complex biological specimen containing host, microbe, and dietary proteins, among a rich array of biomolecules. The broad proteinaceous representation of relevant biological entities and interactions, in conjunction with noninvasive sample collection, makes stool ideal for studying the complex ecosystem at the host-gut microbe interface (3). Microbiome composition can be readily determined using 16S rRNA gene sequencing from stool DNA and, due to its high-throughput nature, is well-suited for surveying a single individual over extended longitudinal time courses. Metagenomic and metatranscriptomic sequencing technologies can elucidate microbes' functional capacities and states (4). However, additional measurements are needed to elucidate the microbiome-host interactions that can profoundly affect host health.
Stool proteomics offers the ability to simultaneously measure both host-and microbe-expressed proteins, their posttranslational modifications, and the dietary components also present in the gut. These components reflect interactions and physiological states that are otherwise difficult to survey through nucleic acid sequencing alone (5). We previously showed that host proteins in stool reflect expression along the length of the gut and reveal signatures specific to the type of inflammatory state, such as distinct levels of antimicrobial proteins. Importantly, these signatures can vary in a manner distinct from that of the gut microbiota (5). For example, we showed previously that fecal microbiota transplanted into an antibiotic-induced Clostridium difficile infection mouse model normalized the microbial composition but not the host stool proteomic profile. Since proteins can be recovered from archived frozen stool samples, the approach offers a way to illuminate aspects of host mucosal biology noninvasively and longitudinally, long after stool collection.
Despite its functional utility, stool-based metaproteomics remains underutilized compared to the aforementioned next-generation sequencing technologies. One major hindrance to broader implementation has been low sample processing throughput. Indeed, while we and others previously demonstrated the power and utility of stool proteomics, those studies relied on data generated at rates as low as 10 to 30 samples per week (6)(7)(8)(9). Additionally, workflows developed for processing cell culture and tissue lysates are not optimized to eliminate contaminating molecules that are abundant in stool. Insufficient contaminant removal can lead to instrumentation downtime, decreased overall sample throughput, and experimental noise that dilutes biologically relevant signals.
Here, we describe a method, the Stool High-Throughput Proteomics pipeline (SHT-Pro), that increases our ability to acquire high-quality metaproteomic stool analyses by as much as 100-fold when paired with multiplexing technologies such as tandem mass tag (TMT) labeling. As a first demonstration of this method, we applied SHT-Pro to 145 stool specimens longitudinally collected from 29 human participants as part of an ongoing dietary intervention study investigating the biological effects of diets enriched in fiber versus fermented foods (ClinicalTrials.gov registration no. NCT03275662). Processing them in duplicate (290 total samples) using SHT-Pro took approximately 1 week from stool to mass spectrometry-ready peptides, an estimated time savings of over 2.5 months compared to our previously published workflow (5,6). The resulting data set identified over 5,600 unique host and microbial proteins, 45% of which were shared between both study groups. We found that the number of proteins that significantly differed between the two groups increased over time, indicating that diet shapes the stool metaproteome of humans. We further demonstrate that the inclusion of more participants in metaproteomic analyses, in a fashion that this method enables, enhances the ability to classify study subjects compared to the smaller-scale data sets that were more feasible using prior methods. These data support SHT-Pro as overcoming a major hindrance for performing the kinds of clinical-scale studies needed for statistically sound measurements of diet and its impact on the host and its gut microbiome.
RESULTS
SHT-Pro increases sample processing speed with a high degree of reproducibility. A major limitation to large-scale adoption of stool metaproteomics has been its heavily reduced throughput compared to that of DNA sequencing. Considering the labor-intensive, multiday nature of our previously published stool proteomics method, we found that one researcher could reasonably process 25 stool samples per week on average (6,8). To narrow this gap, we developed our pipeline to rapidly process hundreds of stool samples (Fig. 1A) in a matter of days while maximizing liquid chromatography and mass spectrometry (LC-MS) instrumentation stability. Ninety-six-well protein trap columns (Protifi S-trap) are robust to a wide range of protein/trypsin ratios, making them suitable for stool specimens with various protein contents. Using them for initial protein purification and digestion, along with automation technologies for solid-phase extraction cleanup, are two critical components of this added efficiency. To test the time savings of the new method, we compared the sample processing time of our previously published workflow to that of SHT-Pro (Fig. 1B). While processing fewer than 10 samples at a time does not result in substantial time savings (approximately 2.5 to 3 days saved), larger sample sets benefit from dramatic time savings. For example, processing 96 samples (a single 96-well plate) takes as little as 1.5 days using SHT-Pro, compared to approximately 30 days using the previous protocol (an approximately 20-fold decrease).
The sample processing speed improvements SHT-Pro provides would be of little value without effective contaminant removal. To evaluate the kind of contamination-dependent analytical degradation that can occur over time, we repeatedly injected a single SHT-Pro-processed stool specimen into our mass spectrometer 20 times. Four analyses of a standard complex peptide mixture were interspersed throughout these stool LC-MS analyses: one prior to all stool LC-MS analyses, two spaced 10 stool analyses apart, and one following 20 stool analyses. We observed no substantial degradation of LC-MS performance as measured from search results of the standard peptide mixture (7,205 ± 60 unique peptides) versus four LC-MS analyses of the standard mixture on a new analytical column (average of 7,350 ± 150 unique peptides). In contrast, we observed a 30% decrease in peptide spectral matches (PSMs) and a 27% decrease in peptide identifications in our standard peptide mix using our previous method over a similar number of injections (n = 16 injections) (see Fig. S1A in the supplemental material). While sample purity and mass spectrometer performance are also responsive to other factors, such as desalting protocols, the amount of sample loaded onto the column, and instrument type, these results suggest that peptides resulting from the SHT-Pro pipeline are not substantially contaminated in a way that impairs sensitive LC-MS equipment.
We next tested whether our preparation method also led to high reproducibility. To accomplish this, we aliquoted the stool specimen described above in various amounts (50, 100, and 200 mg) and processed each aliquot by SHT-Pro in duplicate (Fig. S1C). Starting material amounts were chosen based on our previous protocol as well as what we have found to be generally available from human clinical studies. We note, however, that we did not test the lower limit of initial starting material needed for SHT-Pro or attempt to control for the large amount of variation found in stool sample consistency. These analyses identified 11,373 unique peptides originating from 1,879 (1,152 microbial, 727 host) proteins, averaging 1,791 proteins per sample. Over 85% of all proteins were identified from all preparative replicates and starting material amounts, suggesting a low degree of sample preparation bias (Fig. S2A). All input protein amounts produced strong linear correlations (R² values of 0.85 for 50 mg, 0.92 for 100 mg, and 0.91 for 200 mg) (Fig. 1E), suggesting that approximately 100 mg of starting material is sufficient for technical reproducibility. Similarly, comparing the intensity of proteins found in a replicate of the 50-mg samples to those of the 200-mg samples yielded an R² value of 0.86 (Fig. S2B). As expected, these preparative replication correlation values were lower than the correlations between technical replicate LC-MS samplings from the same SHT-Pro-prepared peptide mixture (R² = 0.99 ± 0.001, n = 6 pairs). These data suggest a high degree of sample-to-sample processing fidelity.
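As a point of reference for how such replicate agreement can be computed, the following sketch derives an R² value from two preparative replicates; whether the reported values were computed on raw or log2-transformed intensities is not stated above, so the log2 step here is an assumption, and the variable names in the usage comment are hypothetical.

```python
import numpy as np

def replicate_r_squared(rep_a: np.ndarray, rep_b: np.ndarray) -> float:
    """Squared Pearson correlation between two replicate preparations,
    restricted to proteins quantified (nonzero) in both replicates."""
    both = (rep_a > 0) & (rep_b > 0)
    r = np.corrcoef(np.log2(rep_a[both]), np.log2(rep_b[both]))[0, 1]
    return float(r ** 2)

# Hypothetical usage: rep_a and rep_b are per-protein intensity vectors
# from the two 100-mg preparative replicates.
# print(replicate_r_squared(rep_a, rep_b))
```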
We next examined how new procedural components of SHT-Pro compared to the lower-throughput aspects of our previously published workflow, specifically bead beating versus vortexing and S-trap protein isolation versus trichloroacetic acid (TCA) precipitation combined with SDS-PAGE. We evaluated the number of overall identifications made with two variations of each method, using parallel aliquots of the same stool sample described in Fig. 1A (Table 1 describes the sample configurations) (Fig. S1D). Combined, the four measurements yielded 2,352 total protein identifications. Both samples processed with S-trap protein isolation and digestion had numbers of protein identifications similar to those of the SHT-Pro pilot detailed above (average of 1,610 ± 66). Samples processed with TCA precipitation and SDS-PAGE purification yielded fewer protein identifications (average of 1,255 ± 65). Of note, samples processed with the complete SHT-Pro yielded the greatest number of identifications (1,657) (Fig. 1D). Comparing proteins found in this SHT-Pro sample to those from the initial pilot yielded an R² value of 0.67 ± 0.014, despite the samples originating from two different preparative replicates thawed and processed months apart.
The ratios of microbe-to-host protein identifications were slightly higher for SHT-Pro-prepared samples (mean, 1.8) than for our previous workflow (mean, 1.5). Despite this larger proportion of microbial protein identifications, we found that the number of host proteins identified only with SHT-Pro (595, config. A) was substantially greater than the number of host proteins identified with our previous workflow (467, config. B). As with the experiment described in Fig. S1C, the largest subset of proteins (1,000, 43% of all proteins) was present in all samples, regardless of preparation pipeline (Fig. 1E). The next largest unique protein set (248) included those shared only by the two S-trap-prepared samples (config. A and C) and not identified in the SDS-PAGE preparations (config. B and D). Only 32 proteins were found solely in SDS-PAGE-prepared samples (B and D). We attribute the decreased overlap (43% versus 85%) of this data set compared to the dilution-series sample set (Fig. S2A) to the differing sample preparation conditions of each experiment. The two samples that included SDS-PAGE were more alike in their proteomic profile than the two samples that were processed with the S-trap (config. A and C) (Fig. 1F). This is likely because config. C used TCA precipitation prior to S-trap processing while config. A did not, which may enrich for specific protein subsets.
Given the improved speed, reproducibility, and sensitivity we observed with SHT-Pro, we next tested whether this method tended to identify proteins with biological relevance to the gut environment. We subjected the 100 most abundant proteins to gene ontology enrichment analysis using ShinyGO (10). This revealed that the source stool specimen was significantly (false discovery rate [FDR] of <0.05) enriched for antimicrobial activity, neutrophil activation markers, and increased protease activity (Fig. S3). Given that the specimen set originated from a patient with inflammatory bowel disease (IBD) during a flare, this result agrees with observations we (11) and others (12) have previously made and supports SHT-Pro's ability to produce biologically relevant information. [Fragment of the Fig. 1 legend: (E) shared-subset plot of all four samples; (F) PCA comparing the four SHT-Pro and previous-workflow samples described in Fig. S1B, with protein abundances normalized and log2-scaled prior to analysis; see Table 1 for full experimental conditions.]
Application of SHT-Pro to a longitudinal human diet study. The advances in throughput, coupled with high reproducibility and the ability to reveal biologically relevant gastrointestinal response pathways, make SHT-Pro amenable to large sample sets that were previously impractical to process. To demonstrate SHT-Pro's utility on a large, longitudinal data set, we applied it to samples collected from an ongoing dietary intervention study (ClinicalTrials.gov registration no. NCT03275662). The overarching goal of this study is to elucidate how diets enriched in high fiber (e.g., whole grains, legumes, and fruits) or in fermented (e.g., kombucha, kimchee, and yogurt) foods affect human health. Over the course of 4 months, study participants increased their intake of one of the two dietary intervention arms, and stool specimens were collected every 2 weeks over four phases: baseline, ramp-up (increasing intake), maintenance (peak intake), and choice (choosing to eat the respective diet or not). We selected a subset of patients (n = 29) for metaproteomic analysis based on sample availability during the baseline (two samples), ramp-up (single sample), and maintenance (two samples) phases, for a total of five samples from each person over this period. The resulting 145 stool specimens were processed in duplicate, for a total of 290 stool measurements (Fig. 2A). Digested peptides resulting from initial processing with SHT-Pro were chemically labeled with tandem mass tag multiplexing (TMT-11plex) labels to increase throughput and quantifiability. Each TMT-11plex set contained one subject's full time course. Once all 290 stool samples were transferred to four 96-well plates, they were processed over the course of <9 days, including approximately 2.5 days devoted to TMT labeling and the associated cleanup steps that follow labeling. A single researcher carried out 80% of these steps (Fig. 2B). This sample set resulted in the identification of 83,061 high-confidence (q < 0.01) peptides (16,463 unique) assigned to 5,679 protein families (Fig. 2C and Tables S1 and S2). Of these, approximately 94% (5,372) of identified proteins originated from microbes, with a much smaller host protein set (307). We found that a large group of proteins (2,611, 46%) was shared by participants in both groups, while 54% (3,062) of identified proteins were found within just one dietary subgroup (fermented, 1,361; fiber, 1,701) (Fig. 2D). Replicate stool preparations were highly correlated (average R² of >0.995 for both groups), with only 2 of 145 replicate pairs receiving an R² value of <0.90 (0.81 and 0.57), confirming a high degree of overall preparative reproducibility (Fig. 2E).
Having established the stability of SHT-Pro, we next focused on how microbial and host proteins compositionally contributed to the data set at a high level. Despite comprising just 8% of all proteins identified, host-expressed proteins claimed much larger proportions of overall protein abundance (fermented, 25%; fiber, 28%), with an average host protein intensity approximately 67% (fermented, 72%; fiber, 63%) greater than that of their microbial counterparts (Fig. 2F and G). At the level of individual study participants, the fermented group had an average of 805 proteins in each sample, while the fiber group had 870 proteins (Fig. S4C). This difference was not significant (P = 0.19 by unpaired t test), suggesting that similar numbers of proteins were identified in both groups despite the differences in diet. Together, these data demonstrate that the SHT-Pro workflow yields quantitatively consistent metaproteomic measurements when used with TMT labels or label-free quantification.
SHT-Pro highlights the presence of a diet-responding proteomic subset. Having shown the efficacy of SHT-Pro in generating large and reproducible metaproteomic surveys, we next sought to understand whether these two diets had any discernible effects on the stool proteome. Comparing microbe- and host-expressed proteins via principal component analysis (PCA) (Fig. S5A) suggested several global trends. First, we found that microbial protein variation across all study participants was largely explained by the first principal component (37%). Three participants within the fiber group were distinguished from the other participants by PC2. In contrast, host proteins exhibited less subject-specific clustering. Overall, neither microbial nor host protein measurements could clearly distinguish diet-induced effects at this high-dimensional level, whereas individual-specific microbial protein expression was much more substantial. This observation aligns with previous reports of microbiome composition profiles measured via 16S rRNA amplicon sequencing (13,14).
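A minimal sketch of the kind of PCA overview described above is shown here; it uses scikit-learn and z-scores each protein before decomposition, whereas the study's figures used normalized, log2-scaled abundances, so the scaling choice and the input names are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_overview(abundance_matrix: np.ndarray, n_components: int = 2):
    """PCA of a (samples x proteins) abundance matrix; returns sample
    coordinates and the fraction of variance explained per component
    (e.g., roughly 37% for PC1 of the microbial proteins above)."""
    scaled = StandardScaler().fit_transform(abundance_matrix)
    pca = PCA(n_components=n_components)
    coords = pca.fit_transform(scaled)
    return coords, pca.explained_variance_ratio_

# Hypothetical usage: `microbial` is an (n_samples x n_proteins) matrix.
# coords, explained = pca_overview(microbial)
```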
Having observed minimal intergroup differences from high-level analysis, we next sought to determine whether biologically relevant temporal trends could be deduced at a more granular level. Comparing the proteomes of the two diet groups at each time point, we detected a trend suggesting diverging expression of both host and microbial proteomes subsequent to diet augmentation (Fig. 3A and Fig. S6B). More specifically, we observed an increase in significantly altered host (ramp, 9; maintenance, 17) and microbial (ramp, 10; maintenance, 45) proteins subsequent to the start of diet augmentation, although the significance of these small-number observations did not always surpass strict (FDR < 0.05) multiple-hypothesis testing (Fig. S6B and C). Nevertheless, the majority of these host proteins (14/17) increased among fermented group participants while exhibiting negligible overall change in the fiber group. The STRING protein-protein interaction and Gene Ontology (GO) platform suggested these proteins were enriched (FDR < 0.05) in GO terms including maintenance of intestinal epithelium, glycosylphosphatidylinositol anchor binding, and sphingolipid metabolism (Fig. 3B). It is notable that 9 of these 17 host proteins were also among the subset of commonly shared proteins identified in all study participants, regardless of diet group (Fig. 3D), and therefore could be useful markers of a wide range of host responses. As expected, the protein set common to all participants was strongly enriched (FDR < 0.05) in GO terms commonly found in the gut (Fig. 3C and Table S4). These results suggest diet augmentation with fiber or fermented foods has a distinguishable impact on both host and microbial proteomes and highlights the diets' ability to affect the expression of highly prevalent gut-related proteins. [Fragment of the Fig. 3 legend: see Table S3 for an accompanying list of significantly altered proteins and their normalized abundances; (B) StringDB-generated functional network map of proteins significantly increased in the fermented group at the final maintenance time point, with nodes colored by the Markov clustering result, each color signifying a unique functional subnetwork; (C) StringDB-generated functional network map of the 33 proteins commonly shared by all study participants.]
These results indicate the stool proteome can be conceptually divided into several subset proteomes: an individual-specific proteome largely made of microbial proteins, a diet-impacted proteome, and a common core proteome functionally associated with digestion and largely made up of host proteins common to most individuals. Given that both common and unique proteome sets can be readily measured from all subjects, we next focused on whether these proteomes could be used to predict membership in either the fermented or the fiber group.
SHT-Pro-derived protein abundance allows for classification based on diet group. As indicated in Fig. 3, we observed modest diet-related differences between the fermented and fiber groups. However, we were curious as to whether more robust intergroup differences were obscured by considerable biological variation in this human cohort. To test this, we employed a leave-one-out cross-validated (LOOCV) random forest machine-learning model, designed to identify distinguishing data features from complex, high-dimensional data (15). The recursive feature selection approach we adopted chose differing combinations of study time points as model inputs, which were scaled to the first baseline time point (Fig. 4A).
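A minimal sketch of such a baseline-scaled, leave-one-out random forest classifier is given below. It uses scikit-learn rather than the authors' implementation, omits the recursive feature selection step, and all variable names in the usage comment are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def classify_diet_group(abundance: np.ndarray, baseline: np.ndarray,
                        labels: np.ndarray) -> float:
    """Leave-one-out accuracy for predicting diet group from protein abundances
    scaled to each participant's own baseline measurement."""
    # Scale each participant's profile to their first baseline time point;
    # a pseudocount avoids division by zero for proteins absent at baseline.
    X = np.log2((abundance + 1e-6) / (baseline + 1e-6))
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    scores = cross_val_score(clf, X, labels, cv=LeaveOneOut())
    return scores.mean()

# Hypothetical usage: `maintenance2` and `baseline1` are (n_participants x
# n_proteins) abundance matrices and `diet` is an array of "fiber"/"fermented"
# labels.
# accuracy = classify_diet_group(maintenance2, baseline1, diet)
```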
To gain insight into whether microbial or host protein abundance on specific days was more predictive in classifying these individuals based on their augmented diets, we ran the classifier only on host or on microbial proteins, considering either individual time points (e.g., only ramp, only maintenance day 1, etc.) or aggregated time points based on intervention status (e.g., all three postdiet intervention days) (Fig. 4A). Overall, we found that the greatest classification accuracy was achieved by considering the abundance of solely host proteins from the final maintenance measurement (89%). In contrast, microbial proteins measured at this time point only yielded 78% accuracy. However, it is noteworthy that evaluating the ramp and both maintenance time points together improved this classification somewhat for microbial proteins (80%) but decreased classification accuracy for host proteins (72%). These data suggest a more comprehensive microbial profile measured following diet induction captured both transient and sustained diet-specific signals, whereas host protein expression tended to evolve over the course of the intervention.
Given that host proteins better predicted group membership (Fig. 4A), we next varied the number of participants included in the model (Fig. 4B) to test whether this observation depended on the underlying depth of the data set. As expected, we observed increased classification accuracy as more study participants were included in the model (averages of 51%, 65%, and 80% accuracy for 10, 20, and 28 participants, respectively), signifying the necessity of more extensive data sets for studies focused on disease prediction and explanatory power. Despite host proteins making up less than 10% of each sample's proteomic profile, these data support their ability to generate a more accurate classifier than microbial proteins.
DISCUSSION
Stool-based proteomics' potential use as a basic science tool and a rich resource for clinical biomarker discovery has been touted for over a decade (8,16,17). However, its broad adoption has been largely hindered by multiple difficulties. Chief among them is processing large sample numbers in an efficient manner. Achieving this is a necessary prerequisite for undertaking studies involving large heterogeneous populations, such as human trials. While some protein extraction and digestion protocols were recently designed to have robust sample processing pipelines, their targeted throughput of 5 to 25 samples per week makes scaling them with automation difficult. This limits their usefulness for large, longitudinal clinical studies. Relatedly, several protocols we evaluated prior to developing SHT-Pro failed to remove major contaminating molecules found in stool, which was evident from the continuous fouling of liquid chromatography columns and unstable mass spectrometer performance (7). Both necessitate increased equipment maintenance and associated downtime (7). SHT-Pro resolves these deficiencies by leveraging a workflow specifically designed for large longitudinal stool collections while maintaining flexibility to accommodate smaller sample numbers for pilot studies. Importantly, this is accomplished with a high degree of experimental reproducibility. Our previously published methods required the processing of large sample sets over multiple months, leading to a greater need to distinguish prominent preparation artifacts from desired biological protein profiles. Here, we show that SHT-Pro can produce highly reproducible data sets spanning hundreds of samples in a matter of days. For example, in the current study, SHT-Pro saved an estimated 3.5 months (approximately 80% less time) over our previous protocol. Since it is compatible with multiplexing technology, LC-MS data generation times can be further accelerated by an additional order of magnitude. Similar to the high-throughput DNA sequencing pipelines used to characterize gut microbial communities on a massive scale, we envision SHT-Pro will modernize the stool proteomics field and allow the profiling of a variety of disease conditions, ranging from IBD to multiple sclerosis (18). Future studies may also consider adapting high-throughput proteome preparation pipelines, such as 96-well FASP (19), or all-in-one commercial kits, such as PreOmics' iST kit and ThermoFisher Scientific's Easypep kits. We note, however, that in our unpublished pilot studies, methods that perform quite well with cell or tissue lysate tended to be overwhelmed by stool's molecular diversity (data not shown).
Nevertheless, we acknowledge that SHT-Pro as described here could be further improved in several simple ways. First, the aliquoting of initial stool samples is presently the most labor-intensive and time-consuming step of the overall process. In our diet study, each sample was aliquoted by hand from the original specimen collection vessel to the 96-well bead-beating plate (Fig. 1A). Given that this is a common obstacle for DNA and protein sample preparation pipelines alike, the microbiome field would benefit from an aliquoting technology targeted at this sample-handling burden. Next, while the multichannel pipettes currently used in SHT-Pro were critical components, a 96-well pipettor (single head or as part of an automated system) could more uniformly and rapidly dispense buffers, thereby increasing the overall speed of the assay while decreasing the amount of hands-on time laboratory researchers must invest in an experiment and decreasing preparative variation. Additionally, we noted a large portion of time within SHT-Pro was spent evaporating and concentrating samples via Centrivap/SpeedVac vacuum-based concentrators (Fig. 2B). Given the larger volumes 96-well plates produce using our method (100 to 300 µl/well), this can be a significant hindrance to overall throughput. As such, an alternative method to concentrate peptides would significantly increase the throughput of SHT-Pro, potentially bringing sample preparation time to less than a day. Last is the issue of cost, which can become a deciding factor when selecting a protocol. Not including automation hardware, SHT-Pro can cost up to approximately $30 per sample when using the 96-well plate method, which is substantially more than a common in-solution digest. However, this must be weighed against the additional time and manpower it takes to process those same samples over a substantially longer time period.
In the current study, we observed over 5,600 proteins that could provide new biological insights into the impact of dietary fiber versus fermented foods. While this identification depth is greater than some of the first reported metaproteomic searches on stool, newer studies have reported substantially more host and microbial protein identifications (53,000 total proteins, various biological matrices) (9,16,20). We attribute the decreased number of identifications in our current study to several factors. First, our use of the TMT multiplexing reagent creates a bias toward proteins that are found in multiple samples: signals found in just one sample are diluted by the number of channels used (21). Thus, we suspect many low-abundance, sample-specific host and microbial proteins were not identified. To combat this, future iterations of SHT-Pro could incorporate peptide fractionation and longer mass spectrometry runs per sample, which has been shown to significantly increase identification of low-abundance host and microbial proteins (9). In this context, striking a balance between throughput and proteomic depth is crucial, as the biological and health-related significance of low-abundance proteins remains promising but unclear. Next, while the database used to search these samples (adapted from the Human Microbiome Project) is fairly extensive, the use of subject-specific metagenomes for the generation of protein databases would likely increase sample- and subject-specific protein identifications. Lastly, compared to previously published work, we injected approximately 4× less material into the mass spectrometer (0.5 µg versus 2 µg) (20). Given that, on average, we collected approximately 60 µg of peptide from each sample (over 100 µg/sample was collected in the pilot study), injecting more peptide or fractionating samples would likely increase our protein identification rate.
Despite these remaining challenges, SHT-Pro generated metaproteome data that yielded biologically meaningful insights, even in the context of a largely uncontrolled human diet study. Indeed, SHT-Pro revealed a subtle divergence in proteomes after the introduction of fiber and fermented diets, as evidenced by the increased number of significantly altered host and microbial proteins during the ramp and maintenance phases, while baseline measurements remained largely unchanged. These significantly increased proteins were enriched for several categories, including intestinal epithelium maintenance and host sphingolipid metabolism. Interestingly, sphingolipids, along with chemical variants (e.g., glycosphingolipids) and derivatives, previously were shown to regulate invariant natural killer T cells (iNKT) (22). More recently, Bacteroides fragilis, a common gut-dwelling microbe, has been shown to produce sphingolipids, and their production protected mice from an oxazolone-induced colitis model, an effect largely mediated by their regulation of iNKT activation (23). Here, the introduction of fermented foods may have increased the levels of B. fragilis, as has been previously noted in rats fed fermented tempeh, which in turn may increase sphingolipid availability (24). In the current study, we observed 20 proteins attributed to B. fragilis; however, they showed no significant abundance differences between diet cohorts on the final day of maintenance. It is possible that the search algorithm used (TurboSequest, Proteome Discoverer 2.2) was not ideally suited to attribute peptides (and proteins) to the correct species in such a large search space, a common problem in the metaproteomic field (3). In this case, metaproteome-centric search suites, such as MetaLab, may be of some benefit (25). While the purpose of the manuscript is to showcase SHT-Pro as an integral facet necessary for understanding host-microbe interactions, this result suggests that future studies using SHT-Pro would also benefit from a multiomic approach that also leverages 16S rRNA amplicon sequencing and metabolomics profiling. Nevertheless, SHT-Pro-generated data are compelling when considering that, other than dictating increased intake of each experimental cohort's respective diet, study participants had no other nutritional restrictions. As such, any changes in the microbial or host stool proteome could be expected to be subtle and subject specific and likely hidden by data-driven noise. This subtlety is highlighted by the classification success, which was only possible using machine-learning techniques and not easily discernible by focusing on simple abundance changes. Importantly, the observed success of the LOOCV random forest model also suggests future microbial proteomic studies would benefit from normalization to each participant's unique baseline signature as well as the inclusion of many participants, an inherent strength of SHT-Pro.
These data likely harbor many more insights, including revealing components of diet. While we have not mapped dietary peptides in this study due to database limitations, plant peptides are evident in our data set and suggest utility in helping inform the many challenging aspects of dietary assessment in free-living humans. When paired with other omic data (e.g., 16S rRNA, metabolomics, and clinical measurements), these proteomic profiles are poised to significantly contribute to our understanding of the dietary impact on individuals over time. Questions such as these require much more in-depth analysis of the multiomic data associated with the dietary intervention and will be addressed in a separate, larger publication; the focus of this article is SHT-Pro's increase in quality and speed compared to the previous workflow.
Taken together, these results establish SHT-Pro as a robust pipeline for processing stool samples in an extremely timely manner, and we believe its wide-scale adoption and improvement will enable powerful discoveries in the field of host-gut microbiome interactions.
MATERIALS AND METHODS
Buffers. For the lysis buffer, 6 M urea, 5% sodium dodecyl sulfate (SDS), and 50 mM Tris were combined, with the pH adjusted to 8 using phosphoric acid. Roche cOmplete Mini protease inhibitor cocktail (04693159001; Roche) was added prior to adding buffer to samples. The Protifi binding buffer (PBB) contained 90% methanol and 10% triethylammonium bicarbonate buffer (TEAB; catalog number T7408; Sigma-Aldrich), adjusted to pH 7.1 using phosphoric acid. The digestion buffer contained 100 mM TEAB and 5 µg trypsin (V5113; Promega). For peptide elution buffers, the first elution was performed using digestion buffer, the second elution was performed using 0.2% formic acid (FA), and the third elution was performed using 50% acetonitrile and 0.2% FA.
Isolation of stool proteins and peptides (96-well variant). A step-by-step guide is available on Protocols.io at the following web address: https://doi.org/10.17504/protocols.io.9gph3vn. Approximately 100 to 200 mg (when available) from each collected stool specimen was aliquoted into a 96-well plate along with approximately 600 mg of 0.1-mm ceramic beads (27-6006; Omni International). To each filled well, 750 µl of lysis buffer was added, and plates were sealed with the Omni-provided sealing mats. To improve the seal, each plate was additionally sealed with Parafilm, although we found this was not necessary. The sealed plates were subjected to 10 min of bead beating at 20 Hz using a Qiagen TissueLyser II. After bead beating, each plate was centrifuged at 300 relative centrifugal force (RCF) at 4°C for 10 min. Five hundred microliters of the resulting supernatant was transferred to a new 2-ml 96-well plate (186002482; Waters), sealed with a sealing mat, spun again at 300 RCF at 4°C for 10 min, and then transferred into a fresh 2-ml plate. Samples were then reduced with 10 µl of 50 mM dithiothreitol (Sigma-Aldrich) for 30 min at 47°C and alkylated with 30 µl of 50 mM iodoacetamide (Sigma-Aldrich) for 1 h at room temperature in the dark. Fifty microliters of the reduced and alkylated supernatant was transferred to a new 2-ml 96-well plate for further processing, while the remaining material was stored at −80°C for potential future analysis. Supernatant-resident stool proteins were washed, digested, and eluted as described in the Protifi S-trap protocol (see http://www.protifi.com/wp-content/uploads/2018/08/S-Trap-96-well-plate-long-1.4.pdf for the complete protocol). Briefly, 50 µl of supernatant was acidified with 5 µl of 12% phosphoric acid, to which 300 µl of S-trap binding buffer was added. Each resulting mixture was loaded into a single well. Positive pressure was used to load the proteins into each well (Waters Positive Pressure-96 processor) with pressure at approximately 6 to 9 lb/in² on the "low-flow" setting. Note that if, after 1 min, volume still remains in the well, using a pipette tip to move any debris to the side of the well will begin the flow again. Loaded proteins were washed with 300 µl PBB five times. After washing, 125 µl of digestion buffer was added and proteins were digested for 3 h at 47°C. Peptides were then eluted with 100 µl TEAB, followed by 100 µl of 0.2% formic acid, followed by 100 µl of 50% acetonitrile (ACN), 0.2% formic acid. These were captured in a 1-ml 96-well plate (AB-1127; Thermo Scientific), and the volume was dried down in a Centrivap SpeedVac (model 7810016). Plated samples were then desalted using RP-S tips on the Agilent Bravo AssayMap using a built-in desalting protocol, eluted with 50% ACN, and dried down. Plated peptide concentration was normalized using readings from the BioTek Synergy microplate reader and the Take3 microvolume plate (single samples were adjusted using a NanoDrop ND-1000). Samples were then labeled with a TMT-11 multiplexing kit using the manufacturer's recommended method (A34808; Thermo-Fisher Scientific). Channel-specific isobaric tag intensities were adjusted to 1:1 using recorded intensities from a 1-h gradient mass spectrometry run and subsequently reinjected into the mass spectrometer after normalization.
Isolation of stool proteins and peptides (individual tube variant).
Approximately 100 to 200 mg (when available) from each collected stool specimen was aliquoted into a bead-beating tube along with approximately 600 mg of 0.1-mm ceramic beads (19-732; Omni International). To each tube, 750 µl of lysis buffer was added. Samples were subjected to 10 min of bead beating at 3,500 rpm (Omni Beadruptor 12 19-050). After bead beating, each sample was centrifuged at 300 RCF at 4°C for 10 min. Five hundred microliters of the resulting supernatant was transferred to a fresh 2-ml tube, spun again at 300 RCF at 4°C for 10 min, and then again transferred to a fresh 2-ml tube. Samples were then reduced with 10 µl of 50 mM dithiothreitol (Sigma-Aldrich) for 30 min at 47°C and alkylated with 30 µl of 50 mM iodoacetamide (Sigma-Aldrich) for 1 h at room temperature in the dark. Fifty microliters of the reduced and alkylated supernatant was transferred to a new 2-ml tube for further processing, while the remaining material was stored at −80°C for potential future analysis. Supernatant-resident stool proteins were washed, digested, and eluted as described in the Protifi S-trap protocol (see http://www.protifi.com/wp-content/uploads/2018/08/S-Trap-mini-protocol-long.3.6.pdf for the complete protocol). Briefly, 50 µl of supernatant was acidified with 5 µl of 12% phosphoric acid, to which 300 µl of S-trap binding buffer was added. Each resulting mixture was loaded into a single well. A vacuum manifold was used to load samples with pressure set at approximately 3 to 5 lb/in². Note that if, after 1 min, volume still remains in the well, using a pipette tip to move any debris to the side of the well will begin the flow again. Loaded proteins were washed with 300 µl PBB five times. After washing, 125 µl of digestion buffer was added and proteins were digested for 3 h at 47°C. Peptides were then eluted with 100 µl TEAB, followed by 100 µl of 0.2% formic acid, followed by 100 µl of 50% acetonitrile, 0.2% FA. Eluate was captured and the volume was dried down in a Centrivap SpeedVac (model 7810016). Dried samples were then resuspended in 250 µl 0.2% FA. Resuspended samples were then desalted using Sep-Pak tC18 cartridges and subsequently dried down (WAT036820; Waters). Each sample was then resuspended in 30 µl and the peptide concentration was normalized (NanoDrop ND-1000).
Previous workflow protocol. Samples were prepared as described in Gonzalez et al. (6). Briefly, sample pellets were disrupted using 500 µl of 8 M urea lysis buffer supplemented with Roche cOmplete protease inhibitor (04693159001; Roche) by vortexing. After pellet resuspension, insoluble material was pelleted down at 2,500 RCF for 10 min at 4°C, and the collected supernatant was subjected to ultracentrifugation (35,000 rpm for 30 min at 4°C; Beckman-Coulter Optima Ultracentrifuge) to remove bacteria. The ultracentrifuge supernatant was subsequently reduced, alkylated, and precipitated overnight in a −20°C freezer using trichloroacetic acid (15% total volume). Protein pellets were resuspended in 40 µl of loading buffer and briefly run in SDS-PAGE (approximately 5 mm; Invitrogen NuPAGE 4 to 12% Bis-Tris) for further purification, after which they were subjected to in-gel tryptic digestion using sequencing-grade trypsin (V5113; Promega). After digestion, each sample was cleaned up using C18 columns and dried down. Peptides were then normalized using a NanoDrop ND-1000.
Mass spectrometry. Peptide samples were diluted to 0.5 µg/µl. Subsequently, 1 µl was loaded onto an in-house laser-pulled 100-µm-inner-diameter nanospray column packed to approximately 220 mm with 3-µm C18 beads (Reprosil). Peptides were separated by reverse-phase chromatography on a Dionex Ultimate 3000 high-performance liquid chromatograph (HPLC). Buffer A of the mobile phase contained 0.1% FA in HPLC-grade water, while buffer B contained 0.1% FA in ACN. An initial 2-min isocratic gradient flowing 3% B was followed by a linear increase up to 25% B over 115 min, an increase to 45% B over 15 min, and a final increase to 95% B over 15 min, whereupon B was held for 6 min, returned to baseline (2 min), and held for 10 min, for a total of 183 min. The HPLC flow rate was 0.400 µl/min. Samples were run on either a Thermo Fusion Lumos (large study) or Thermo Orbitrap Elite (pilot comparisons) mass spectrometer that collected MS data in positive ion mode within the 400 to 1,500 m/z range.
For TMT-labeled samples, a top-speed MS3 method was employed on the Fusion Lumos with an initial Orbitrap scan resolution of 120,000. This was followed by high-energy collision-induced dissociation and analysis in the Orbitrap using "Top Speed" dynamic identification with dynamic exclusion enabled (repeat count of 1, exclusion duration of 90 s). The automatic gain control for Fourier transform (FT) full MS was set to 4e5 and for ITMSn was set to 1e4. ITCID was used with the MS2 method, and the MS3 AGC was set to 1e5.
Peptide/protein searches. The resulting mass spectra raw files were first searched using Proteome Discoverer 2.2 using the built-in SEQUEST search algorithm. Built-in TMT batch correction was enabled for all samples. Three FASTA databases were employed: UniProt Swiss-Prot Homo sapiens (taxon ID 10090, downloaded January 2017), the Human Microbiome Project database (FASTA file downloaded from https://www.hmpdacc.org/hmp/HMRGD/ in January 2017), and a database containing common sample-handling contaminants. Target-decoy searching at both the peptide and protein level was employed with a strict FDR cutoff of 0.05 using the Percolator algorithm built into Proteome Discoverer 2.2. Enzyme specificity was set to full tryptic with static peptide modifications set to carbamidomethylation (+57.0214 Da) and, when appropriate, TMT (+229.1629 Da). Dynamic modifications were set to oxidation (+15.995 Da) and N-terminal protein acetylation (+42.011 Da). Only high-confidence proteins (q < 0.01) were used for analysis.
Statistical analyses. Statistics were calculated using R with statistics packages (FactoMineR 1.36, factoextra 1.0.5, ggplot2 2.2.1, Hmisc 4.0-3, psych 1.7.8, Mfuzz 2.34.0, ggpubr 0.1.5, RColorBrewer 1.1-2, UpSetR 1.3.3, limma 3.30.13, and venneuler 1.1-0) and Qlucore Omics Explorer 3.3. Protein abundance was normalized as a percentage of summed reporter intensity for all quantified proteins in a given sample (protein intensity/total sample intensity). Each TMT-11 run was filtered for. Where necessary for meeting statistical assumptions, abundances were log2-transformed. The appropriate multiple-hypothesis tests (one-way analysis of variance) were applied to abundance comparison data using Qlucore Omics Explorer or custom R scripts. Correlational P values were corrected using the FDR setting and the R package psych 1.7.8. Protein abundance heat maps were generated with Qlucore Omics Explorer 3.3 or R's built-in heatmap function. FDRs and fold changes (where appropriate) were generated using Qlucore's built-in FDR estimator, and the values are reported in tables in the supplemental material.
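The per-sample normalization described above (protein intensity divided by the summed reporter intensity of all quantified proteins in that sample) can be sketched as follows; this pandas version is illustrative rather than the Qlucore/R code actually used, and the pseudocount added before the log2 step is an assumption.

```python
import numpy as np
import pandas as pd

def normalize_reporter_intensities(intensities: pd.DataFrame,
                                   log_transform: bool = True) -> pd.DataFrame:
    """Express each protein's abundance as a fraction of the summed reporter
    intensity in its sample (column), then optionally log2-transform with a
    small pseudocount for downstream statistics."""
    normalized = intensities.div(intensities.sum(axis=0), axis=1)
    if log_transform:
        normalized = np.log2(normalized + 1e-9)
    return normalized

# Hypothetical usage: rows are protein families, columns are TMT channels/samples.
raw = pd.DataFrame({"sample_1": [1200.0, 300.0, 0.0],
                    "sample_2": [900.0, 450.0, 150.0]},
                   index=["protA", "protB", "protC"])
print(normalize_reporter_intensities(raw, log_transform=False))
```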
Data availability. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the data set identifier PXD017450.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only.
Musculoskeletal disorders as underlying cause of death in 58 countries, 1986–2011: trend analysis of WHO mortality database
Background Due to the low mortality rate of musculoskeletal disorders (MSK), less attention has been paid to MSK as an underlying cause of death in the general population. The aim was to examine trends in MSK as an underlying cause of death in 58 countries across the globe during 1986–2011. Methods Data on mortality were collected from the WHO mortality database, and population data were obtained from the United Nations. Annual sex-specific age-standardized mortality rates (ASMR) were calculated by means of direct standardization using the WHO world standard population. We applied joinpoint regression for trend analysis. Between-country disparities were examined using between-country variance and the Gini coefficient. The changes in the number of MSK deaths between 1986 and 2011 were decomposed using two counterfactual scenarios. Results The number of MSK deaths increased by 67% between 1986 and 2011, mainly due to population aging. The mean ASMR changed from 17.2 and 26.6 per million in 1986 to 18.1 and 25.1 in 2011 among men and women, respectively (median: 7.3% increase in men and 9.0% reduction in women). Declines in ASMR of 25% or more were observed for men (women) in 13 (19) countries, while corresponding increases were seen for men (women) in 25 (14) countries. In both sexes, ASMR declined during 1986–1997, then increased during 1997–2001, and again declined over 2001–2011. Despite the decline over time, there were substantial between-country disparities in MSK mortality and its temporal trend. Conclusions We found substantial variations in MSK mortality and its trends between countries and regions and also between sex and age groups. Greater awareness and better management of MSK might partly explain the reduction in MSK mortality, but variations across countries warrant further investigation. Electronic supplementary material The online version of this article (doi:10.1186/s12891-017-1428-1) contains supplementary material, which is available to authorized users.
Background
Musculoskeletal disorders (MSK) cover a wide range of disorders affecting joints, bones, muscles, and soft tissues. Many MSK are recurrent or lifelong disorders [1]. The main consequences of MSK are typically long-term pain, physical disability, loss of independence, reduced social interaction, and a decline in quality of life [2]. Globally, 18.5% of years lived with disability were attributed to MSK in 2015 (a 68% increase from 1990) [3]. When taking into account both death and disability, all MSK combined accounted for 6.0% of total global disability-adjusted life years [4]. Population growth, ageing, obesity, increasingly sedentary lifestyles, and work-related issues imply that the number of people suffering from MSK (and thus the burden from MSK) will increase dramatically worldwide over the coming decades [5].
Increased risk of mortality may be another consequence of MSK, even though for the majority of MSK the mortality rate is low. Despite this, to have an accurate estimation of the burden of MSK, mortality associated with MSK must also be measured [1]. Previous studies reported a higher risk of mortality among people with some MSK, including rheumatoid arthritis (RA) and osteoarthritis (OA), compared with the general population, possibly due to an increased risk of cardiovascular disease and infection [6][7][8][9][10]. However, due to the low mortality rate, less attention has been paid to MSK as an underlying cause of death in the general population.
To the best of our knowledge, only one recent study has investigated the temporal trend in mortality with MSK as the underlying cause, in Sweden [11]. MSK mortality has been influenced by the emergence of new advances in MSK management, including the introduction of biological agents for RA. For example, previous studies reported that biological agents are associated with an increased risk of serious infectious events and a decreased risk of cardiovascular events [12][13][14]. Considering these new advances and also the scarcity of studies on MSK mortality, an update on the trends in MSK mortality is needed. In addition, we quantified the magnitude of the absolute and relative between-country disparities in MSK mortality, which has important policy implications. Furthermore, we decomposed changes in the number of MSK deaths into demographic and epidemiologic changes, which are important in health systems policy making. The aim of the current study was to investigate the trends in MSK mortality rates and associated between-country disparities using data from the World Health Organization (WHO) mortality database.
Data sources
Annual data on all-cause mortality and mortality due to MSK as the underlying cause of death during 1986-2011 were obtained from the WHO mortality database (http://www.who.int/healthinfo/mortality_data/en/, accessed April 2016). This database provides annual data on the underlying cause of death by age, sex, and cause of death as submitted by national death registration systems. International Classification of Diseases (ICD) codes were used to extract data on MSK mortality (ICD-8 and ICD-9 codes 710-740, and ICD-10 codes M00-M99). Several countries used different codes for MSK deaths, which was taken into account when extracting the data. Due to the low number of MSK deaths, we only included countries with a population of more than one million in 2010. In total, 58 countries (the United Kingdom included as three countries: England & Wales, Northern Ireland, and Scotland) had the required data for our analysis and were included. The application of ICD revisions varied across countries during the study period (Additional file 1: Table S1).
While there were no missing data for 49 countries, in 9 countries missing data points ranged from one in Kazakhstan and the Dominican Republic to 6 in Panama, resulting in a total of 42 missing values out of 3016 sex-country-year data points. Missing values were imputed using multiple imputation (10 imputations), applying a Poisson regression model adjusted for year, sex, age group, and ICD revision with population as exposure (the number of deaths was used as the dependent variable). Population data by sex and age were obtained from the United Nations Population Prospects database (http://esa.un.org/unpd/wpp/). Moreover, we grouped these countries into 10 regions according to the United Nations Statistics Division (http://unstats.un.org/unsd/methods/m49/m49regin.htm): Asia (n = 7), Eastern Europe (n = 8), Northern Europe (n = 11), Southern Europe (n = 6), Western Europe (n = 6), Caribbean (n = 4), Central America (n = 4), South America (n = 8), North America (n = 2), and Oceania (n = 2).
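To make the imputation step concrete, the following is a minimal sketch (not the authors' code) of the idea described above: fit a Poisson regression to the observed sex-country-year-age cells with population as exposure, then draw imputed death counts for the missing cells. The data-frame layout and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def impute_msk_deaths(df, n_imputations=10, seed=0):
    """df: one row per sex-country-year-age cell with columns 'deaths' (NaN where
    missing), 'population', 'year', 'sex', 'age_group', and 'icd' (ICD revision)."""
    rng = np.random.default_rng(seed)
    covariates = ["year", "sex", "age_group", "icd"]

    obs = df.dropna(subset=["deaths"])
    X_obs = sm.add_constant(pd.get_dummies(obs[covariates].astype(str), drop_first=True).astype(float))
    fit = sm.GLM(obs["deaths"], X_obs, family=sm.families.Poisson(),
                 offset=np.log(obs["population"])).fit()

    miss = df[df["deaths"].isna()]
    X_miss = sm.add_constant(pd.get_dummies(miss[covariates].astype(str), drop_first=True).astype(float))
    X_miss = X_miss.reindex(columns=X_obs.columns, fill_value=0.0)
    mu = np.exp(X_miss.dot(fit.params) + np.log(miss["population"]))

    # Each imputation is one Poisson draw per missing cell around the predicted mean
    return [rng.poisson(mu.values) for _ in range(n_imputations)]
```

A full multiple-imputation procedure would also draw the regression coefficients from their sampling distribution before each imputation; the sketch above only varies the Poisson noise.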
Trend analysis
We computed age-standardized mortality rates per million population by means of direct standardization using the WHO Reference Population [15]. Age-standardized rates per million population were calculated for each country/region and year for each sex. We also computed the women to men age-standardized rate ratio and its 95% confidence interval (CI). The percent change was calculated as the difference between the rates in 2011 and 1986, divided by the rate in 1986.
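As a concrete illustration of the standardization just described, here is a minimal sketch (not the authors' code) of direct age-standardization and of the percent-change computation; the standard-population weights are placeholders for the published WHO weight set.

```python
import numpy as np

def age_standardized_rate(deaths_by_age, pop_by_age, std_weights, per=1_000_000):
    """Directly standardized rate per `per` person-years; all inputs are arrays over
    the same age groups."""
    age_specific = np.asarray(deaths_by_age, float) / np.asarray(pop_by_age, float)
    w = np.asarray(std_weights, float)
    w = w / w.sum()                          # normalize the standard-population weights
    return per * np.sum(w * age_specific)

def percent_change(rate_1986, rate_2011):
    # (rate in 2011 - rate in 1986) / rate in 1986, expressed in percent
    return 100.0 * (rate_2011 - rate_1986) / rate_1986
```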
Temporal trends in the age-standardized mortality rate were analyzed by joinpoint regression using the Joinpoint Regression Program version 4.2.0.2 from the Surveillance Research Program of the US National Cancer Institute (http://surveillance.cancer.gov/joinpoint). Joinpoint regression identifies points with a significant change in trend ("joinpoints") and determines linear trends between joinpoints. In the software, a series of permutation tests is applied to compute the number of joinpoints that best fits the data [16]. For each segment between joinpoints, an annual percentage change (APC) is estimated by fitting a regression line to the natural logarithm of the age-standardized rates, using calendar year as a predictor. The average annual percent change (AAPC), defined as the weighted average of APCs, was computed to provide a summary measure of the trend for the whole time period [17]. Since recent trends are possibly the best predictor of mortality rates in coming years, and also to avoid biases caused by differences in ICD revision, we additionally computed the percentage changes and AAPCs for the period 2001-2011.
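The Joinpoint software itself selects the joinpoints via permutation tests; the sketch below (not the NCI implementation) only shows how an APC can be obtained from a log-linear fit within one segment and how segment APCs combine into an AAPC weighted by segment length.

```python
import numpy as np

def annual_percent_change(years, rates):
    """APC within one segment: slope of log(rate) on calendar year, in percent."""
    slope = np.polyfit(np.asarray(years, float), np.log(rates), 1)[0]
    return 100.0 * (np.exp(slope) - 1.0)

def average_annual_percent_change(segments):
    """AAPC over several segments, each given as a (years, rates) pair."""
    apcs = np.array([annual_percent_change(y, r) for y, r in segments])
    lengths = np.array([y[-1] - y[0] for y, _ in segments], float)   # segment lengths
    w = lengths / lengths.sum()
    # Length-weighted geometric combination of the segment-wise annual growth factors
    return 100.0 * (np.prod((1.0 + apcs / 100.0) ** w) - 1.0)
```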
Between-country disparity
We used the between-country variance (BCV) to examine the trend in absolute between-country disparity in MSK mortality. The BCV was calculated using the following formula [18]:

BCV = Σ_j p_j (y_j − μ)²,

where p_j is country j's proportion of the total population, y_j is country j's age-standardized MSK mortality rate, and μ is the pooled age-standardized MSK mortality rate of all countries.
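A minimal sketch of the BCV as written above; the pooled rate μ is taken here as the population-weighted mean of the country rates.

```python
import numpy as np

def between_country_variance(rates, populations):
    rates = np.asarray(rates, float)
    p = np.asarray(populations, float)
    p = p / p.sum()                      # p_j: country j's share of the total population
    mu = np.sum(p * rates)               # pooled age-standardized rate
    return np.sum(p * (rates - mu) ** 2)
```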
The Gini coefficient was used for examining changes in relative between-country disparities. The Gini coefficient is a commonly used disparity measure [18] and is based on the Lorenz curve, which plots the cumulative share of the population, ranked by the health variable in increasing order, against the cumulative share of the health variable. The Gini coefficient is equal to twice the area between the Lorenz curve and the diagonal. Its value ranges from 0 (perfect equality) to 1 (maximum possible inequality).
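And a matching sketch of the Gini coefficient built from the population-weighted Lorenz curve described above (computed here as 1 minus twice the area under the curve, which equals twice the area between the curve and the diagonal).

```python
import numpy as np

def gini_coefficient(rates, populations):
    rates = np.asarray(rates, float)
    p = np.asarray(populations, float)
    p = p / p.sum()
    order = np.argsort(rates)                                  # rank countries by rate
    p, r = p[order], rates[order]
    cum_pop = np.concatenate(([0.0], np.cumsum(p)))            # Lorenz curve x-axis
    cum_health = np.concatenate(([0.0], np.cumsum(p * r) / np.sum(p * r)))  # y-axis
    return 1.0 - 2.0 * np.trapz(cum_health, cum_pop)           # 0 = equality, 1 = max inequality
```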
Decomposition analysis
We decomposed the drivers of changes in the number of MSK deaths between 1986 and 2011 into three components using two counterfactual scenarios [19]: 1) a population growth scenario using the population size of 2011 and the age-sex structure and MSK death rates of 1986, and 2) a population growth and aging scenario using the population size and age-sex structure of 2011 and the MSK death rates of 1986. The difference between the actual number of deaths in 1986 and those estimated from the population growth scenario is the change due to population growth. The difference between the population growth scenario and the population growth and aging scenario is the change due to population aging. The difference between the actual number of deaths in 2011 and the population growth and aging scenario is the change due to epidemiologic changes. The epidemiologic changes are changes in the age-, sex-, and cause-specific rates of death and include all changes in mortality that cannot be explained by population growth and aging. The actual change in the number of deaths between 1986 and 2011 is equal to the net change in these three components.
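The decomposition above translates directly into three differences between observed and counterfactual death counts; a minimal sketch (with hypothetical input arrays indexed by the same age-sex strata) follows.

```python
import numpy as np

def decompose_change(rate_1986, pop_1986, rate_2011, pop_2011):
    """rate_*: age-sex-specific MSK death rates; pop_*: matching population counts."""
    rate_1986, pop_1986 = np.asarray(rate_1986, float), np.asarray(pop_1986, float)
    rate_2011, pop_2011 = np.asarray(rate_2011, float), np.asarray(pop_2011, float)

    deaths_1986 = np.sum(rate_1986 * pop_1986)
    deaths_2011 = np.sum(rate_2011 * pop_2011)

    # Scenario 1: 2011 population size, 1986 age-sex structure, 1986 rates
    growth = deaths_1986 * pop_2011.sum() / pop_1986.sum()
    # Scenario 2: 2011 population size and age-sex structure, 1986 rates
    growth_aging = np.sum(rate_1986 * pop_2011)

    due_to_growth = growth - deaths_1986
    due_to_aging = growth_aging - growth
    due_to_epidemiology = deaths_2011 - growth_aging
    return due_to_growth, due_to_aging, due_to_epidemiology
```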
Number of deaths and proportion of all-cause deaths
In total, 192 666 069 men and 177 582 994 women died in the 58 countries during 1986-2011; of these, 419 848 men and 956 011 women died with MSK as the underlying cause. About 70 and 75% of all MSK deaths were observed in men and women aged 65 years and older, respectively. On average, MSK deaths constituted 0.22% (ranging from 0.02% in Romania to 0.48% in Spain) and 0.54% (ranging from 0.05% in Romania to 1.28% in Spain) of all-cause deaths among men and women, respectively. Among men, MSK deaths increased from 11 861 deaths in 1986 (0.18% of all-cause deaths) to 22 380 deaths in 2011 (0.28% of all-cause deaths, Additional file 2: Figure S1), representing an increase of 89% (ranging from an 85.3% reduction in Romania to a 2600% increase in Greece). Among women, MSK deaths increased from 28 272 in 1986 (0.46% of all-cause deaths) to 44 652 in 2011 (0.61% of all-cause deaths), corresponding to an increase of 58% (ranging from a 71.9% reduction in Romania to a 552.9% increase in Greece). Across regions, the highest reduction in the number of MSK deaths between 1986 and 2011 was seen in Eastern Europe for men and in Northern Europe for women. South America had the highest increases in the number of MSK deaths for both sexes.
Age-standardized mortality rate
The pooled age-standardized MSK mortality rates were 17.7 and 26.5 deaths per million person-years among men and women, respectively (women to men rate ratio of 1.5). The mean MSK mortality rates ranged from 2.6 deaths per million person-years in Romania to 45.3 in Trinidad and Tobago for men, and from 4.4 in Bulgaria to 68.9 in Trinidad and Tobago for women (Fig. 1). In all countries but Guatemala and Greece, women had a statistically significantly higher MSK mortality rate than men (Additional file 1: Table S2). Among regions, the highest and lowest MSK mortality rates for both sexes were observed in Central America and Eastern Europe, respectively (Additional file 3: Figure S2).
The mean MSK mortality rate increased from 17.2 deaths per million person-years in 1986 to 18.1 in 2011 for men, representing a 5.3% increase (ranging from an 86% reduction in Panama to a 1325% increase in Greece, with a median of 7.3% increase, Table 1). Mortality reductions of 25% or more were observed in 13 countries, while increases of 25% or more were seen in 25 countries. The absolute change in MSK mortality rate ranged from about 50 fewer deaths per million person-years in Guatemala to about 19.5 more deaths in Greece between 1986 and 2011. Across regions, the highest MSK mortality decline was observed in Central America (28% reduction) and the highest increase in the Caribbean (58% increase). Between 1986 and 2011, age-specific MSK mortality rates declined in men aged 0-19 years and 65-74 years and increased in all other age groups (Fig. 2).
For women, the mean MSK mortality rate declined from 26.6 deaths per million person-years in 1986 to 25.1 in 2011, representing a 5.6% reduction (ranging from a 76.5% reduction in Singapore to a 233% increase in Greece, with a median of 9.0% reduction). Mortality decreases of 25% or more were observed in 19 countries, while corresponding increases were seen in 14 countries. The absolute change ranged from 30.5 fewer MSK deaths per million person-years in England & Wales to about 34 more deaths in Colombia between 1986 and 2011. Across regions, the highest MSK mortality decline was seen in Northern Europe (41.1% reduction) and the highest increase in South America (40.5% increase). Age-specific MSK mortality rates declined in most age groups, with the highest decline and increase for the 0-9 years and 25-29 years age groups, respectively.
In 39 out of 58 countries women experienced more favorable changes (i.e., either more reductions or less increases) compared with men. In addition, the absolute differences in MSK mortality between women and men declined from 9.4 deaths per million person-years in 1986 to 6.9 deaths in 2011. Corresponding women to men rate ratio (95% CI) declined from 1.54 (1.51 to 1.58) to 1.38 (1.36 to 1.40) during the same period. Moreover, in 15 out of 18 age groups, women experienced more favorable changes in age-specific MSK mortality rates compared with men.
Joinpoint Regression analysis
The pooled MSK mortality rate decreased between 1986 and 1997, then increased during 1997-2001 and again decreased thereafter in both sexes (with a lower rate of increase and a higher rate of decrease for women compared with men, Table 1 and Fig. 3). Moreover, while the trend for the whole period revealed statistically non-significant annual changes during 1986-2011 (0.2% annual increase and 0.3% annual decrease for men and women, respectively), the recent trend (2001-2011) showed a statistically significant reduction of 0.3% and 1.2% per year among men and women, respectively. However, there were substantial between-country variations in the temporal trend (Additional file 4: Figure S3 and Additional file 1: Table S3). During 1986-2011, the highest annual reduction was observed in Romania and the highest annual increase in Greece for both sexes. On the other hand, during the most recent decade the highest reduction was observed among women in the Republic of Moldova (9.9% annual reduction) and the highest increase among men in the Czech Republic (15.0% annual increase).
Between-country disparities
The between-country variance in MSK mortality declined from 131.4 and 207.2 deaths per million person-years in 1986 to 73.3 and 181.6 in 2011 among men and women, respectively (Fig. 4). In all study years, the magnitude of the between-country variance was higher among women compared with men. The relative between-country disparity measured by the Gini coefficient declined from 0.33 and 0.28 in 1986 to 0.25 and 0.26 in 2011 among men and women, respectively. While during 1986-1999 the magnitude of relative between-country disparities was higher among men compared with women, there were lower disparities among men thereafter.
Decomposition analysis of the number of MSK deaths
In 2011, the number of MSK deaths increased by 67% compared to 1986, and this was mainly due to population aging (Fig. 5). In six regions the number of MSK deaths increased due to population growth and aging and declined due to epidemiologic changes, but only in Northern Europe did this lead to an actual decline in the number of MSK deaths. In Eastern Europe the combination of an increase in the number of MSK deaths due to population aging and declines due to (negative) population growth and epidemiologic changes translated into a 9% reduction in the number of MSK deaths. In the Caribbean, South America, and North America the number of MSK deaths increased due to all three components.
Across countries, the contributions of population growth, aging, and epidemiologic changes to the number of MSK deaths varied substantially (Additional file 1: Table S4). In twenty countries, despite declines due to population growth (4 countries) or epidemiologic change (16 countries), the total number of MSK deaths increased between 1986 and 2011. A total of 12 countries presented reductions in the total number of MSK deaths, caused by either epidemiologic change (Singapore, Poland, Finland, England & Wales, Guatemala, and Panama) or population growth (Estonia) or both (Republic of Moldova, Romania, Russian Federation, Ukraine, and Lithuania).
Discussion
From 1986 to 2011, on average across all countries, the total number of MSK deaths and its proportion of all-cause deaths increased in both sexes, and this increase was mainly due to population aging. In addition, the age-standardized MSK mortality rate increased by 5.3% in men and declined by 5.6% in women during the same period. Although more women died from MSK compared with men, women experienced more favorable changes during the study period. While there were substantial relative and absolute disparities in MSK mortality between countries, these disparities have declined over time, with a more profound reduction among men.
The total number of MSK deaths increased due to population growth and aging without a significant reduction due to epidemiologic changes. Across regions, while MSK deaths due to epidemiologic changes declined in most regions, these reductions led to actual reductions in MSK deaths only in Eastern and Northern Europe. These findings highlight the importance of these demographic forces, particularly aging, in MSK deaths, which should be taken into account by health policy-makers. It should be noted that changes in ICD revision and the quality of vital registration are included in the epidemiologic changes and might partly offset reductions due to other epidemiologic changes, including improvement in MSK management.
We found substantial variations between regions/countries not only in the level of MSK deaths but also in the temporal changes of MSK deaths. For example, MSK deaths increased in Southern Europe, the Caribbean, Central America, and South America over the recent decade. Differences between regions/countries in the availability of and access to treatments, socioeconomic status, prevalence of MSK risk factors including obesity, epidemiology of disease, quality of vital registration, and cause of death certification, including the transition from paper to electronic certification and from manual to automated coding systems, might partially explain these disparities. Moreover, previous studies reported racial disparities in the prevalence of musculoskeletal disorders and access to treatment, which might partially explain the observed disparities in our study [20][21][22]. Furthermore, although both relative and absolute between-country disparities have declined over time, substantial disparities are still present. This is of particular concern among women, for whom the relative inequality declined only slightly between 1986 and 2011 compared with men (5% reduction among women vs. 24% reduction among men). It should be noted that the observed disparity in the recent decade cannot be attributed to between-country differences in ICD revision, since many countries (46 out of 58) were applying the same ICD revision during 2001-2011. Further analyses are required to investigate these disparities in more detail.
The observed increases in pooled MSK mortality rates during 1997-2001 are possibly due to the introduction of the ICD-10 coding system in many countries (most countries introduced ICD-10 between 1995 and 2001), considering that an ICD-10 to ICD-9 comparability ratio greater than 1 has been reported for MSK deaths [23][24][25].
While not the case for all countries, we observed a jump in MSK mortality rates in many countries in the early years after introducing the ICD-10 revision. This suggests that taking the impact of the ICD-10 introduction into account would be associated with a steady reduction in pooled MSK mortality rates over the whole study period. The global focus on MSK, particularly since the endorsement of the Bone and Joint Decade 2000-2010 [26] by the United Nations and the WHO, might partly explain the observed reduction in the MSK mortality rate. Moreover, new interventions, particularly the biologic agents for RA, the emergence of new imaging technologies, and advances in rapid surgical procedures, substantially changed the clinical management of patients and might partially explain the observed declining trend in MSK mortality rates [2]. This decline in mortality rate alongside population aging implies potential increases in the burden of MSK in coming years, and health policy-makers should be aware of this. Several strategies have been suggested in response to this expected increase in the burden of MSK, including raising patients' awareness about the importance of a healthy lifestyle, raising the awareness of health professionals through providing adequate training in MSK, early diagnosis and treatment of MSK, improving access to MSK therapies including rehabilitation, implementation of integrated and patient-centred multi-disciplinary models of care, and delivery of primary prevention initiatives at a population level [6,[27][28][29].
In almost all countries women had a higher MSK mortality rate compared with men. Several explanations have been suggested for this sex disparity, including a higher prevalence of MSK among women and sex differences in biological and hormonal factors, in severity of and remission from MSK, in access and responses to treatments, and in susceptibility to developing other complications such as cardiovascular disease [30][31][32][33][34][35]. While the sex disparity in MSK mortality declined over the study period, substantial effort and resources are required to close the observed gap. For example, if we naively assume that men will continue to have the annual reduction of 0.3% observed in 2001-2011 until 2025, then the annual reduction among women would need to more than double (from 1.2% to 2.6%) to close the sex disparity in the MSK mortality rate by 2025.
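The figures in this projection can be checked with a few lines; the 2011 rates and the 0.3%/1.2% annual changes are those reported above, and the 14-year horizon runs from 2011 to 2025.

```python
men_2025 = 18.1 * (1 - 0.003) ** 14                   # men's projected rate in 2025
needed = 1 - (men_2025 / 25.1) ** (1 / 14)            # women's required annual reduction
print(round(men_2025, 1), round(100 * needed, 1))     # roughly 17.4 and 2.6
```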
In interpreting the results of the current study, several limitations should be considered. First, MSK are underreported as the underlying cause of death on death certificates, and the degree of underreporting might vary over time and space, which could bias our findings. Similarly, there are between-country and over-time variations in the presence of errors or incompleteness in death certificates, which might bias our results. Second, changes in death certification and the coding process over time (e.g., a transition from paper to electronic certification, from manual to automated coding systems, from ICD-9 to ICD-10) might bias the results of mortality trends. For example, considering an ICD-10 to ICD-9 comparability ratio above 1, our estimates for countries with declining trends are possibly an underestimation of the magnitude of the true trend, and for countries with increasing trends our estimates could be either an overestimation of the magnitude of the true trend or a bias in the direction of change. However, without knowing the country-specific comparability ratios it is hard to quantify the size of the biases in our estimations. Furthermore, it should be noted that these differences in ICD revision could not account for the observed disparities in the most recent decade, since most countries were applying the ICD-10 revision over this period. Third, due to lack of data we were not able to investigate the mortality of MSK subcategories. For example, only 29 (mostly European) countries out of 58 had the required data on mortality of RA over the study period (in these countries RA constituted 22% of all MSK deaths). Fourth, the countries included in our study were mainly upper-middle and high-income countries with a reliable vital registration system and available data in the WHO mortality database, and therefore our results might not be generalizable to lower-middle and low-income countries. Fifth, the small number of deaths in some countries might have limited the power of our study to detect significant joinpoints over the study period. In addition, the direct age-standardization method used in the study is sensitive to small numbers of events, and the results for countries with a low number of events should be interpreted with caution. It should also be noted that the WHO standard population was developed in 2001 and might not reflect demographic changes that have occurred since then. Furthermore, the disparity measures applied in the study are sensitive to outliers and cannot capture the socioeconomic gradient in MSK mortality. Sixth, this is a descriptive aggregate-level study and no causal inferences should be made from the findings. Despite these limitations, to the best of our knowledge, this is the first study to investigate temporal trends and between-country disparities in MSK mortality across a large number of countries. The results of the current study might provide useful insights on the epidemiological status of MSK and can be used by policy-makers in planning MSK management at both the national and global level.
Conclusion
The total number of MSK deaths and its proportion of all-cause deaths increased between 1986 and 2011, and this was mainly due to population aging. On the other hand, taking the potential impact of the ICD-10 revision into account, the pooled mean age-standardized MSK mortality rate declined over the study period, with a more favorable reduction among women. The highest MSK mortality rates were observed in Central America and the lowest in Eastern Europe. Between 1986 and 2011, the highest reduction in the MSK mortality rate was observed in women in Northern Europe and the highest increase in men in the Caribbean. Increases in MSK mortality rates in Southern Europe, the Caribbean, Central America, and South America during the most recent decade require further actions. Further investigations are required to explain the substantial absolute and relative disparities in the MSK mortality rate and its temporal trend between sexes, countries, and regions.
Asymmetric scattering between kinks and wobblers
The asymmetric scattering between wobblers and kinks in the standard $\phi^4$ model is numerically investigated in two different scenarios. First, the collision between wobblers with opposite phase is analyzed. Here, a destructive interference between the shape modes of the colliding wobblers takes place at the impact time. The second scenario involves the scattering between a wobbler and an (unexcited) kink. In this case the energy transfer from the wobbler to the kink can be examined. The dependence of the final velocities and wobbling amplitudes of the scattered wobblers on the collision velocity and on the initial wobbling amplitude is discussed. Both situations lead to very different fractal structures in the velocity diagrams.
Introduction
The scattering between kinks has become a very popular research topic in recent decades because of its astonishing properties [1][2][3]. The study of the collisions between kinks and antikinks in the φ^4 model was initially addressed in the seminal references [4][5][6]. As is well known, only two different scattering channels arise: bion formation (where kink and antikink collide and bounce back over and over, emitting radiation in every impact) and kink reflection (where kink and antikink collide and bounce back a finite number of times before moving away). These two channels are predominant, respectively, for low and large values of the initial collision velocity. A fascinating property that emerges in these studies is that the two previously mentioned channels are infinitely interlaced in the transition between these regimes, giving rise to a fractal structure embedded in the final versus initial velocity diagram. The kink reflection windows included in this region involve scattering processes where kink and antikink collide and bounce back a finite number of times before definitely escaping away. This kink dynamics could have important consequences for physical applications where the presence of these topological defects allows the understanding of certain non-linear phenomena. Kinks (and topological defects in general) have been employed in a wide variety of physical disciplines, such as Condensed Matter [7][8][9], Cosmology [10,11], Optics [12][13][14], molecular systems [15,16], Biochemistry [17], etc.
The appearance of a fractal structure in the velocity diagram describing the kink scattering for the φ^4 model is based on the existence of an internal vibrational mode (the shape mode) associated with the kink solutions. The presence of this massive mode together with the zero mode triggers the resonant energy transfer mechanism, which allows the redistribution of the energy between the kinetic and vibrational energy pools when the kinks collide. In a usual scattering event the kink and the antikink approach each other and collide. A certain amount of kinetic energy is transferred to the shape mode, such that kink and antikink become wobblers (kinks whose shape modes are excited), which try to escape from each other. If the kinetic energy of each wobbler is not large enough, both of them end up approaching and colliding again. This process can continue indefinitely or finish after a finite number of collisions. In this last case, enough vibrational energy is returned to the zero mode as kinetic energy, which allows the wobblers to escape. This mechanism and other related phenomena have been thoroughly analyzed in a large variety of models, revealing the enormous complexity of these events and the difficulty in explaining this phenomenon analytically. The collective coordinate approach has been used to accomplish this task for decades, reducing the field theory to a finite dimensional mechanical system, where the separation between the kinks and the wobbling amplitudes associated with the shape modes are promoted to dynamical variables. This method has been progressively improved, see for example [3,4,71,72] and references therein, and recently a reliable description of the kink scattering in the φ^4 model has been achieved in the reflection-symmetric case [73], by introducing in this scheme the removal of a coordinate singularity in the moduli space and choosing the appropriate initial conditions.
As previously mentioned, after the first collision the initially unexcited kink and antikink become wobblers, so in an n-bounce scattering process the subsequent n − 1 collisions can be understood as scattering processes between two wobblers. This observation justifies an intrinsic interest in the collision between these objects. The evolution of a single wobbler has been studied by employing perturbation expansion schemes by different authors, see [74][75][76] and references therein. The scattering between wobblers in the φ^4 model has been discussed in [77] for a space reflection symmetric scenario. This situation is relevant in the original kink scattering problem where the mirror symmetry is preserved. The goal of these investigations is to bring insight into the resonant energy transfer mechanism by means of numerical analysis of the scattering solutions derived from the corresponding Klein-Gordon partial differential equations. In this context it is worthwhile mentioning that the scattering of wobblers in the double sine-Gordon model has been studied by Campos and Mohammadi [78].
In this paper we shall continue with this line of research by investigating the asymmetric scattering between wobblers in two different scenarios, which are considered representative of this context. The scattering processes addressed in previous works involve wobblers which evolve with the same phase. This implies that a constructive interference between the shape modes associated to each wobbler takes place at the collision. In this work we propose the analysis of the scattering between wobblers with opposite phases, such that now a destructive interference between the vibrational modes occurs at the impact. The second scenario is described by the collision between a wobbler and an unexcited kink. This allows us to monitor the transfer of the vibrational energy from the wobbler to the kink. We will show that the fractal structures ruled by the resonance phenomenon in these two cases display very different patterns.
The organization of this paper is as follows: in Section 2 the theoretical background of the φ 4 model together with the analytical description of kinks and wobblers is introduced. The kink-antikink scattering is also discussed, which allows us to describe the numerical setting employed to study the problem. Section 3 is dedicated to study the scattering between wobblers with opposite phase, whereas the collision between a wobbler and an unexcited kink is addressed in Section 4. Finally, some conclusions are drawn in Section 5.
The φ 4 model: kinks and wobblers
The dynamics of the φ^4 model in (1+1) dimensions is governed by the action

S[φ] = ∫ d²x L(∂_μφ, φ),   (1)

where the Lagrangian density L(∂_μφ, φ) is of the form

L = (1/2) ∂_μφ ∂^μφ − (1/2) (φ² − 1)².   (2)

The use of dimensionless field and coordinates, as well as the Einstein summation convention, is assumed in expressions (1) and (2). Here, the Minkowski metric g_μν has been set as g_00 = −g_11 = 1 and g_12 = g_21 = 0. Therefore, the non-linear Klein-Gordon partial differential equation

∂²φ/∂t² − ∂²φ/∂x² + 2φ(φ² − 1) = 0   (3)

characterizes the time-dependent solutions of this model. The energy-momentum conservation laws imply that the total energy and momentum,

E[φ] = ∫ dx [ (1/2)(∂φ/∂t)² + (1/2)(∂φ/∂x)² + (1/2)(φ² − 1)² ],   P[φ] = −∫ dx (∂φ/∂t)(∂φ/∂x),   (4)

are system invariants. The kinks/antikinks (+/−),

φ_K^(±)(x, t; v_0, x_0) = ± tanh[ (x − x_0 − v_0 t)/√(1 − v_0²) ],   (5)

are travelling solutions of (3), whose energy density is localized around the kink center x_C = x_0 + v_0 t (the value where the field profile vanishes). The parameter v_0 can be interpreted as the kink velocity. As is well known, the solutions (5) are topological defects because they asymptotically connect the two elements of the set of vacua M = {−1, 1}. These solutions have a normal mode of vibration. When this mode is excited the size of these solutions (called wobbling kinks or wobblers) periodically oscillates with frequency ω = √3. This fact has been numerically checked and has been analytically proved in the linear regime. The spectral problem H ψ_{ω²}(x) = ω² ψ_{ω²}(x) of the second order small fluctuation operator associated with the static kink/antikink,

H = −d²/dx² + 4 − 6 sech² x,   (6)

involves the shape mode ψ_3(x) ∝ sech x tanh x with eigenvalue ω² = 3. The discrete spectrum of the operator (6) is completed with the presence of a zero mode ψ_0(x) ∝ sech² x, whereas the continuous spectrum emerges on the threshold value ω² = 4. As a result of this linear analysis, the expression

φ_W^(±)(x, t; a, δ, v_0, x_0) = ± [ tanh x̄ + a sech x̄ tanh x̄ cos(√3 t̄ + δ) ],   x̄ = (x − x_0 − v_0 t)/√(1 − v_0²),   t̄ = (t − v_0 (x − x_0))/√(1 − v_0²),   (7)

can be considered a good approximation of a traveling wobbler in the linear regime a ≪ 1. Note that φ_W^(±) reduces to the kink/antikink (5) when a = 0. The maximum deviation of the wobbler (7) from the kink (5) takes place at the points where the relation

tanh x̄ = ± 1/√2   (8)

holds. An optimized strategy to measure the wobbling amplitude of a traveling wobbler in a numerical scheme is to monitor the profile of these solutions at the points (8). By using fourth order perturbation theory in the expansion parameter a, it has been proved that a depends on time, a = a(t), and decays following an expression of the form

a(t) ≈ a(0) [ 1 + ξ_I a(0)² t ]^(−1/2),   (9)

where ξ_I is a constant. However, when the initial wobbling amplitude a(0) is small, the decay is very slow and becomes appreciable only after a long time t ∼ |a(0)|⁻² [74,75]. The scattering between a kink and an antikink has been thoroughly analyzed in the physical and mathematical literature during the last decades. In this case, a kink and antikink which are well separated are pushed together with initial collision velocity v_0. Taking into account the spatial reflection symmetry of the system, the kink can be located at the left of the antikink or vice versa. For very small values of the time t (with respect to the impact time), the previous scenario is characterized by the concatenation

Φ(x, t; v_0, x_0) = φ_K^(+)(x, t; v_0, −x_0) if x ≤ 0,   φ_K^(−)(x, t; −v_0, x_0) if x > 0,   (10)

for x_0 ≫ 0, where we have introduced the notation φ_K^(±)(x, t; v_0, x_0) for the kink/antikink (5) initially centered at x_0 and traveling with velocity v_0. The initial separation distance between the kink and the antikink is equal to 2x_0. The configuration (10) defines the initial conditions of the scattering problem. As it is well known, there exist two different scattering channels in this case: (1) bion formation, where kink and antikink end up colliding and bouncing back over and over, and (2) kink reflection, where kink and antikink collide, bounce, and finally recede with respective final velocities v_{f,L} and v_{f,R} in the opposite direction in which they were initially traveling.
These scattering regimes are predominant, respectively, for low and high values of the initial velocity v_0. In Figure 1 the two previously mentioned final velocities v_{f,L} and v_{f,R} are plotted as a function of the initial collision velocity v_0. From the spatial reflection symmetry exhibited by the initial configuration (10) it is clear that v_{f,L} = −v_{f,R} and that the velocity of a bion must be zero. Therefore, the velocity diagram in Figure 1 is symmetric with respect to the v_0-axis. In the next sections we shall address asymmetric scattering events where this symmetry is lost and |v_{f,L}| ≠ |v_{f,R}| in general. The fascinating property found in this scattering problem is that the transition between the two previously mentioned regimes is ruled by a fractal structure where the bion formation and the kink reflection regimes are infinitely interlaced. The kink reflection windows included in this initial velocity interval involve scattering processes where kink and antikink collide and bounce back a finite number of times, exchanging energy between the zero and shape modes before definitely moving away. These processes involve the so-called resonant energy transfer mechanism.
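As a concrete illustration of the solutions introduced in this section, the following minimal sketch (based on the expressions reconstructed above, not on the authors' code) evaluates the kink (5) and the approximate wobbler (7) on a grid and records the amplitude-monitoring points of relation (8).

```python
import numpy as np

def kink(x, t, x0=0.0, v0=0.0, sign=+1.0):
    xb = (x - x0 - v0 * t) / np.sqrt(1.0 - v0 ** 2)    # comoving (boosted) coordinate
    return sign * np.tanh(xb)

def wobbler(x, t, a=0.1, delta=0.0, x0=0.0, v0=0.0, sign=+1.0):
    gamma = 1.0 / np.sqrt(1.0 - v0 ** 2)
    xb = gamma * (x - x0 - v0 * t)
    tb = gamma * (t - v0 * (x - x0))                   # boosted time
    shape = np.tanh(xb) / np.cosh(xb)                  # shape-mode profile sech*tanh
    return sign * (np.tanh(xb) + a * shape * np.cos(np.sqrt(3.0) * tb + delta))

# Relation (8): the deviation from the kink is largest where |tanh(xb)| = 1/sqrt(2),
# i.e. at x = x_C +/- arctanh(1/sqrt(2)) * sqrt(1 - v0**2); monitoring the field there
# gives a clean estimate of the wobbling amplitude.
x_dev = np.arctanh(1.0 / np.sqrt(2.0))                 # ~0.881 in the comoving frame
```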
For the previously mentioned n-bounce processes (with n ≥ 2) it is clear that after the first impact the subsequent collisions correspond to scattering processes between wobblers because, in general, the collision between kinks causes the excitation of their shape modes. Taking into account the spatial reflection symmetry of the problem, the wobbling amplitudes and phases of the colliding wobblers are equal. Therefore, these events are characterized by an initial configuration of the form

Φ(x, t; v_0, x_0, a, δ) = φ_W^(+)(x, t; a, δ, v_0, −x_0) if x ≤ 0,   φ_W^(−)(x, t; a, δ, −v_0, x_0) if x > 0.   (12)

This scattering problem has been numerically studied in [77]. By mirror symmetry, it can be assumed that the phases of the shape modes of the traveling wobblers are also the same at the impact time, so a constructive interference takes place in the collision. As a consequence, it was found that the fractal pattern enlarges and becomes more complex as the value of the initial wobbling amplitude a increases. Another interesting property in this context is the emergence of isolated 1-bounce windows, which are not present in the original kink-antikink scattering. It is clear that the scattering between wobblers characterized by the initial configuration (12) is extremely relevant to study the resonant energy transfer mechanism in this problem. However, because of the spatial reflection symmetry of this type of processes, the wobblers transfer the same amount of energy to each other at the collision, that is, the scattered wobblers travel away with the same final speeds and wobbling amplitudes. In this work we are interested in analyzing more general scattering events where the energy transfer mechanism becomes asymmetric with respect to the traveling wobblers. The first type of processes which could involve novel properties in this framework is the collision between two wobblers with opposite phase. This scenario can be characterized by the initial configuration

Φ_{WW}(x, t; v_0, x_0, a, δ) = φ_W^(+)(x, t; a, δ, v_0, −x_0) if x ≤ 0,   φ_W^(−)(x, t; a, δ + π, −v_0, x_0) if x > 0.   (13)

We have employed the notation WW as subscript of Φ in (13) simply to emphasize that the wobblers have different initial phases and to distinguish this configuration from (12). In this case it is assumed that the wobblers evolve preserving a phase difference of π, giving rise to a destructive interference in the excitation of the shape modes of each wobbler when they collide. It is expected that the final versus initial velocity diagrams associated with these scattering events will be affected by this fact and that they will be very different from those found in the constructive interference scenario (12), analyzed in [77]. Another important situation which deserves attention is the scattering between a wobbler and a kink. These asymmetric events can be characterized by the initial configuration

Φ_{WK}(x, t; v_0, x_0, a, δ) = φ_W^(±)(x, t; a, δ, v_0, −x_0) if x ≤ 0,   φ_K^(∓)(x, t; −v_0, x_0) if x > 0,   (14)

where without loss of generality the non-excited antikink/kink φ_K^(∓) has been placed to the right of the wobbler/antiwobbler. This situation allows us to analyze how the vibrational energy is transferred to the non-excited kink in a better way than in the previous contexts.
In order to study the scattering between kinks and wobblers in the two previously described scenarios, in the present work we shall employ numerical approaches based on the discretization of the partial differential equation (3) with different initial conditions determined by the configurations (13) and (14). The particular numerical scheme used here is a fourth-order explicit finite difference algorithm implemented with fourth-order Mur boundary conditions, which has been designed to address non-linear Klein-Gordon equations, see the Appendix in [54]. The linear plane waves are absorbed at the boundaries in this numerical scheme, preventing radiation from being reflected at the simulation boundaries. To rule out the presence of spurious phenomena attributable to the use of a particular numerical algorithm, a second numerical procedure is used to validate the results. This double checking has been carried out by means of an energy conservative second-order finite difference algorithm with Mur boundary conditions.
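The authors' fourth-order scheme is described in the Appendix of [54]; the following is only a minimal second-order sketch of this kind of integrator for equation (3), with simple first-order Mur (outgoing-wave) boundary conditions, to make the setup explicit.

```python
import numpy as np

def evolve(phi_prev, phi, dx, dt, n_steps):
    """Leapfrog update of phi_tt - phi_xx + 2*phi*(phi**2 - 1) = 0."""
    c2 = (dt / dx) ** 2
    k = (dt - dx) / (dt + dx)                 # first-order Mur coefficient (unit wave speed)
    for _ in range(n_steps):
        phi_next = np.empty_like(phi)
        lap = phi[2:] - 2.0 * phi[1:-1] + phi[:-2]                 # discrete phi_xx
        force = -2.0 * phi[1:-1] * (phi[1:-1] ** 2 - 1.0)          # -dV/dphi
        phi_next[1:-1] = 2.0 * phi[1:-1] - phi_prev[1:-1] + c2 * lap + dt ** 2 * force
        phi_next[0] = phi[1] + k * (phi_next[1] - phi[0])          # absorbing left edge
        phi_next[-1] = phi[-2] + k * (phi_next[-2] - phi[-1])      # absorbing right edge
        phi_prev, phi = phi, phi_next
    return phi_prev, phi
```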
As previously mentioned, the initial settings for our scattering simulations are described by single solutions (kinks or wobblers) which are initially well separated and are pushed together with initial collision velocity v_0. This situation is characterized by the concatenation (13) for the scattering between wobblers with opposite phase and by (14) for the scattering between a wobbler and a kink, both of them with x_0 ≫ 0. These configurations satisfy the partial differential equation (3) to a very good approximation for very small values of the time when x_0 ≫ 0 and a ≪ 1. Therefore, Φ(t = 0) and ∂Φ/∂t (t = 0) provide the initial conditions of our scattering problem.
In particular, our numerical simulations have been carried out in a spatial interval x ∈ [−100, 100] where the centers of the single solutions are initially separated by a distance d = 2x 0 = 30. Simulations have been performed for v 0 ∈ [0.04, 0.9] with initial velocity step ∆v 0 = 0.001, which is decreased to ∆v 0 = 0.00001 in the resonance interval.
At this point it is worthwhile mentioning that the expression (7) is only an approximation of the exact wobbler solution. When this expression is employed as an initial condition in the Klein-Gordon equation (3), a small amount of radiation is emitted for a very small period of time. In this time interval the approximate solution (7) decays to the exact wobbler. When considering a traveling wobbler, this radiation emission can cause a very small change in its velocity. This effect takes place when δ ≠ 0, π in the expression (7) and it is maximized for δ = ±π/2. In order to avoid this effect we shall implement initial conditions by setting δ = 0 in the configurations (13) and (14). By taking this restriction we guarantee that the traveling wobbler involved in (14) continues to move with velocity v_0 after the initial radiation emission. As mentioned above, this effect is very small and unnoticeable in the final versus initial velocity diagrams. However, we shall analyze the velocity difference of the resulting wobblers and in this context it is better to avoid this influence. On the other hand, for the values δ = 0 or δ = π the decay of the approximation (7) to the real wobbler induces a very small variation in its wobbling amplitude. This effect is also very small and does not affect the global properties of the scattering processes discussed in this paper. An alternative scheme to implement initial configurations (14) with non-vanishing initial phases is to find an approximately equivalent configuration with vanishing phase. This can be obtained, for example, by taking into account that the phase δ in (7) can be absorbed into a shift of the initial time, which in turn amounts to a small displacement of the initial center x_0.
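To fix ideas, this is a minimal, self-contained sketch (not the authors' setup) of how the initial data for configuration (14) with δ = 0 could be assembled on the grid described above; the two arrays (the field at t = −dt and at t = 0) are exactly what a leapfrog-type integrator such as the one sketched earlier needs to start.

```python
import numpy as np

x = np.linspace(-100.0, 100.0, 4001)            # spatial grid, x in [-100, 100]
dx = x[1] - x[0]
dt = 0.5 * dx                                   # hypothetical stable time step
x0, v0, a, omega = 15.0, 0.30, 0.10, np.sqrt(3.0)
gamma = 1.0 / np.sqrt(1.0 - v0 ** 2)

def wobbler_plus_antikink(t):
    """Concatenation (14): wobbler centered at -x0 moving right, antikink at +x0 moving left."""
    xb_L = gamma * (x + x0 - v0 * t)            # comoving coordinate of the wobbler
    tb_L = gamma * (t - v0 * (x + x0))          # boosted time of the wobbler
    left = np.tanh(xb_L) + a * np.tanh(xb_L) / np.cosh(xb_L) * np.cos(omega * tb_L)
    xb_R = gamma * (x - x0 + v0 * t)            # comoving coordinate of the antikink
    return np.where(x <= 0.0, left, -np.tanh(xb_R))

phi_prev = wobbler_plus_antikink(-dt)           # field one step before t = 0
phi_now = wobbler_plus_antikink(0.0)            # field at t = 0
```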
Scattering between wobblers with opposite phases
In this section we shall analyze the asymmetric scattering between two wobblers whose shape modes have the same amplitude but opposite phases with respect to our inertial system, which is located at the center of mass. In this context, a wobbler and an anti-wobbler approach each other with initial velocities v_0 and −v_0, respectively. They evolve preserving the phase difference of π and collide, giving rise to a destructive interference between the shape modes of the involved wobblers. Figure 2 shows the final versus initial velocity diagrams for three representative values of the initial wobbling amplitude, a = 0.04, a = 0.1, and a = 0.2. Unsurprisingly, the only scattering channels to emerge in this new scenario are still bion formation and kink reflection. As before, the former is predominant for small values of the initial velocity v_0, while the latter is found for large values. However, these velocity diagrams display some important differences with respect to the scattering of wobblers addressed in [77], where the corresponding shape modes have the same phase and a constructive interference occurs in the collision. In this new context, the destructive interference prevents the emergence of isolated 1-bounce windows (at least for non-extreme values of a), as can be observed in Figures 2 and 3. The suppression of this mechanism implies that the width of the fractal structure does not grow.
In Figure 3 the evolution of the fractal pattern can be visualized as the value of the initial amplitude a increases. First, we can observe that the value of the critical velocity v_c varies very slowly as the initial amplitude a grows. For instance, v_c ≈ 0.2601 for a = 0.02, whereas v_c ≈ 0.2681 for a = 0.2, following a linear dependence on a for intermediate values. Second, it can be seen that the 2-bounce windows are deformed as the value of a increases and get broken up into smaller 2-bounce windows. The first 2-bounce window shown in Figure 3 for a = 0.04 can be used to illustrate this mechanism. This window gets distorted when a = 0.12 and splits into two pieces for a = 0.14. In turn, one of these pieces is divided again into two new 2-bounce windows for a = 0.20. Third, spontaneous generation of n-bounce windows with n ≥ 2 can also be identified in the sequence of graphics included in Figure 3. For instance, for a = 0.12 a small 3-bounce window spontaneously emerges in the interval [0.21585, 0.2174], which was occupied by the bion formation regime for previous values of a. Subsequently, this window is split into two parts, resulting in a 2-bounce window in the middle for a = 0.14, which is surrounded by new n-bounce windows. This new 2-bounce window gets bigger as a increases and finally splits into two new 2-bounce windows once more, as can be seen from the graphics for a = 0.20. This window generation mechanism could explain the clustering of 2-bounce windows that arises around the value v_0 = 0.2566 for a = 0.20. Another important characteristic of this type of scattering processes is that the final velocities of the scattered wobblers are different. This behavior is not surprising because the initial configuration (13) is not symmetric. Recall that the initial wobbling phases of the colliding wobblers are different. This velocity difference is very small, and therefore not noticeable in the velocity diagrams shown in Figure 2. In order to emphasize this feature we define the magnitude

Δv_f = |v_{f,R}| − |v_{f,L}|,

the difference between the final speed |v_{f,R}| of the rightward traveling wobbler and the final speed |v_{f,L}| of the leftward traveling wobbler. Positive values of Δv_f imply that the wobbler scattered to the right travels faster than the wobbler scattered to the left, whereas negative values describe the reverse situation. In Figure 4, the magnitude Δv_f is plotted as a function of the initial velocity v_0 and the wobbling amplitude a. There, we can see that Δv_f exhibits an oscillating behavior, which means that there are alternating initial velocity windows in which the wobbler traveling from the left travels faster than the wobbler traveling from the right and vice versa. The amplitudes of the oscillations exhibited by Δv_f grow as the value of the parameter a increases. This is reasonable because the vibrational energy stored in the shape mode is greater for bigger values of a and the resonant energy transfer mechanism may deflect a greater amount of this energy to the kinetic energy pool. However, the most remarkable property exhibited by Figure 4 is that the zeroes of Δv_f, the initial velocity values for which the two wobblers disperse with the same velocity, are approximately independent of the initial amplitude a. This behavior is precisely followed for sufficiently large values of v_0, where the effect of the resonance regime is not noticed (approximately for v_0 ≥ 0.3 in Figure 4).
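In practice, the final speeds entering Δv_f can be extracted from a simulation by tracking the kink centers (the zero crossings of the field) at late times and fitting straight lines to their trajectories; a minimal sketch of such a measurement (not the authors' procedure) is given below.

```python
import numpy as np

def kink_centers(x, phi):
    """Positions where the field changes sign, by linear interpolation."""
    s = np.sign(phi)
    i = np.where(s[:-1] * s[1:] < 0)[0]
    return x[i] - phi[i] * (x[i + 1] - x[i]) / (phi[i + 1] - phi[i])

def final_speed_difference(times, centers_left, centers_right):
    """Delta v_f = |v_{f,R}| - |v_{f,L}| from late-time center trajectories."""
    v_L = np.polyfit(times, centers_left, 1)[0]     # slope = velocity of the left wobbler
    v_R = np.polyfit(times, centers_right, 1)[0]    # slope = velocity of the right wobbler
    return abs(v_R) - abs(v_L)
```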
In Table 1, the zeros v_k of the final velocity difference Δv_f (explicitly computed for the case a = 0.04) are shown in the non-resonance regime. The values v_k correspond to the nodes of the oscillations found in Figure 4, which have been marked by means of vertical dashed lines. The location of these points seems to depend mainly on the value of the wobbling phase when the collision between the wobblers occurs. This conjecture is heuristically supported by the following simple argument. Remember that x_0 denotes the initial position of the kink center, while ω represents the wobbling frequency. As previously discussed, the values x_0 = 15 and ω = √3 have been implemented for our numerical simulations. Let v_0 be the initial velocity at which the wobblers are initially approaching. In the point particle approximation the collision would happen at the time t_I = x_0 / v_0. We must bear in mind that there are several factors in the real dynamics which break the precision of this assumption. For example, the interaction between the kinks and/or wobblers can make the collision velocity vary (it is not a constant velocity v_0). We shall assume that the phase of the wobbler at the instant t_I can be expressed as

θ_I(v_0) = ω t_I + c(x_0) = ω x_0 / v_0 + c(x_0),

where c(x_0) is a correction factor which is included to incorporate the previously mentioned behavior.

The main assumption in this case is that c(x_0) does not depend on v_0. If we think about the initial impact velocity as a variable v, then it makes sense to consider the phase

θ_I(v) = ω x_0 / v + c(x_0).

Those phenomena depending only on the wobbling phase must exhibit a periodicity based on the relation

θ_I(v_k) − θ_I(v_0) = k T,   k ∈ Z,   (16)

where T is the periodicity associated with our problem. In general T = 2π, but in the present scenario, where we are interested in the zeroes v_k of Δv_f, the symmetry of the initial configuration leads to the choice T = π. From (16) we conclude that the discrete set of velocities

v_k = f_k(v_0, T) = ω x_0 v_0 / (ω x_0 + k T v_0)   (17)

must share similar features. The nodes v_k of Δv_f can be approximately figured out by using equation (17). In Table 1 (third column) the values V_k = f_k(v_0, π), obtained by using the formula (17), are compared with the zeros v_k extracted from the simulations.

Table 1: Comparison between the zeros v_k of the final velocity difference Δv_f and the values V_k = f_k(v_0, π) obtained by using equation (17) for the scattering between wobblers with opposite phase and initial wobbling amplitude a = 0.04.
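Taking the reconstructed form of (17) at face value, the candidate velocities can be generated numerically; the reference velocity used below is purely illustrative.

```python
import numpy as np

omega, x0 = np.sqrt(3.0), 15.0

def f_k(v0, k, T=np.pi):
    """Velocities whose point-particle impact phase differs from that of v0 by k*T."""
    return omega * x0 * v0 / (omega * x0 + k * T * v0)

v_ref = 0.60                                   # hypothetical reference velocity
print([round(f_k(v_ref, k), 4) for k in range(-2, 3) if k != 0])
```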
At this point it is worthwhile mentioning that the zeroes v_k introduced in Table 1 have been computed when δ = 0 in the initial configuration (13). The particular location of these points depends on the initial phase δ introduced in (13), although it is clear that the same pattern is periodically reproduced for the values δ + kT with k ∈ Z. Once the final velocities of the scattered wobblers have been examined, we shall now analyze the behavior of the wobbling amplitude of these evolving topological defects. In Figure 5, the oscillation amplitudes of the wobblers moving to the left and to the right are represented as a function of the initial velocity v_0 and the initial amplitude a. There it can be seen that this magnitude follows an oscillating behavior around the values found for the kink-antikink scattering. The variation of these oscillations grows as the parameter a increases. Furthermore, the amplitudes of the resulting wobblers follow an antagonistic behavior. When the oscillation amplitude of the wobbler moving to the left reaches a maximum as a function of the initial velocity v_0, the oscillation amplitude of the wobbler moving to the right is minimized and vice versa. The asymmetry of the initial configuration (13) causes the wobblers to vibrate with different amplitudes in general. On the other hand, there are some points in the graphs shown in Figure 5 where the amplitudes of the two wobblers coincide. Surprisingly, these points coincide with the zeroes v_k of the final velocity difference Δv_f (as can be observed by means of the vertical dashed lines plotted in Figure 5). In conclusion, for the initial velocities v_k the scattered wobblers travel with the same velocity and vibrate with the same wobbling amplitude.
In order to explore the relation between the final velocity and the final wobbling amplitude of the scattered wobblers, we define the amplitude difference

Δa = a_{f,R} − a_{f,L},

where a_{f,R} and a_{f,L} are, respectively, the final oscillation amplitudes of the wobblers moving to the right and to the left. Δa > 0 means that the wobbler scattered to the right vibrates more strongly than that moving to the left, whereas Δa < 0 describes the opposite situation. Figure 6 shows simultaneously the final velocity and amplitude differences Δv_f and Δa as functions of the initial velocity v_0 for the particular value a = 0.10. It can be seen that when a scattered wobbler gains more kinetic energy than the other, it obtains less vibrational energy, and vice versa. The values v_k are interpreted as the collision velocities for which the final velocities and the wobbling amplitudes of the scattered wobblers are the same. Finally, another consequence of the asymmetry of these scattering events is that the bion (formed as a bound state between the two colliding wobblers) can now move with a certain final non-vanishing velocity after the impact. This velocity will be very small and for this reason it is sometimes difficult to compute its magnitude numerically. In Figure 7 the region of the velocity diagram introduced in Figure 2 for a = 0.10 with v_0 ∈ [0.10, 0.18] has been enlarged to illustrate the behavior of the bion velocity. Again, we find an oscillating pattern, clearly seen in Figure 7 for the interval v_0 ∈ [0.13, 0.16]. Also, it turns out that the formula (17) approximately describes the location of the nodes of this oscillating pattern as well.
Scattering between a kink and a wobbler
In this section we shall study the scattering between a wobbler and a kink. This scenario is characterized by the concatenation (14). With the first choice of signs, this configuration describes a wobbler and an antikink which travel respectively with velocities v 0 and −v 0 . The rightward traveling wobbler and the leftward traveling antikink approach each other, collide, and bounce back. As usual, the formation of a bion and the reflection of the solutions complete the list of possible scattering channels. In the reflection regime, the initially unexcited antikink becomes an anti-wobbler after the collision because, in general, the shape mode of this solution is excited. Therefore, after the impact two wobblers emerge moving away with different final velocities in our inertial system. The goal of this study is to analyze the transfer of the vibrational and kinetic energies between the resulting wobblers. The dependence of the final velocities of the scattered extended particles on the initial velocity v 0 has been graphically represented in Figure 8 for the cases a = 0.04, a = 0.1, and a = 0.2.
Some of the most relevant characteristics described in [77] for the scattering between wobbling kinks are also found in this framework, such as the emergence of isolated 1-bounce windows and the growing complexity of the fractal pattern as the initial wobbling amplitude a of the originally rightward-traveling wobbler increases. It is also worthwhile mentioning the presence of oscillations in the 1-bounce tail arising for large values of the initial velocity. However, these features are less accentuated in this scenario. The reason for this behavior lies in the fact that the constructive interference is maximized when the wobblers collide with the same wobbling phase. In particular, we can observe the existence of two isolated 1-bounce windows in Figure 8. From this list of 1-bounce windows, it can be verified that once an isolated 1-bounce window emerges its location remains approximately fixed (although its width slightly grows) as the initial wobbling amplitude a increases. This behavior can be checked in Figure 9. Note that the deviation from the rule described above is a small translation of the center of these windows. In Figure 9 the vertical dashed lines mark the values of the initial velocity which determine the centers of the 1-bounce windows for the extreme case a = 0.2. Once again, these velocities approximately follow relation (17), which reveals that the role of the phase of the evolving shape mode is predominant in this phenomenon.
The velocity diagrams shown in Figure 8 also have some distinctive properties of their own. Because the scattering processes introduced in this section are asymmetric, the final velocities of the resulting wobblers are different, as well as their wobbling amplitudes. In order to illustrate this feature more clearly, the difference Δv_f between the final speeds of the scattered wobblers is plotted for different values of the wobbling amplitude a in Figure 10. For the sake of simplicity, only 1-bounce events have been included in Figure 10. As in the case of the scattering between wobblers with opposite phase discussed in Section 3, the zeros of this function Δv_f are approximately independent of the initial amplitude a and, indeed, coincide with the zeroes v_k introduced in Table 1 in Section 3. This behavior reflects the fact that the initially rightward-traveling wobbler defined in the configuration (14) has the same initial conditions as those given by the configuration (13). In Figure 11 the final wobbling amplitudes of the scattered wobblers are plotted as a function of the initial velocity v_0 and the initial wobbling amplitude a. Recall that a_L(v_0, a) and a_R(v_0, a) represent, respectively, the final wobbling amplitudes of the resulting leftward and rightward traveling wobblers after the collision. We can observe that the shape modes of the scattered wobblers become excited and that their amplitudes behave similarly as a function of the initial velocity, oscillating around the values found for the kink-antikink scattering events (with a = 0). However, the amplitude of these oscillations is much bigger for the final rightward traveling wobbler.
To illustrate the role of the zeroes v_k of the final velocity difference ∆v_f shown in Table 1 in this scenario, the functions ∆v_f and ∆a have been represented simultaneously for the case a = 0.10 in Figure 12. As in the scattering between wobblers with opposite phase, the values v_k determine the initial velocities for which the final velocities and the final wobbling amplitudes are the same for both scattered wobblers.
Conclusions
This paper delves into the study of the scattering between wobbling kinks initially addressed in [77]. Here, we have investigated the asymmetric scattering between kinks and wobblers (kinks whose shape mode is excited) in the standard φ4 model. In particular, two different scenarios in this context have been considered: (a) the scattering between wobblers with opposite phases, and (b) the scattering between a wobbler and an unexcited antikink. Both cases exhibit the usual bion formation and reflection regimes, which are infinitely interlaced, forming a fractal structure embedded in the final versus initial velocity diagram. However, the first case involves a destructive interference of the shape modes in the collision. As a consequence, the growth in the complexity of the fractal pattern is smaller than that found in [77], where the colliding wobbling kinks travel with the same phase, leading to a constructive interference at the impact. For example, the emergence of isolated 1-bounce windows is not found in this new case (at least for moderate values of the initial wobbling amplitude a), although the splitting of n-bounce windows is present. On the other hand, the kink scattering in the second scenario displays features similar to (although more attenuated than) those found in [77].
Due to the asymmetry of the initial configurations (13) and (14), the final velocities and wobbling amplitudes of the scattered wobblers are different in general. However, there is a sequence of initial velocities for which both the final velocities and wobbling amplitudes coincide. These values are almost independent of the initial wobbling amplitude a when the initial wobbling phase considered in (13) and (14) is fixed. Besides, the values of these velocities follow expression (17) to a good approximation. This means that the phase associated with the shape modes of the evolving wobblers at the collision instant plays a predominant role in the scattering properties of these objects. Indeed, (17) allows us to obtain values of the initial velocities which share similar features. For example, this expression has been used in the second scenario to predict the location of the maxima of the isolated 1-bounce windows. Finally, it is also worthwhile mentioning the results displayed in Figures 6 and 12. It can be verified that, systematically, when a scattered wobbler gains more kinetic energy than the other, it acquires less vibrational energy, and vice versa.
The research introduced in the present work opens up some possibilities for future work. For example, the φ6 model exhibits a resonance regime similar to that of the φ4 model, although its second-order small-fluctuation operator does not present vibrational eigenstates. The characteristics of scattered wobbling kinks can be analyzed to study their influence on the resonant energy transfer mechanism. Alternatively, one can build a model twin to the φ6 model that involves internal modes. By doing this, we could compare the scattering processes of the twin model with those of the standard φ6 model. In this way, it will be possible to examine the role that shape modes play in the collision process. Furthermore, many other topological defects (kinks in the double sine-Gordon model, deformed φ4 models, hybrid and hyperbolic models, etc.) could be studied from the new perspective presented here. Work in these directions is in progress.
FcER1: A Novel Molecule Implicated in the Progression of Human Diabetic Kidney Disease
Diabetic kidney disease (DKD) is a key microvascular complication of diabetes, with few therapies for targeting renal disease pathogenesis and progression. We performed transcriptional and protein studies on 103 unique blood and kidney tissue samples from patients with and without diabetes to understand the pathophysiology of DKD injury and its progression. The study was based on the use of 3 unique patient cohorts: peripheral blood mononuclear cell (PBMC) transcriptional studies were conducted on 30 patients with DKD with advancing kidney injury; Gene Expression Omnibus (GEO) data was downloaded, containing transcriptional measures from 51 microdissected glomeruli from patients with DKD. Additionally, 12 independent kidney tissue sections from patients with or without DKD were used for validation of target genes in diabetic kidney injury by kidney tissue immunohistochemistry and immunofluorescence. PBMC DKD transcriptional analysis identified 853 genes (p < 0.05) with increasing expression with progression of albuminuria and kidney injury in patients with diabetes. GEO data was downloaded, normalized, and analyzed for significantly changed genes. Of the 325 significantly upregulated genes in DKD glomeruli (p < 0.05), 28 overlapped in PBMC and diabetic kidney, with perturbed FcER1 signaling as a significantly enriched canonical pathway. FcER1 was validated to be significantly increased in advanced DKD, where it was also seen to be specifically co-expressed in the kidney biopsy with tissue mast cells. In conclusion, we demonstrate how leveraging public and private human transcriptional datasets can discover and validate innate immunity and inflammation as key mechanistic pathways in DKD progression, and uncover FcER1 as a putative new DKD target for rational drug design.
INTRODUCTION
Diabetes mellitus (DM) is a systemic disease characterized by an inability of the body to either produce or effectively respond to the glucose-regulating hormone, insulin. The International Diabetes Federation estimated in 2017 that there are 425 million people (both diagnosed and undiagnosed) with diabetes in the world, a number that will reach 629 million by 2045. The kidney is a highly vulnerable tissue in the diabetes milieu, as the prevalence of end-stage renal disease (ESRD) is up to 10 times higher in people living with diabetes (1). Diabetic nephropathy or diabetic kidney disease (DKD) is a key microvascular complication of diabetes, classically identified by the presence of proteinuria (microalbuminuria in early stages and macroalbuminuria as DKD advances) in people with diabetes. However, increasing evidence has shown that a significant number of patients with type 2 DM may have decreased glomerular filtration rate (GFR) without significant albuminuria, known as non-albuminuric DKD (2). Progression of DKD is more likely to occur in patients who have long-standing diabetes, poor glycemic control, or associated morbidities such as hypertension or obesity. However, the rate of progression to kidney failure in people with diabetes who are non-proteinuric is much lower than in those who are proteinuric (3). The lifetime risk of DKD is roughly equivalent in type 1 (insulin-dependent, juvenile-onset) and type 2 (adult-onset) diabetes (4,5). There are no identified therapies that can specifically reverse or slow down the progression of DKD injury. Current DKD management involves general measures including lifestyle modification, blood pressure control, and glycemic control, plus the use of lipid-lowering drugs, albuminuria-reducing drugs, and treatment with sodium-glucose co-transporter 2 (SGLT2) inhibitors (6)(7)(8)(9)(10)(11). Among people with diabetes, the development of DKD carries a higher mortality risk. A substantial proportion of people with DKD will have progressive loss of kidney function and will develop ESRD (12). Hence, there is a clear unmet need to understand the specific biological basis of kidney injury in diabetes, to understand pathways that contribute to progressive DKD injury, and to develop specific DKD-targeted therapies that can reverse or slow down the unrelenting progression of DKD and improve both the quality and quantity of life for this patient group.
The development and progression of DKD are thought to involve a combination of hemodynamic, metabolic, ischemic, and inflammatory factors (13)(14)(15)(16). However, the exact mechanisms of kidney injury in a hyperglycemic milieu, with underlying genetic and racial risk factors that may increase DKD disease risk, remain to be elucidated. The predominant structural changes include mesangial expansion, glomerular basement membrane thickening, podocyte injury, and, ultimately, glomerular sclerosis (17). Albuminuria and progressive chronic kidney disease (CKD) are major clinical manifestations of DKD (18). Although the classic lesions of diabetic nephropathy are similar in both type 1 and type 2 diabetes, renal lesions are more heterogeneous in patients with type 2 diabetes, with some patients developing more advanced vascular or chronic interstitial lesions rather than diabetic glomerulopathy (19,20).
The principal biomarkers presently used to predict DKD progressions are albuminuria and estimated glomerular filtration rate (eGFR) (21). However, not all cases of classical DKD are accompanied by increases in albuminuria, which reduces the value of this biomarker, particularly in early DKD (22)(23)(24). Optimizing the management of diabetes can reduce the rate of kidney function decline, but with a lack of clarity on DKD pathogenesis, specific DKD therapies have not been developed to date (25).
We hypothesized that innate immunity and inflammation may play an important mechanistic role in DKD progression. Our study is designed to evaluate such perturbations in the kidney tissue and circulating blood of people with diabetes, and then interrogates the expression of specific markers in the biosamples with the increased severity of diabetic kidney damage and dysfunction. The hypothesis stems from the fact that dendritic cells, mast cells, and macrophages have all been reported to infiltrate diabetic kidneys (15,26). A population study of diabetes revealed a positive correlation between plasma IgE and diabetes or prediabetes (27). Increased levels of eosinophilia positively correlated with worsening stages of CKD and DKD (28), thus providing evidence of dysregulated innate immunity. However, this is not an invariant finding in all people living with diabetes. Histological findings are consistent with apparent observations of mast cells and eosinophils in kidney tissues from DKD.
In this paper, we carefully mapped three independent, nonoverlapping cohorts of individuals with or without diabetes, to identify genes that might play a crucial role in the development and progression of DKD. We performed an unbiased discovery analysis of gene expression changes in peripheral blood mononuclear cells (PBMC) collected from individuals with varying stages of DKD in Cohort 1. Analysis of genes in Cohort 1 was focused on evaluating pathways that may play a role in the proinflammatory component of DKD injury, and are also impacted by the advanced stages of kidney injury (Table 1). With the assumption that infiltration of leukocytes in the diabetic kidney may impact the severity of glomerular disease progression in diabetes, in Cohort 2 we performed unbiased discovery of transcriptional changes in the glomerulus of DKD tissue. We aimed to find enrichment of DKD specific overlapping genes between the two cohorts, which can then be used as a biomarker for DKD. Finally, in Cohort 3, we independently validated the protein expression and localization of the most enriched gene in renal tissue samples from varying stages of DKD, assessing if genes identified from Cohort 1 and 2 could be localized in DKD tissue. Figure 1 summarizes the overall study design.
Study Design and Samples
This project benefits from a unique study design, utilizing analysis of transcriptional data from 103 unique samples from 2 different tissue sources: PBMC and kidney tissue. In addition to evaluating common gene expression signatures in DKD across both tissue sources, we also designed the study to evaluate the impact of transcriptional changes with the increasing severity of DKD injury. This study is divided into three cohorts: microarray gene expression analyses from PBMCs and kidney glomeruli (publicly available datasets) form cohorts 1 and 2, respectively, which were used for unbiased discovery of genes that correlated with DKD severity. Finally, in cohort 3, we performed in situ validation of the target gene expression in the DKD kidney (Figure 1).
Cohort 1 consisted of transcriptional profiling of PBMC from 30 unique DKD patients with different stages of CKD and 10 normal healthy controls without diabetes or kidney damage (no proteinuria and eGFR > 90 ml/min/1.73m 2 ) (29). The 30 patients with diabetes have advancing stages of renal injury, as mapped by proteinuria and eGFR: 11 had eGFR > 90 ml/min/1.73m 2 and without any microalbuminuria, 7 had eGFR > 90 ml/min/ 1.73m 2 , with microalbuminuria, 5 patients had eGFR between 60-90 ml/min/1.73m 2 , with macroalbuminuria, and 7 patients had macroalbuminuria and eGFR < 60 ml/min/1.73m 2 . DM patients were on insulin therapy. Demographic and clinical details are provided in Table 1. All blood samples were collected between November 2011 and August 2012 from a single academic center (University of Wisconsin). Written informed consent was obtained from all the participants, and the study was approved by the institutional review boards of the University of Wisconsin and the University of California, San Francisco.
Cohort 2 consisted of 51 unique patients whose glomeruli had been microdissected and transcriptionally profiled (GSE1009, GSE47183, and GSE30528), from either patients with DKD (n=25) or controls without DKD (minimal change disease; n=26). Diabetic nephropathy is characterized by diffuse or nodular glomerulosclerosis, afferent and efferent hyaline arteriolosclerosis, and tubulointerstitial fibrosis and atrophy (30). In this study, we focused on glomerular genes that may contribute to glomerulosclerosis, which is the typical initial clinical manifestation of DKD. Since the hemodynamic phenotype in early diabetes is characterized by glomerular hyperfiltration, which has been associated with progressive diabetic nephropathy, we selected genes enriched with expression in the kidney glomerulus for separation of the initial cohorts, as diabetes is a pathology that primarily results in changes in the kidney filtration unit. These smaller subcompartment gene set differences allow for a greater understanding of specific changes in the kidney glomerulus in DKD injury, which eventually is associated with progressive diabetic nephropathy.
Cohort 3 consisted of 12 unique kidney tissue samples, including 3 patients without diabetes who had normal renal tissue resected at nephrectomy for renal tumor removal and 9 DKD patients with renal biopsies with varying grades of CKD: 3 with eGFR > 90, 3 with eGFR 60-90, and 3 with eGFR < 60 ml/min/1.73m2. Cohort 3 served as the independent validation set for DKD-specific renal tissue gene expression, as mined from an overlap of genes from Cohorts 1 and 2.
PBMC Collection and RNA Extraction
Total RNA was extracted from PBMC with the RNeasy mini kit (Qiagen); RNA concentration was measured by NanoDrop ND-1000 (NanoDrop Technologies), and RNA integrity was assessed on the Agilent 2100 Bioanalyzer using RNA Nano Chips (Agilent Technologies). RNA was stored in RNase-free water at −80°C until sample preparation for transcriptional analysis; unbiased discovery was done by microarrays (Agilent Technologies), and validation was done for target genes by quantitative reverse transcriptase-polymerase chain reaction (qRT-PCR).
PBMC Microarray Hybridization
Complementary DNA (cDNA) was prepared by reverse transcribing the total RNA using T7-promoter primer and MMLV reverse transcriptase; 100 ng of total RNA processed by the Agilent LIRAK PLUS, two-color Low RNA input Linear
Immunohistochemistry
Histological analysis of target gene expression was performed via immunohistochemistry (IHC) in human kidney tissue (Cohort 3). Formalin-fixed paraffin-embedded (FFPE) sections of 5 µm were stained with mouse monoclonal antibodies against our target gene of interest. We purchased the primary FcER1 antibody (sc-390222) and secondary m-IgG Fc BP-HRP (sc-525409) from Santa Cruz Biotech (Dallas, TX) and used at a dilution of 1:100 with appropriate negative controls.
Immunofluorescence

Immunofluorescence (IF) was performed on FFPE tissue sections. Paraffin blocks were deparaffinized and rehydrated, with heat-induced epitope retrieval in citrate buffer (pH 6.0), and blocked with 5% BSA. Kidney tissue sections were incubated overnight with primary human antibodies. The primary antibodies used were: mast cell tryptase, CD68, and CD45 (DAKO). The secondary antibodies used were: anti-mouse IgG FITC and anti-rabbit IgG Texas red (Vector Laboratories). Sections were mounted in Prolong gold antifade-DAPI aqueous mounting medium (Invitrogen USA) and visualized.

FIGURE 1 | Customized bioinformatics allowed for the selection of genes that correlated with DKD severity in both data sets by hypergeometric enrichment. Network and pathway analysis was performed with significant overlapping genes. One target with the most significant expression with DKD severity and the associated cell type was validated in an independent set of DKD samples (Cohort 3).
Data Analysis
All data analyses were run using R version 4.0.4 and Python 3.8.8 (31,32). The raw PBMC microarray data were processed using the Limma package in R and normalized within (loess) and between (aquantile) arrays (33). The levels of gene expression across different levels of proteinuria (i.e., normoalbuminuria, microalbuminuria, and macroalbuminuria) were measured and ranked by the Jonckheere-Terpstra (JT) test. JT is a rank-based nonparametric method for testing differences between more than two groups. It was used to rank genes with increasing or decreasing expression across different levels of albuminuria (Figure 2). The JT test function of the SAGx package in R was implemented for this analysis (34). The Principal Component Analysis (PCA) plot showing unique genes from the microarray analysis performed on PBMCs from patients at different stages of DKD has three outliers (Figure 2A), which were removed to plot the dendrogram of unsupervised clustering for all samples in Figure 2B. These 3 samples had poor sample quality with greater missing data, which may account for their outlier distribution. Some intra-cohort variability is expected in human phenotyping studies. Despite this, we show that the majority of patients in each cohort, as shown in Figure 2B, fall within groupings that are driven in a biologically meaningful manner. Data visualization was done with GraphPad. Significant trends in gene expression with the level of albuminuria were computationally determined. The facet grid plot (Figure 2D) was created by using Python 3.8.8. The analysis utilized the Pandas and Seaborn libraries (35,36). Candidate genes were selected by taking the top ten statistically significant genes. For analysis of Cohort 2, probe labels across different glomerular microarray platforms in GEO were converted to Entrez Gene identifiers with AILUN, and further analysis was performed by expression genome-wide analysis (eGWAS) (37). A one-tailed t-test was used to calculate P values between cases and controls. P values were converted to Z-scores and meta-analysis was performed based on the weighted Z-method. Expression leading to a Z-score above 5 or below -5 was considered significant. Manhattan plots were generated by using the qqman and qnorm R libraries (Figure 3A) (38,39). Pathway analysis dot plots were constructed by using Pandas and ggplot2 (Figures 2E, 3B) (40). For pathway analysis dot plots the FDR limit was set to <= 0.05. We calculated the likelihood of repeated differential expression of genes across the kidney samples with DKD and compared specific glomerular gene expression from Cohort 2 with completely independent PBMC gene expression data from patients with DKD in Cohort 1, to enrich for DKD-specific, biologically relevant gene expression signatures from different DKD tissue sources (Table 2 and Figure 4). We used Ingenuity Pathway Analysis (IPA) (http://www.ingenuity.com/; QIAGEN, Redwood City, CA, USA) to connect a comprehensive list of genes potentially associated with the development of diabetes. IPA is a bioinformatic tool that connects a list of genes into a set of networks based on the Ingenuity Knowledge Base, which contains information on biomolecules (represented by nodes in the networks) and their relationships (represented by edges and arrows in the networks). In our study, we uploaded the overlapping genes between cohorts 1 and 2 to IPA and performed the core analysis function to detect the signaling pathways that are potentially associated with diabetes.
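As a concrete illustration of the meta-analysis step described above, the following minimal Python sketch applies the weighted Z-method (Stouffer's method) to one-tailed p-values from several datasets and uses the ±5 cutoff mentioned in the text. The example values and the square-root-of-sample-size weighting are illustrative assumptions, not the exact eGWAS implementation.

import numpy as np
from scipy.stats import norm

def weighted_z_meta(p_values, directions, weights):
    # Combine one-tailed p-values across datasets with the weighted Z-method.
    # directions: +1 if the gene is up-regulated in DKD vs. control, -1 if down.
    p = np.asarray(p_values, dtype=float)
    d = np.asarray(directions, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = d * norm.isf(p)                                # isf(p) = Phi^{-1}(1 - p)
    return np.sum(w * z) / np.sqrt(np.sum(w ** 2))     # Stouffer's weighted combination

# Toy example: one gene measured in three glomerular datasets (values are made up).
p_vals = [0.004, 0.02, 0.0008]        # one-tailed t-test p-values
dirs = [+1, +1, +1]                   # up-regulated in DKD in all three datasets
wts = np.sqrt([10, 25, 16])           # sqrt(sample size) weights (an assumption)
z = weighted_z_meta(p_vals, dirs, wts)
print(f"combined Z = {z:.2f}, significant = {abs(z) > 5}")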
The resulting pathways, functions, and networks are scored based on the negative base-10 logarithm of the p value from a right-tailed Fisher's exact test.
The p values obtained using this test identify statistically significant enrichment of the focus genes in a given function, pathway, or network (41)(42)(43). Imaging in Cohort 3 was analyzed on Definiens Tissue Studio (Definiens, Germany) ( Figure 5).
Cohort 1: DKD Gene Expression in PBMCs Correlates With Disease Progression and Identifies FcER1 as the Topmost Gene With the Highest Expression
Transcriptional analysis of PBMC samples from patients with progressive kidney injury and diabetes identified 853 genes with increased expression and 1355 genes with decreased expression (JT test; p < 0.05) with the progression of albuminuria and worsening of eGFR in patients with diabetes. The clustering of these samples before and after the JT test is shown in Figures 2A-D. In Figure 2D, candidate genes were selected by taking the top ten statistically significant genes. The analysis demonstrates the higher expression of FcER1 in more advanced DKD (Supplementary Table 1). It is interesting to note that the PBMC transcriptomes of health and ESRD cluster closer together (Figures 2A, B). The dot-plot in Figure 2E shows top pathways in cohort 1 that include Toll-like receptor, insulin-related, TNF, and type 2 DM signaling pathways.

FIGURE 3 | (A) Different glomerular microarray platforms in GEO were converted to Entrez Gene identifiers and analyzed by eGWAS (Cohort 2). Using a one-tailed t-test, P values between DKD and controls were calculated, converted to Z-scores, and meta-analysis was performed based on the weighted Z-method. Gene expression leading to a Z-score above 5 or below -5 was considered significantly up- or downregulated.
Cohort 2: Publicly Available Datasets Identify FcER1 Signaling as the Most Upregulated Pathway in DKD Glomerular Injury
Datasets in GEO were computationally interpreted by eGWAS to identify differential gene expression associated with DKD glomerular injury (Figure 3A). We identified 325 significantly upregulated and 248 downregulated genes in DKD glomeruli. The dot-plot in Figure 3B shows top pathways in cohort 2, which include T-cell receptor, FcGR-mediated, FcER1-mediated, AGE-RAGE, and Toll-like receptor signaling pathways. Pathway enrichment analysis showed that the FcER1 signaling pathway is the most upregulated KEGG pathway, and various types of immune responses (e.g., defense, positive regulation, activating signal transduction) are significantly enriched among all biological processes. It was interesting to note that biological processes including kidney and nephron development are significantly downregulated.
Overlapping Genes From Cohorts 1 and 2 That Correlate With DKD Severity Identified FcER1 Signaling as the Common Most Enriched Pathway
We interrogated overlapping increased specific gene expression from the meta-analysis of publicly available transcriptional data (cohort 2) with lab-generated PBMC transcriptional studies in DKD (cohort 1). Twenty-eight genes were found to be significant for DKD and overlapping between both datasets. Pathway analysis of these enriched genes with IPA revealed canonical pathways that are significantly overrepresented in this gene set, which are presented in Figure 4A. FcER1G was found to be the most significantly dysregulated gene in DKD tissue and showed increased expression in PBMC with increasing severity of DKD renal injury (Table 2). FcER1 signaling was also identified as one of the significantly enriched canonical pathways, along with Tec Kinase, NF-κB, IL-10, and T Helper cell differentiation pathways. Figure 4B shows that FcER1G is the center node connecting most of the identified significant genes. Figure 4C is the known signaling pathway of FcER1, where the proteins including FcER1, TNF, SLP, and Ras are also present in the common 28-gene list. FcER1 protein expression was then validated in kidney tissue samples obtained from patients with varying CKD stages of DKD (Cohort 3). FcER1 expression was high in the renal interstitium and, to some extent, in glomeruli. Significantly increased immunopositivity to FcER1 was noted in samples with DKD (Figures 5A-D), with significance for a progressive increase in staining intensity of FcER1 with disease severity [mild-moderate DN (p < 0.01) and severe DN (p < 0.001)] when compared to controls (Figure 5E). The quantitative changes in FcER1 tissue gene expression in advancing DKD disease in Cohort 3 were mirrored by a similar trend in quantitation of FcER1 PBMC gene expression in advancing DKD disease, in an entirely different cohort of patients in Cohort 1 (Figures 5F, G). IF co-staining of Cohort 3 DKD tissue samples with both FcER1 and inflammatory cell markers specifically showed co-expression with tryptase, a mast cell marker (Figure 6).
DISCUSSION
Current therapy for DKD is mostly focused on the use of angiotensin-converting enzyme inhibitors (ACEi) or angiotensin receptor blockers (ARBs) to reduce intra-glomerular pressure (44), along with multifactorial interventions such as improved control of blood pressure, blood glucose, and lipids, and smoking cessation, as a means of slowing the decline of renal function in diabetes (45). Sodium-glucose-linked transporter 2 (SGLT2) inhibitors have been shown to further reduce renal events in patients with diabetes as an additive benefit over ACEi or ARB therapy. SGLT2 inhibitors suppress renal glucose reabsorption to reduce blood glucose and body weight, alter renal hemodynamics, reduce intraglomerular pressure, and attenuate diabetes-associated hyperfiltration, tubular hypertrophy, and tubular toxicity of glucose to directly protect the kidney. Furthermore, SGLT2 inhibitors reduce the workload of the proximal tubules to improve tubulointerstitial hypoxia, and then allow fibroblasts to resume normal erythropoietin production, thereby protecting the kidney (46). While DKD is not considered a primarily "immune-mediated" form of kidney disease, extensive evidence supports the involvement of many immune system components in DKD progression and even initiation. Emerging knowledge about the dysregulation of the immune response and inflammation suggests a key pathogenic link between aberrant innate and adaptive immunity, metabolism, and progressive kidney damage, and thus opens the door to exploring new immune-modulatory therapies in DKD. Inflammation is being increasingly understood to be a prominent pathological feature of diabetic nephropathy. Increased inflammation and inflammatory markers have been shown to predict worse renal outcomes (15). Recently, baricitinib and the CCR2 antagonist CCX140 have been shown to reduce proteinuria in DKD, likely because of a global effect on renal tissue inflammation (47,48). CCR2, expressed by monocytes and macrophages, is the main receptor for the pro-inflammatory chemokine CCL2. Urinary excretion of CCL2 correlates with the severity of DKD, again supporting the role of increased inflammation and renal injury in diabetes (49)(50)(51)(52). Additionally, the importance of pharmacologic intervention to block inflammation in the diabetic kidney can be seen in pre-clinical models, where either blockade of CCR2 or lack of CCR2 in a knockout model can reduce renal macrophage and monocyte infiltration and may translate to a downstream reduction of interstitial fibrosis in the kidney (45,53).
Recognizing that there is a critical immune-mediated component of renal injury in diabetes, we undertook an innovative study design (Figure 1) of unbiased transcriptional discovery and independent gene and protein validation of key dysregulated pro-inflammatory pathways in independent patients with diabetes with varying stages of DKD injury, examining different tissue sources, PBMC (Figure 2) and kidney tissue (Figure 3). This analysis reveals a skewed gene expression profile of an overlapping hub of genes, in both specific immune cells in the blood and similar infiltrating immune cells in the DKD kidneys, with significant enrichment in Fc epsilon receptor (FcER1)- and T-cell receptor-mediated signaling pathways (Figures 2E, 3B). From the list of candidate genes, tumor necrosis factor (TNF), toll-like receptor (TLR), and chemokine receptor 1 (CCR1) have been previously described in the literature as playing a role in DKD, supporting the biological relevance of the identified gene set in this study (54)(55)(56)(57)(58). Microarray analysis of the PBMC dataset (Cohort 1) identified FcER1, TRPC3, SNX20, FAM20A, and SLC12A7 as genes showing increased expression with the severity of diabetes (Supplementary Table 1). FcER1 and TRPC3 are both expressed in mast cells and associated with inflammatory signaling. SNX20 plays a role in cellular vesicle trafficking, FAM20A is expressed in hematopoietic cells, and SLC12A7 is required for basolateral Cl(-) extrusion in the kidney and contributes to renal acidification. In Cohort 2, our eGWAS meta-analysis data evaluated up-regulated genes in microdissected glomeruli from DKD patients. Comparing both the PBMC and kidney tissue in independent DKD cohorts surprisingly identified a common set of 28 genes with significantly increased expression in both DKD PBMCs and glomeruli (Table 2), with high significance for FcER1 in DKD disease (Figure 4). Further expression and colocalization of FcER1 in yet another independent set of human DKD kidney tissue samples, with varying stages of renal injury, confirmed increased FcER1 gene and protein levels in human DKD injury, where FcER1 expression co-localized in infiltrating mast cells (Figures 5, 6). It is important to understand that the glomerular gene expression in cohort 2 derives from the entire microdissected glomerulus, which includes glomeruli and glomerular-interstitial spaces. The tubulointerstitium is a continuation of the glomerular interstitium, so it is understandable that the inflammatory response spills into these contiguous mesenchymal spaces as the DKD pathology recruits more cells. Also, even in early DKD, although the major involvement is of the glomerulus, there is inflammatory cell recruitment from the tubular interstitium. All of this could explain the expression of FcER1 protein in tubules as well (Figures 5 and 6). In the future, we will address the tubular involvement in DKD and its association with FcER1.
FcER1 is a high-affinity IgE receptor expressed on mast cells, basophils, eosinophils, and antigen-presenting cells (59). Mast cells are an innate immune cell type that has been previously shown to infiltrate the renal parenchyma in pre-clinical models of DKD and human diabetic kidney disease. We hypothesize that mast cells increasingly infiltrate the human diabetic kidney and support an inflammatory milieu, through increased expression of FcER1, which then may play a critical role in DKD progression. Increased intra-renal FcER1 activation in infiltrating mast cells in a patient with diabetes may promote the release of many inflammatory mediators, including TGF-β, TNF-α, IL-6, tryptase, and IL-1, which can then subsequently drive the development of renal fibrosis. In diabetic nephropathy, mast cells participate in renal fibrosis by contributing to excessive accumulation of extracellular and mesangial matrix and also by producing non-fibrillar short-chain type VIII collagen (60). Preclinical studies support this hypothesis, as mast cell inhibitors such as cromolyn or ketotifen (Zaditor) have been found to protect mice with diabetes from the development of renal injury (27). Some of this reno-protective effect may likely be mediated by reducing the expression of FcER1. In this study, we found co-expression of the mast cell marker tryptase and FcER1 in DKD with a trend towards higher tissue expression of both with DKD progression (Figure 6). This fact argues for the biological relevance of this axis in DKD damage rather than it being only a non-specific effect of the kidney merely being exposed to a diabetic milieu. Also, we noted that the PBMC transcriptomes of healthy and ESRD samples cluster closer together in Figure 2, which is consistent with the IF results in Figure 6, where the kidney FcER1 expression appears to be dampened in patients with severe diabetic nephropathy compared with moderate disease. We believe this is because both healthy and ESRD samples have lower levels of immune response gene expression compared to active DKD kidney injury.
In conclusion, we demonstrate how transcriptomic datasets may be combined and integrated to highlight the most robust markers. This study highlights the importance of both immune and nonimmune mechanisms driving diabetic nephropathy. The innate immune response occurring in the diabetic kidney is an anticipated consequence of the chronic stresses and injury in the diabetic kidney. Ultimately, the ongoing diabetic renal inflammation results in substantial kidney damage with progressive fibrosis that eventually leads to end-stage renal disease. Therefore, therapeutic strategies targeting the innate immune system will be important for the treatment of diabetic nephropathy. This is the first study implicating a direct role of the IgE receptor FcER1 in DKD progression, thus uncovering a new druggable target for improving DKD or slowing DKD progression. Further validation studies are planned to evaluate the role of FcER1 blockade in preclinical models of diabetes to directly address the efficacy of targeting this molecule as a novel therapeutic for DKD.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/, GSE142153.
ETHICS STATEMENT
Written informed consent was obtained from all the participants, and the study was approved by the institutional review boards of the University of Wisconsin and University of California, San Francisco.
AUTHOR CONTRIBUTIONS
MN conducted experiments. SS, PB, and MN performed data analysis, data interpretation, manuscript writing. HS participated in the sample collection and study design. TS contributed in experiments, data analysis and manuscript writing. MS conceived the study, analysis and data interpretation, writing and revising the manuscript critically for intellectual content. All authors contributed to the article and approved the submitted version.
Shortcuts to Quantum Approximate Optimization Algorithm
The Quantum Approximate Optimization Algorithm (QAOA) is a quantum-classical hybrid algorithm intended to find the ground state of a target Hamiltonian. Theoretically, QAOA can obtain the approximate solution if the quantum circuit is deep enough. In practice, however, the performance of QAOA decreases if the quantum circuit is deep, since near-term devices are not noise-free and the errors caused by noise accumulate as the quantum circuit grows. In order to reduce the depth of quantum circuits, we propose a new ansatz dubbed "Shortcuts to QAOA" (S-QAOA). S-QAOA provides shortcuts to the ground state of the target Hamiltonian by including more two-body interactions and releasing the parameter freedoms. To be specific, besides the existing ZZ interaction in the QAOA ansatz, other two-body interactions are introduced in the S-QAOA ansatz such that the approximate solutions could be obtained with smaller circuit depth. Considering the MaxCut problem and the Sherrington-Kirkpatrick (SK) model, numerical computation shows the YY interaction has the best performance. The reason for this might arise from the counterdiabatic effect generated by the YY interaction. On top of this, we release the freedom of parameters of the two-body interactions, which a priori do not necessarily have to be fully identical, and numerical results show that it is worth paying the extra cost of having more parameter freedom since one obtains a greater improvement in success rate.
I. INTRODUCTION
In the noisy intermediate-scale quantum (NISQ) era [1], the number of reliable quantum operations is limited by quantum errors, which include quantum decoherence, rotation error, and so on. Thus people are interested in quantum-classical hybrid algorithms whose quantum circuit depth is decreased with the help of classical optimizers, such as the Quantum Approximate Optimization Algorithm (QAOA) [2], which is expected to obtain approximate solutions for combinatorial optimization problems. In this hybrid algorithm, the quantum state is prepared by a quantum computer, and the parameters in the quantum circuit are optimized by a classical optimizer to find an evolution path that needs less circuit depth. Furthermore, QAOA is expected to have a better performance than quantum adiabatic algorithms (QAA) and to obtain a quantum advantage using near-term devices [3,4]. However, the performance of QAOA is limited by the noise of the near-term devices [5], and Google's experiment for QAOA with their Sycamore superconducting qubit quantum processor shows that errors overwhelm the theoretical performance increase at larger layers [6]. Therefore executing QAOA on near-term quantum computers is a challenging task. It is important to reduce the circuit depth of QAOA to make it achievable for near-term devices.
QAOA can be regarded as a digitized and variational version of QAA. QAA starts with a simple-to-prepared ground state of an initial Hamiltonian, and the adiabatic theorem guarantees that the ground state of the * cyh@originqc.com † gpguo@ustc.edu.cn final Hamiltonian can be obtained if the time-dependent Hamiltonian varies slowly. The adiabatic condition requires that the running time of QAA scales as T ∼ O(1/∆ min 2 ), ∆ min is the minimal spectral gap [7] of the quantum system during the evolution, thus many problems are hard to optimize using QAA because of the overlong annealing time. Besides, the digitization of QAA may lead to a deep quantum circuit to minimize the trotter error [8,9]. QAOA is expected to overcome these problems by optimizing parameters (including evolution path and duration for every digitized step) using a classical optimizer. The optimized parameters of QAOA are related to a fast evolution path, so QAOA is expected to break through the limits of the adiabatic condition [3].
Shortcuts to adiabaticity (STA) [10,11] is a class of methods to accelerate the quantum adiabatic process; counterdiabatic (CD) driving [12][13][14] is a technique of STA to reduce the finite-speed diabatic effect by adding counter terms to the time-dependent Hamiltonian. The CD driving Hamiltonian has a better performance and reduces the evolution time compared to adiabatic evolution [15]. Besides, Ref. [16] proposes a superadiabatic route to implement universal quantum computation by using CD driving, and the energy cost of STA via CD driving is studied in Refs. [16,17]. In addition, energy cost optimization of CD driving, through the correct choice of the counterdiabatic Hamiltonian, is studied in Ref. [18]. Recently, Ref. [4] indicates QAOA is at least counterdiabatic and has a better performance than finite-time adiabatic evolution. There is also an effort to add a counterdiabatic term to the ansatz of QAOA to reduce the quantum circuit depth [19,20].
Our work focuses on the MaxCut problem and the Sherrington-Kirkpatrick (SK) model, both of which are NP-hard problems on which QAOA is expected to provide a quantum acceleration. In this work, we investigate the counterdiabatic effect of QAOA, and the simulation result implies that adding a two-gate term associated with the YY interaction to the quantum circuit will accelerate the optimization of QAOA. Besides, we release the QAOA parameter freedom of the two-body interactions (including the ZZ interaction and the YY interaction) to reduce the quantum circuit depth, e.g., each two-body interaction has its own independent parameter. The idea of extending the parameter freedom was mentioned in 2017 [21], using random initial parameters for each ZZ interaction. In our case, firstly, we optimize the QAOA parameters to get an optimal global value, and then use the optimal parameters of QAOA as the initial parameters of each two-body term to do a further local optimization. The proposed algorithm uses the philosophy of STA, so we call it "Shortcuts to QAOA" (S-QAOA), and the simulation results show that S-QAOA can get a good result with a shallower quantum circuit compared with QAOA.
II. PROBLEMS AND QUANTUM ALGORITHMS
In this work, we focus on the MaxCut problem and the SK model. The MaxCut problem is defined on a graph G(V, E), where V = {1, 2, · · ·, n} is the set of vertices and E = {((i, j), w_ij)} is the set of edges, with (i, j) a pair of connected vertices and w_ij the weight of the edge (i, j). We study two classes of graphs: the first class is unweighted 3-regular graphs (u3R), whose weights w_ij are a constant for all edges, e.g. w_ij = 1, ∀(i, j) ∈ E; the second class is weighted 3-regular graphs (w3R), whose weights w_ij are uniform random numbers in the range [0, 1]. The target Hamiltonian of the MaxCut problem is: The SK model is defined on the complete graph, and the coefficient w_ij is randomly chosen from the set {−1, 1}. The Hamiltonian of the SK model is:

A. Quantum Adiabatic Algorithms

QAA is able to find the ground state of a target Hamiltonian if the annealing process is slow enough, and its feasibility is guaranteed by the adiabatic theorem. In QAA, a simple-to-prepare quantum state is chosen as the initial state, which usually is the uniform superposition over computational basis states: ψ(t = 0) = |+⟩^⊗n, where n is the number of qubits of the quantum system. ψ(t = 0) is the ground state of the Hamiltonian H_B = −∑_{i=1}^{n} X_i. The time-dependent Hamiltonian of QAA is: where H_C represents the target Hamiltonian. The time-evolved state driven by H would be: where ℏ = 1 and T is the time-ordering operator. The approximate ground state of H_C can be obtained at the end, t = T, if the Hamiltonian H varies adiabatically. STA is able to suppress the finite-speed diabatic excitations by adding the CD driving term to the Hamiltonian H [14]; this CD term is theoretically known and given by the adiabatic gauge potential (AGP) A_λ [22]. However, it requires knowledge of the spectral properties of the instantaneous Hamiltonian to obtain the AGP, and the exact AGP is typically nonlocal, which makes the experimental implementation difficult [23]. The exact AGP can be approximated by a local parameterized gauge potential A_λ(α), and the requirement on A_λ(α) is to minimize the action S(A_λ); the coefficient α can be determined variationally [24]. Furthermore, the approximate AGP can be systematically constructed by the nested commutators [25]: The first order of the approximate AGP is the commutator of the initial Hamiltonian H_B and the target Hamiltonian H_C, so the counterdiabatic Hamiltonian becomes: and it can be proven that the coefficient α_1 is negative [4]. In the l → +∞ limit, the CD driving terms A_λ^(l)(α) are able to compensate the diabatic excitations exactly, but only the first few orders of A_λ^(l)(α) are considered in practice to avoid nonlocal operations.
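To make the problem instances defined at the beginning of this section concrete, the short Python sketch below evaluates the classical cost of a spin configuration for a small weighted MaxCut instance and a small SK instance. It assumes the common convention that the cost is a sum of weighted Z_i Z_j terms with Z_i = ±1; the exact normalization of the displayed Hamiltonians (not reproduced above) may differ by constant factors, and the toy graphs and coupling draws are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

def ising_energy(z, edges):
    # Energy of a spin configuration z (entries +/-1) for H = sum_{(i,j)} w_ij z_i z_j.
    return sum(w * z[i] * z[j] for (i, j), w in edges.items())

def cut_value(z, edges):
    # Total weight of the edges cut by the partition encoded in z.
    return sum(w for (i, j), w in edges.items() if z[i] != z[j])

# w3R-style example: the complete graph K4 is 3-regular; weights uniform in [0, 1].
w3r_edges = {(i, j): rng.random() for i in range(4) for j in range(i + 1, 4)}

# SK example on the complete graph with couplings drawn from {-1, +1}.
sk_edges = {(i, j): rng.choice([-1, 1]) for i in range(4) for j in range(i + 1, 4)}

z = rng.choice([-1, 1], size=4)
print("MaxCut cut value:", cut_value(z, w3r_edges))
print("SK energy       :", ising_energy(z, sk_edges))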
B. Quantum Approximate Optimization Algorithm
The ansatz of QAOA consists of a series of digitized evolution steps, and the parameters of QAOA are optimized by a classical optimizer [2]: The purpose of the classical optimization is to find the optimal parameters to minimize the expectation of the target Hamiltonian: If the layer p of QAOA is large enough, the expectation E will decrease during the optimization and converge toward the ground-state energy of the target Hamiltonian. The effective Hamiltonian of QAOA can be constructed by the second-order Baker-Campbell-Hausdorff (BCH) expansion [4]: Comparing the results of Eq. 8 and Eq. 11, we find that the effective Hamiltonian H_eff of QAOA has a first-order CD driving term, and the coefficient of this term is also negative. So the evolution of QAOA will include some counterdiabatic effects to compensate the diabatic excitations.
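The following NumPy sketch illustrates the alternating structure described above for a small instance by statevector simulation: the cost unitary e^{-iγ H_C} is a diagonal phase over bitstrings, and the mixer e^{-iβ H_B} factorizes into single-qubit X rotations. The cost convention H_C = Σ w_ij Z_i Z_j and the toy ring graph are assumptions for illustration, not the paper's implementation.

import numpy as np

def cost_diagonal(n, edges):
    # Diagonal of H_C = sum_{(i,j)} w_ij Z_i Z_j over the 2^n computational basis states.
    diag = np.zeros(2 ** n)
    for state in range(2 ** n):
        z = [1 - 2 * ((state >> q) & 1) for q in range(n)]   # |0> -> +1, |1> -> -1
        diag[state] = sum(w * z[i] * z[j] for (i, j), w in edges.items())
    return diag

def apply_mixer(psi, beta, n):
    # Apply e^{-i beta sum_i X_i}, i.e. a single-qubit rotation exp(-i beta X) on every qubit.
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    psi = psi.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def qaoa_expectation(gammas, betas, n, edges):
    # <psi(gamma, beta)| H_C |psi(gamma, beta)> for a p-layer QAOA circuit.
    hc = cost_diagonal(n, edges)
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)   # uniform superposition |+>...|+>
    for gamma, beta in zip(gammas, betas):
        psi = np.exp(-1j * gamma * hc) * psi                    # cost layer (diagonal phase)
        psi = apply_mixer(psi, beta, n)                         # mixer layer
    return float(np.real(np.vdot(psi, hc * psi)))

# Toy example: p = 2 layers on a 4-vertex ring with unit weights.
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
print(qaoa_expectation([0.4, 0.7], [0.6, 0.3], n=4, edges=edges))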
C. Shortcuts to QAOA
Inspired by the technique of STA, we consider the counterdiabatic effect of QAOA by introducing more two-body interactions: The ansatz and the effective Hamiltonian become: where α_k, k = 1, · · ·, p, are the variational parameters for H_M. The effective Hamiltonian in Eq. 12 might have more CD terms and would accelerate the process of quantum optimization. Introducing more interactions leads to a deeper quantum circuit; this problem can be partially solved by a compact implementation of H_M and H_C (Eq. 13), which does not need extra SWAP gates when using a superconducting quantum computer with local connectivity. Furthermore, the combination of the YY/XX and ZZ interactions can be realized using two CNOT gates and some single-qubit gates, which has the same number of CNOT gates as an implementation of the ZZ interaction alone (the details of this part will be discussed in Sec. IV). In this case, the circuit depth of S-QAOA and QAOA can be regarded as the same, because the errors caused by CNOT gates are more serious than those caused by single-qubit gates.
Another difference between S-QAOA and QAOA is the optimization of the parameters: in S-QAOA, after the optimization of the QAOA parameters, the parameter freedoms of the two-body interactions are released and a further optimization is performed. More parameters will improve the expressivity of the quantum circuit and can yield a better result than QAOA at the same quantum layer. The procedure of S-QAOA is as follows:
1. Optimize the QAOA parameters using the INTERP strategy [3] and obtain the optimal parameters (β̄, γ̄)_p for layer p.
2. Release the parameter freedom of the ZZ interaction, and introduce an extra two-body interaction M_ij = (P_i Q_j + Q_i P_j)/2. The initial parameters of S-QAOA are: β_k = β̄_k, γ_k^ij = γ̄_k, α_k = 0. The strength of the M_ij interaction should be positively related to that of the Z_i Z_j interaction, so the parameter γ^ij is added to each M_ij interaction. (This is the ansatz of S-QAOA for the SK model; it is the same for the MaxCut problem except for a coefficient 1/2. For simplicity, we will only show the ansatz for the SK model in the following.)
3. Use the finite-difference method (with a small constant step) to calculate the gradients g(θ) of the parameters ({γ_k^ij}, β_k, α_k). Set a threshold δ_1, and if |g(θ)| > δ_1, add θ to set A.
4. Optimize the parameters in set A until convergence; if the decrease in energy after the optimization is smaller than a threshold δ_2, exit; else, update the optimized parameters and return to step 3.
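A schematic Python sketch of steps 3 and 4 is given below. Here energy(params) stands in for the evaluation of the circuit expectation (not implemented), the gradients are estimated with a central finite difference, and the step size, thresholds, and the use of SciPy's BFGS routine for the inner optimization are illustrative assumptions rather than the exact choices made in the paper.

import numpy as np
from scipy.optimize import minimize

def finite_diff_grad(energy, params, eps=1e-3):
    # Central finite-difference estimate of dE/dtheta_k for every parameter.
    grad = np.zeros_like(params)
    for k in range(len(params)):
        shift = np.zeros_like(params)
        shift[k] = eps
        grad[k] = (energy(params + shift) - energy(params - shift)) / (2 * eps)
    return grad

def sqaoa_refine(energy, params, delta1=1e-2, delta2=1e-4, max_rounds=20):
    # Steps 3-4: optimize only the parameters whose gradients exceed the threshold delta1,
    # and stop when a round of optimization lowers the energy by less than delta2.
    params = np.asarray(params, dtype=float)
    e_old = energy(params)
    for _ in range(max_rounds):
        grad = finite_diff_grad(energy, params)
        active = np.where(np.abs(grad) > delta1)[0]        # set A
        if active.size == 0:
            break
        def restricted(x, active=active, base=params.copy()):
            full = base.copy()
            full[active] = x
            return energy(full)
        res = minimize(restricted, params[active], method="BFGS")
        params[active] = res.x
        e_new = energy(params)
        if e_old - e_new < delta2:
            break
        e_old = e_new
    return params, e_old

# Usage sketch with a stand-in quadratic in place of the S-QAOA circuit expectation.
toy_energy = lambda p: float(np.sum((p - 0.3) ** 2))
print(sqaoa_refine(toy_energy, np.zeros(5)))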
III. SIMULATION RESULT
We study the u3R and w3R MaxCut problems on 14-vertex graphs, and the SK model on 6-vertex graphs. In each case, the results are averaged over 20 random graphs. Three ansatzes are studied: QAOA; only releasing the parameter freedom of the ZZ interaction (this ansatz only contains the ZZ interaction, and for simplicity we call it the 'ZZ technique' below); and adding an extra two-body interaction while releasing the freedom of parameters (S-QAOA). The simulation result implies that the third ansatz has the best performance in all cases, and a suitable extra two-gate term is able to accelerate the optimization process significantly. A comprehensive study of the optimal type of extra two-gate term in S-QAOA can be found in Fig. 8, which shows the performance and comparison of all the possible extra two-gate types M_ij = (P_i Q_j + Q_i P_j)/2, with PQ ∈ {YZ, YY, XX, XZ, XY}. In general, the YY interaction has the best performance, and it is included in the S-QAOA ansatz. Thus the operation of each layer k ∈ [1, p] of S-QAOA is: The results of the u3R MaxCut problem are shown in Fig. 1. Obviously, only releasing the freedom of parameters of the existing ZZ interaction will produce a better result than QAOA at the same layer. On top of this, adding the YY interaction will improve the performance further, especially at low layers where the performance of QAOA is not so good. When the quantum layer p increases, the difference between the three ansatzes becomes smaller; this is because the evolution time is large enough for QAOA when the quantum layer is large, leaving little room for improvement. Fig. 2 demonstrates the simulation results for w3R graphs, and the superiority of S-QAOA is more obvious in this case: the average fidelity is improved significantly even at p = 10. The MaxCut problem on w3R graphs is more difficult than on u3R graphs, since the energy gap of the MaxCut Hamiltonian on a w3R graph is smaller than that of a u3R graph, and a longer evolution time is needed to satisfy the adiabatic condition. The result shows the potential of S-QAOA to solve problems that are difficult for QAA and QAOA.
The SK model is defined on the complete graph, and it is challenging to implement on a NISQ device with limited qubit connectivity. Because of the all-to-all interaction of the SK model, all the nodes can be entangled together at p = 1, and there are sufficient parameters to optimize if the parameters of the ZZ interactions are independent. Thus a fairly good result can be obtained at p = 1 by only releasing the parameter freedom (Fig. 3). Furthermore, if a YY interaction is added to the ansatz, the fidelity is obviously improved and reaches about 80% at p = 1. S-QAOA introduces only one extra parameter compared with the ZZ technique, and the performance of S-QAOA is significantly better than the latter at p = 1. The significant improvement produced by the YY interaction confirms its effect in countering the diabatic excitations and accelerating the process of quantum optimization. The difference between S-QAOA and the ZZ technique becomes smaller and smaller as the quantum layer p increases, because the parameter freedoms of the ZZ technique are sufficient for the optimization at large layer p, leaving not much room for S-QAOA to improve.
S-QAOA has better performance in all the cases we study; for a specific quantum layer, R_p = p_S/p_Q > 1, where p_S (p_Q) is the probability of obtaining the optimal solution with S-QAOA (QAOA). S-QAOA performs a further optimization and has more parameters than QAOA, so the number of function evaluations of S-QAOA is larger than that of QAOA. We therefore consider the ratio R_f = f_S/f_Q, where f_S (f_Q) is the cumulative number of function evaluations of S-QAOA (QAOA), which faithfully reflects the total cost of the algorithm. More specifically, the extra evaluations of S-QAOA consist of f_G, the number of function evaluations for calculating the gradients of the parameters, and f_O, the number of function evaluations for the further optimizations in S-QAOA. It is necessary to consider whether it is cost-effective to do these extra optimizations. We show the ratio of R_f and R_p, R_fp = R_f/R_p, in Fig. 4; R_fp < 1 indicates that it is worthwhile to do the extra optimizations of S-QAOA to produce a higher improvement in fidelity. It is clear that R_fp < 1 for p ≤ 4 in almost all cases. As p increases, QAOA can produce quite high fidelity for the SK model and the u3R MaxCut problem; there is little space for S-QAOA to improve, so the ratio R_fp for the SK model and the u3R MaxCut problem approaches 1 or even exceeds 1 for large p. For the w3R MaxCut problem, the fidelity of QAOA is far from 1, so S-QAOA can improve the fidelity effectively with some further optimizations. In all, S-QAOA is an effective way to improve the result of QAOA, especially in cases where QAOA has limited performance.
IV. DISCUSSION
The quantum circuit of QAOA consists of the alternating implementation of e^{−iγH_C} and e^{−iβH_B}, and the nested commutators of H_B and H_C are able to span the entire Lie algebra associated with the Hilbert space of the n-qubit system. QAOA can approximate any element of the entire unitary group U(2^n) if a sufficiently deep quantum circuit is applied [26,27].

Figure caption (partial): ... MaxCut problems and SK model. We include 20 random instances for each problem. There are some unusual data in the w3R MaxCut case; the reason is that the cost function of S-QAOA is the expectation of the energy, so it is possible to get a lower fidelity with a better expectation value. These unusual points disappear when p increases.

Based on the existing generators H_B and H_C, S-QAOA provides an extra generator associated with the two-body interaction to accelerate the process of approximating the desired unitary operation. The numerical result shows that the generator associated with the YY interaction has the best performance, and the quantum circuit depth is reduced significantly by including it in the S-QAOA ansatz (Fig. 8). The advantage of the YY interaction might be explained by its connection with the CD driving terms (Eq. 5). For the MaxCut problem and SK model, the first order of Eq. 5 is a YZ-type term: there is only a little improvement when we add this YZ term to the quantum circuit of Eq. 13 (Fig. 8). The limited improvement of the YZ term is possibly because the effective Hamiltonian of QAOA already contains the first order of the CD driving terms (Eq. 11). In order to introduce more counter terms to compensate the diabatic excitations, we consider the second order of the CD driving terms (Eq. 13): there are some positive coefficients c_1, c_2, c_3, c_4, and {n_i} represents the neighbors of vertex i, i.e., (i, j) ∈ E, ∀j ∈ {n_i}. The first two terms on the right side of Eq. 18 can be generated if a series of H_1 has the same form as the first order of the CD driving terms (Eq. 8). H_2 and H_3 will generate terms of the same form as the first two terms on the right side of Eq. 18: H_2 contains the same interaction as the first term on the right side of Eq. 18, and H_3 contains the same interaction as the second term. Besides, the signs of the coefficients of H_2 and H_3 are the same, as are those in Eq. 18. So adding the YY interaction to the S-QAOA ansatz can partially compensate the diabatic excitations.
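To make the structure of the first-order term concrete, the small NumPy check below builds H_B and H_C for a 3-vertex toy instance, forms the commutator i[H_B, H_C], and decomposes it in the Pauli basis. Under the conventions assumed here (H_B = -sum_i X_i and H_C = sum_{(i,j)} w_ij Z_i Z_j; the instance and couplings are illustrative), only YZ/ZY-type two-body strings survive, which is the structure of the YZ counter term discussed above.

import numpy as np
from itertools import product
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = {"I": I2, "X": X, "Y": Y, "Z": Z}

def pauli_string(labels):
    # Tensor product of single-qubit Paulis, e.g. "YZI" -> Y (x) Z (x) I.
    return reduce(np.kron, (paulis[c] for c in labels))

n = 3
edges = {(0, 1): 1.0, (1, 2): 0.5, (0, 2): -1.0}   # toy couplings w_ij

def two_body(a, i, b, j):
    s = ["I"] * n
    s[i], s[j] = a, b
    return pauli_string("".join(s))

H_B = -sum(pauli_string("".join("X" if q == k else "I" for q in range(n))) for k in range(n))
H_C = sum(w * two_body("Z", i, "Z", j) for (i, j), w in edges.items())

C = 1j * (H_B @ H_C - H_C @ H_B)                   # first-order nested commutator i[H_B, H_C]

# Decompose C in the Pauli basis; only Y_i Z_j / Z_i Y_j strings have non-zero weight.
for labels in product("IXYZ", repeat=n):
    P = pauli_string("".join(labels))
    coeff = np.trace(P @ C) / 2 ** n
    if abs(coeff) > 1e-9:
        print("".join(labels), np.real_if_close(coeff))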
Besides, the parameter freedom of the ZZ interaction is released to further reduce the quantum circuit depth, and since the strengths of the YY and ZZ interactions should be positively related, the coefficient γ_ij is added to each YY interaction: The coefficient α plays the same role as in STA and is also determined variationally.
To reduce the number of CNOT gates, the order of operations in the ansatz is adjusted, and the operations of the YY and ZZ interactions are combined together: The combined implementation of the YY and ZZ interactions can be realized as shown in Fig. 5. Fig. 6 shows the comparison of the performance of the ansatzes in Eq. 22 and Eq. 23 for the MaxCut problem on w3R graphs. The result implies that the performances of those two ansatzes are basically the same, so we choose the ansatz in Eq. 23 to reduce the quantum circuit depth.
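As a quick numerical sanity check of why the YY and ZZ operations can be merged into a single two-qubit block, the snippet below verifies that Y⊗Y and Z⊗Z commute, so exp(-i(γ ZZ + α YY)) factorizes exactly into the product of the two individual evolutions. The explicit two-CNOT circuit of Fig. 5 is not reproduced here; the angles are arbitrary illustrative values.

import numpy as np
from scipy.linalg import expm

Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
YY = np.kron(Y, Y)
ZZ = np.kron(Z, Z)

gamma, alpha = 0.37, 0.21

assert np.allclose(YY @ ZZ, ZZ @ YY)                    # [YY, ZZ] = 0

combined = expm(-1j * (gamma * ZZ + alpha * YY))        # single combined block
product = expm(-1j * gamma * ZZ) @ expm(-1j * alpha * YY)
print(np.allclose(combined, product))                   # True: the orderings are interchangeable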
V. SUMMARY AND OUTLOOK
In this work, the S-QAOA ansatz is proposed to reduce the required quantum circuit depth. The main innovations of S-QAOA are: firstly, an extra two-body interaction is included in S-QAOA to compensate the diabatic effect and accelerate the process of quantum optimization; secondly, the parameter freedoms of the two-body interactions are released to enhance the capacity of the quantum circuit. We study the performance of S-QAOA and QAOA on the MaxCut problem and SK model, and the simulation results imply that S-QAOA has better performance at lower quantum layers than QAOA. So S-QAOA is a good candidate for solving combinatorial problems using NISQ devices.
Releasing the parameter freedom incurs an extra optimization cost, but the numerical simulations show that it is cost-effective because of the greater improvement in fidelity.
As shown above, the number of CNOT gates is the same for QAOA and S-QAOA if the YY or XX interaction is introduced in S-QAOA. Other two-body interaction types can be implemented using three CNOT gates, as shown in Ref. [28]. In S-QAOA, further optimization is performed on the parameters that have large gradients, and more work is needed to explore how to release the parameter freedom properly, e.g., how to pick out the most critical parameters for further optimization. The most influential parameters are expected to differ between problems, so it is important to develop an efficient way to identify them. Besides, in our preliminary exploration, introducing more parameters in S-QAOA makes the optimization more challenging when shot noise is considered, so optimization methods that are more robust to noise, such as COBYLA or SPSA, should be considered in future work. Furthermore, the reason for the effectiveness of the YY interaction deserves a clearer explanation. We will study more cases and run simulations with noise to test and improve our idea in the next step.
ACKNOWLEDGMENTS
We thank Lingxiao Xu, Cheng Xue, Huanyu Liu and Qingsong Li for valuable discussions and suggestions. The results of this work are simulated using pyQpanda, and the pyQpanda package can be downloaded at https://github.com/OriginQ/QPanda-2.
Appendix A: Results With Error Bars
The results in Figs.1, 2, and 3 are averaged over the random graphs. Typically, the fractional error r obtained by QAOA should be concentrated for the same class of problems, so it is better to show the results with error bars to represent the variance in the fractional error and to determine whether the performance difference between S-QAOA and QAOA lies outside the margin of error. Fig.7 shows the fractional errors with error bars for the MaxCut problems and the SK model. Although the error bars are relatively large because of the limited statistics, the performance of S-QAOA (labeled 'YY') is significantly better than that of QAOA. To choose the best of these two-body gate terms, we carried out a comprehensive simulation for the MaxCut problem and the SK model, and the result can be found in Fig.8. The simulation result implies that M_ij = Y_iY_j has the best performance in all cases. A further study in Fig.9 explores the necessity of adding more two-body interactions to the ansatz, and it is clear that an ansatz with more interactions does not perform better than the ansatz that adds only the YY interaction. So it is enough to add only the YY interaction in the S-QAOA ansatz.

[Fig.9 caption: Comparison of the performance of adding only the YY interaction to the ansatz with adding two or three additional interactions, chosen from combinations of the {YZ, YY, XX} interactions; e.g., 'YZ YY', 'YZ XX', and 'YY XX' denote two additional interactions following each ZZ interaction, while 'YZ YY XX' denotes the YZ, YY, and XX interactions following each ZZ interaction one by one. The results are averaged over 10 random graphs, and the performance of the ansatz that includes two or three additional interactions is almost the same as that of the ansatz that includes only the YY interaction. So, when considering two-body gate terms, it is enough to add the YY interaction to accelerate the evolution.]
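As a concrete illustration of the averaging behind such error bars, the short sketch below computes the mean fractional error and its standard error over a set of random instances at each circuit depth p. The array shape, the random stand-in values, and the variable names are assumptions for illustration only, not the data of this work.

```python
import numpy as np

# Hypothetical fractional errors: rows = random problem instances, columns = circuit depth p.
fractional_error = np.random.rand(20, 5) * 0.2  # stand-in for measured data

mean_r = fractional_error.mean(axis=0)
# Standard error of the mean over the instances, used as the error bar.
sem_r = fractional_error.std(axis=0, ddof=1) / np.sqrt(fractional_error.shape[0])

for p, (m, s) in enumerate(zip(mean_r, sem_r), start=1):
    print(f"p = {p}: r = {m:.3f} +/- {s:.3f}")
```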
Clinical challenge: fatal mucormycotic osteomyelitis caused by Rhizopus microsporus despite aggressive multimodal treatment
Background Mucormycosis is an invasive mycotic disease caused by fungi of the zygomycetes class. Although ubiquitous in the environment, zygomycetes only rarely cause invasive disease, primarily in immunocompromised hosts, in whom mortality is high even under aggressive antifungal and surgical therapy. Clinically, mucormycosis most frequently affects the sinuses, occasionally showing pulmonary or cerebral involvement. However, skeletal manifestation with Rhizopus microsporus (RM) osteomyelitis leading to emergency surgical proximal femoral resection with a fatal outcome has not been described yet. Case presentation We report the case of a 73-year-old male suffering from myelodysplastic syndrome with precedent bone marrow transplantation. Six months after transplantation he consulted our internal medicine department in a septic condition with a four-week history of painful swelling of the right hip. Radiography, computed tomography, and magnetic resonance imaging revealed multiple bone infarcts in both femurs. In the right femoral head, neck, and trochanteric region, a recent infarct showed massive secondary osteomyelitis breaking through the medial cortex. Emergency surgical proximal femoral resection was performed due to extensive bone and soft tissue destruction. Microbiological and basic local alignment search tool (BLAST) analysis revealed RM. Amphotericin B and posaconazole treatment with septic revision surgery was performed. However, the disease ran a rapid course and was fatal two months after hospital admission. Conclusion This alarming outcome of extensive RM osteomyelitis in the proximal femur of an immunocompromised patient may hopefully warn medical staff to perform early imaging and aggressive, surgically supported multimodal treatment in similar cases. Electronic supplementary material The online version of this article (doi:10.1186/1471-2334-14-488) contains supplementary material, which is available to authorized users.
Background
Zygomycetes are environmental nonseptate molds widely distributed in soil, plants, and decaying material [1,2]. The class zygomycetes contains the order mucorales, the latter including the genus Rhizopus. A clinically important Rhizopus species is Rhizopus microsporus (RM), one of the main causes of mucormycosis, an opportunistic, life-threatening infection in immunocompromised patients.
Previously reported risk factors for mucormycosis are prolonged neutropenia, immunosuppression, iron overload, and prolonged hyperglycemia or manifest diabetes. Patients treated with allogeneic hematopoietic stem cell transplantation (allo-HSCT) often suffer from a combination of these risk factors [1]. The prognosis and outcome of invasive mucormycosis in patients with immune deficiency or hematologic malignancies are generally rather poor [1,4,8]. However, in most reported cases a fatal outcome could be prevented [8,9].
Here we describe the unique case of fatal invasive osteomyelitis caused by RM in an allo-HSCT recipient. Extensive diagnostic evaluation revealed multiple old bone infarcts complicated by invasive fungal disease. Although systemic antifungal treatment and repeated radical surgery were started immediately, a cure could not be achieved.
Case presentation
We report on a 73-year-old male with recently diagnosed myelodysplastic syndrome RAEB I with a complex karyotype. A routine checkup revealed tricytopenia in the blood count as well as mild splenomegaly; further examinations, including bone marrow biopsy, confirmed the diagnosis. Facing the high-risk constellation of this disease (IPSS-R risk score: poor; [10]) and the excellent clinical condition of the patient with no relevant comorbidities, allogeneic stem cell transplantation was considered the sole option for a cure. With no HLA-identical siblings available, an unrelated donor search was initiated, resulting in the identification of a suitable HLA-matched donor.
Eleven months later, allogeneic matched unrelated donor stem cell transplantation was performed successfully after conditioning chemotherapy with fludarabine, treosulfan, and antithymocyte globulin, together with prophylactic immunosuppressive medication consisting of cyclosporine and mycophenolate mofetil. No relevant complications occurred during the first weeks of follow-up besides moderate acute graft-versus-host disease of the skin, which was immediately responsive to steroid treatment. Bone marrow examinations one month after transplantation showed complete cytogenetic remission of the disease as well as complete donor chimerism. With no further signs of graft-versus-host disease and a good clinical condition of the patient, the immunosuppressive treatment could be tapered steadily during the following months. Continuous prophylactic anti-infective medication with acyclovir, cotrimoxazole, and posaconazole was administered.
Six months after transplantation, with no remaining medication, the patient presented to our internal emergency department in a critical septic condition with fever (body temperature above 40°C) and dyspnea, and was immediately transferred to the intensive care unit (ICU). Intubation and mechanical ventilation had to be initiated due to respiratory failure. A computed tomography (CT) scan revealed bilateral infiltrations consistent with atypical pneumonia and regional osteopenia with mild focal bone lysis at the level of the lumbar spine. Lumbar spondylodiscitis was ruled out by magnetic resonance imaging (MRI).
No relevant bacterial, viral, or fungal cause could be identified by bronchoalveolar lavage or multiple blood culture collections during the ICU stay. Cardiac echocardiography was performed and culture-negative endocarditis could be ruled out. The patient steadily improved under combined empiric antibiotic, antiviral, and antifungal (azole) medication, and extubation could be performed 7 days after intubation. While the respiratory situation completely stabilized, the clinical condition of the patient steadily deteriorated in the following days. For the first time the patient reported right hip pain.
In consequence, ultrasound, CT, and MRI of the right hip and thigh were performed in order to identify and localize a potential inflammatory focus. The imaging revealed multiple old bone infarcts in both femurs as well as a new infarct with massive secondary osteomyelitis in the right femoral head, neck, and trochanteric region, breaking through the medial cortex into the surrounding soft tissue (Figure 1). At emergency surgical intervention, extensive bone and soft tissue destruction with ubiquitous blackened tissue became evident (Figure 2a). Proximal femoral resection with placement of a spacer loaded with broad-spectrum antibiotics (vancomycin, gentamycin, clindamycin) and an antimycotic (amphotericin B), together with extensive tissue debridement, was performed. Microbiological and histopathological analysis and the basic local alignment search tool (BLAST) identified RM (Figure 2b-d). Bacteria could not be identified even after prolonged microbiological culturing for 10 days. High-dose liposomal amphotericin B (6 mg/kg body weight/day), supplemented with high-dose posaconazole (4 × 200 mg/day), was administered immediately. After a short-term improvement of the patient's condition with promising regression of inflammatory markers and a fever-free period, the wound at the level of the proximal thigh showed increased secretion and discoloration of the surrounding skin. Despite another 3 surgical interventions with debridement, lavage, and vacuum-assisted closure therapy (VAC), the intraoperative and cutaneous status deteriorated, with RM invading the tissue in a diffuse manner and ultimately perforating the skin. With this dramatic disease progression, a whole-body MRI was conducted 6 weeks after re-admission. Multiple other bone infarctions (left femur, both humeri, both tibiae) with suspected RM superinfection became evident. Considering the drastic disease progression under maximal multimodal care, further therapeutic interventions were stopped in consent with the patient and his family, and palliative home care support was initiated. The patient passed away a few days later.
Discussion
Invasive disseminated Rhizopus infections develop in fewer than a quarter of localized forms and have an estimated mortality rate of 78-100% in allo-HSCT recipients [8]. We describe the fatal case of a disseminated RM osteomyelitis in an immunocompromised patient suffering from myelodysplastic syndrome. To the best of our knowledge, a comparable case has not yet been reported in the literature. Immunocompromised patients, particularly those suffering from hematological diseases treated by allo-HSCT, face a high risk of invasive fungal infections [6]. Invasive aspergillosis and candidiasis represent the leading causes of invasive mold infection, whereas invasive mucormycosis is less common [4]. A recent increase in the incidence of mucormycosis may be explained by the increasing use of antifungal agents lacking activity against the class zygomycetes and by the growing number of high-risk bone marrow transplantations [5]. Zygomycetes are found worldwide, distributed throughout the environment (e.g., air, soil, food, and wood). The class zygomycetes contains the order mucorales and the genus Rhizopus. Rhizopus can be further subdivided into several species, some of which, e.g., RM, are known human pathogens [6]. However, reports on RM-related infections are relatively rare compared with those related to other Rhizopus species [11]. Wilkins reported the case of a non-immunocompromised patient with postoperative RM osteomyelitis of the femur after anterior cruciate ligament repair; multimodal treatment led to eradication of the disease [12]. A successful treatment of RM osteomyelitis of the right tibia in an immunocompromised patient was presented by Vashi [9].
Compared with focal RM fungal infections, disseminated infections are even rarer [2,13,14]. HSCT recipients in particular run a high risk of developing invasive fungal infections during the immediate post-transplant period and the pre- and post-engraftment periods, up to 3 months after transplantation [2]. Our patient showed suspected infection about 3 months after transplantation.
Mucormycosis spreads hematogenously from an isolated infection to other organs. The most common sites of origin are the sinuses (39%), lungs (24%), and skin (19%) [2,15]. Dissemination commonly affects the lung and brain, whereas the liver, heart, and kidneys are rarely colonized [5]. Our patient showed multiple lesions in the final state, as demonstrated by MRI. Due to the concomitant presence of bone necrosis and metastatic fungal implants, we believe that initial bone infarction, caused by immunosuppressive drugs, led to an ideal environment for fungal growth. A similar clinical constellation, with Rhizopus species superinfection after B-19 virus-induced bone marrow necrosis in a patient with sickle-cell disease, was presented by Fartoukh et al. [16]; however, surgical intervention was not performed. In our case the focus of verifiable RM infection was the right proximal femur and its surrounding soft tissues with huge abscess formations. The other lesions at the left lower extremity and both upper extremities obviously followed or had preexisted in a dormant state.
Furthermore, establishing the diagnosis of invasive fungal infections, primarily based on standard culture-based mycological methods, is often difficult, especially in early stages [6]. Accordingly, high rates of delayed treatment initiation are described [11]. Routinely taken blood culture samples confirm the diagnosis in less than 10% of all cases [17]. Therefore, surgical intervention is often required not just for treatment but also to obtain tissue specimens to increase diagnostic accuracy. Direct microscopy with optical brighteners, microbial cultures, and histopathology are recommended, allowing a rapid narrowing-down of the diagnosis [18]. On direct microscopy, hyphae of mucorales display a typical appearance: they show a variable width (6-25 μm), are non- or pauci-septate, have an irregular, ribbon-like appearance, and show a variable angle of branching. Differentiation between the various genera is based on the presence and location of rhizoids, the branching nature of the sporangiophores, the shape of the columella, the size and shape of the sporangia, and the maximum growth temperature [19]. In addition to microscopy, culture of specimens is considered an essential investigation. Although the sensitivity of culture is not high, it allows identification and susceptibility testing [18]. Currently, molecular testing is the most reliable diagnostic tool for the identification of human pathogenic mucorales. As the detection of mucorales-specific antigens has so far not become generally accepted for diagnostic purposes because of its relatively low sensitivity [20], the most effective method for mucorales detection is currently PCR [18]. The fungal internal transcribed spacer (ITS) region (between the 18S rRNA and 28S rRNA genes) is sequenced and the isolates are identified by, e.g., the Basic Local Alignment Search Tool (BLAST®). In our case we were able to identify RM by a combination of phenotypic methods (Figure 2b-d) and genetic sequencing.
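For readers unfamiliar with this identification workflow, the following is a minimal Biopython sketch of submitting an ITS sequence to NCBI BLAST and reading back the top hit. The sequence shown is a generic placeholder, not the isolate from this case, and the exact sequencing and analysis pipeline used here may have differed.

```python
from Bio.Blast import NCBIWWW, NCBIXML

# Placeholder ITS sequence fragment (not the actual isolate from this case report).
its_sequence = "TCCGTAGGTGAACCTGCGGAAGGATCATTA"

# Submit a nucleotide BLAST search against the NCBI 'nt' database (requires network access).
result_handle = NCBIWWW.qblast("blastn", "nt", its_sequence)
record = NCBIXML.read(result_handle)

# Report the best-scoring alignment, which in practice would point to a species-level match.
if record.alignments:
    best = record.alignments[0]
    print("Top hit:", best.title)
    print("E-value:", best.hsps[0].expect)
```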
Therapy for mucormycosis includes systemic antifungal drugs and local surgical debridement [21]. Regarding antifungal treatment, the azoles used for aspergillosis are not appropriate for invasive mucormycosis; moreover, mucormycosis can even arise in patients receiving azoles for prophylaxis or for treatment of invasive aspergillosis [22,23]. Being aware of our patient's critical illness, and in accordance with current treatment concepts, systemic treatment was started with high-dose liposomal amphotericin B and posaconazole after confirmation of the diagnosis [18,24,25]. Possibly due to this regimen, blood inflammatory markers temporarily decreased significantly and a fever-free interval was achieved. Amphotericin B, commonly accepted as first-line treatment for invasive mucormycosis, can be combined with posaconazole, which is strongly recommended for salvage treatment [18]. After a short improvement of the clinical status under systemic antifungal therapy, the patient's condition deteriorated again. Salvage surgery was indicated after MRI-proven progression of disease at the level of the right hip. Unfortunately, the local situation was out of control at that time and further fungal invasion could not be stopped. Mortality rates of 10% have been reported for localized cutaneous mucormycosis, 26% after extension to deeper structures, and 94% with disseminated disease [8]. These rates clearly emphasize the importance of early, aggressive therapeutic intervention. In our patient, early dissemination was present, lowering the chances of cure significantly from the beginning.
Conclusion
This dramatic case of rapidly progressive and ultimately fatal RM infection of the bone illustrates the diagnostic and therapeutic challenges of mucormycosis in immunocompromised hosts. Amongst others, abscess formations should always raise suspicion of invasive fungal infection under these circumstances. Rapid and exact diagnosis by both morphology and molecular techniques is crucial for starting early treatment of the fungal infection. Amphotericin B should be used as soon as mucormycosis is suspected. If the patient's condition does not improve, the addition of posaconazole should be considered. At the same time, early and aggressive surgical debridement has to be performed, first to detect the pathogen and second to establish local control. Comprehensive multimodal therapy for mucormycosis may create more opportunities to improve patient outcome despite the very low overall survival rate in disseminated cases.
Consent
Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the editor of this journal.
Clinically Used Hormone Formulations Differentially Impact Memory, Anxiety-Like, and Depressive-Like Behaviors in a Rat Model of Transitional Menopause
A variety of U.S. Food and Drug Administration-approved hormone therapy options are currently used to successfully alleviate unwanted symptoms associated with the changing endogenous hormonal milieu that occurs in midlife with menopause. Depending on the primary indication for treatment, different hormone therapy formulations are utilized, including estrogen-only, progestogen-only, or combined estrogen plus progestogen options. There is little known about how these formulations, or their unique pharmacodynamics, impact neurobiological processes. Seemingly disparate pre-clinical and clinical findings regarding the cognitive effects of hormone therapies, such as the negative effects associated with conjugated equine estrogens and medroxyprogesterone acetate vs. naturally circulating 17β-estradiol (E2) and progesterone, signal a critical need to further investigate the neuro-cognitive impact of hormone therapy formulations. Here, utilizing a rat model of transitional menopause, we administered either E2, progesterone, levonorgestrel, or combinations of E2 with progesterone or with levonorgestrel daily to follicle-depleted, middle-aged rats. A battery of assessments, including spatial memory, anxiety-like behaviors, and depressive-like behaviors, as well as endocrine status and ovarian follicle complement, were evaluated. Results indicate divergent outcomes for memory, anxiety, and depression, as well as unique physiological profiles, that were dependent upon the hormone regimen administered. Overall, the combination hormone treatments had the most consistently favorable profile for the domains evaluated in rats that had undergone experimentally induced transitional menopause and remained ovary-intact. The collective results underscore the importance of investigating variations in hormone therapy formulation as well as the menopause background upon which these formulations are delivered.
INTRODUCTION
During the midlife transition to menopause, a number of symptoms that negatively impact quality of life and wellbeing may occur. Most commonly, these symptoms originate from natural changes in estrogen production by the ovaries as follicle reserve declines, leading to the onset of vasomotor symptoms (e.g., hot flashes, night sweats), dyspareunia, and urogenital indications (Hoffman et al., 2012;Al-Safi and Santoro, 2014;NAMS, 2014). Benign irregular or heavy bleeding patterns are also common during the transition to menopause (Voorhis et al., 2008;Corniţescu et al., 2011;Pinkerton, 2011). In addition, during the menopause transition many individuals report increased rates of depression and anxiety symptoms, as well as impaired cognition, particularly in the realm of working memory (Kritz-Silverstein et al., 2000;Mitchell and Woods, 2001;Weber and Mapstone, 2009;Maki et al., 2012;Weber et al., 2012, 2014;Worsley et al., 2014;Zilberman et al., 2015;Unkenstein et al., 2016;Rentz et al., 2017;Morgan et al., 2018;Im et al., 2019).
There are a variety of U.S. Food and Drug Administration (FDA)-approved hormone therapy options available that effectively alleviate undesirable symptoms associated with menopause-related changes in the endogenous hormonal milieu (Files et al., 2011;Hoffman et al., 2012;Pinkerton, 2012;Stuenkel et al., 2015;Pinkerton et al., 2017c). If the uterus is intact, a hormone therapy regimen must include a progestogen component (i.e., natural progesterone or one of the many synthetic forms of progesterone; the latter are collectively referred to as progestins) in combination with an estrogen component (e.g., natural 17β-estradiol (E2), synthetic ethinyl estradiol, conjugated equine estrogens). This progestogen component is necessary to mitigate the risk of uterine hyperplasia and cancer (Pinkerton et al., 2017c). If a patient's primary indication for treatment is heavy, irregular, or abnormal uterine bleeding, medical professionals may prescribe a progestogen-only hormone therapy, such as an oral progestogen or an intrauterine device containing the progestin levonorgestrel, a synthetic form of progesterone (Sitruk-Ware, 2002;Marret et al., 2010;Corniţescu et al., 2011;Pinkerton, 2011;Goldstein and Lumsden, 2017). If a patient has undergone hysterectomy with or without ovary removal, they may take estrogen-only hormone therapy, as the removal of uterine tissue eliminates the need for the progestogen component (Haney and Wild, 2007;NAMS, 2014;Pinkerton et al., 2017c). Additionally, low-dose vaginal estrogen-only tablets, creams, and rings are increasing in popularity for the treatment of menopausal genitourinary syndrome even when the uterus is intact (Rahn et al., 2014;Pinkerton et al., 2017b;Biehl et al., 2018;Shifren, 2018). Thus, depending on an individual's circumstance and primary indications for menopausal hormone therapy use, there are a range of possibilities for variations in hormone therapy preparations, including estrogen-only, progestogen-only, or combined estrogen plus progestogen hormone therapy options, which in turn may have variable effects on the brain and periphery. Sex steroid hormones have been shown to impact learning and memory, although the ideal parameters for individual and combined hormone therapies have proven to be complex (for review, see: Barha and Galea, 2010;Gibbs, 2010;Luine, 2014;Frick, 2015;Bimonte-Nelson, 2015, 2017;Korol and Pisani, 2015). Depriving the female system of ovarian-derived hormones leads to cognitive changes in both humans and animal models (e.g., Phillips and Sherwin, 1992;Singh et al., 1994;Bimonte and Denenberg, 1999;Nappi et al., 1999;Heikkinen et al., 2004;Wallace et al., 2006;Rocca et al., 2007;Gibbs and Johnson, 2008;Parker et al., 2009;Ryan et al., 2014). Importantly, ovarian hormone loss also results in an increased susceptibility to anxiety and depression (Parker et al., 2009;Maki et al., 2012;Weber et al., 2014;Parry, 2020;Soares, 2020;Stute et al., 2020). Under certain parameters or experimental conditions, estrogen supplementation following the surgical removal of the ovaries (ovariectomy; Ovx) reverses or attenuates detriments in cognition and affective behaviors in preclinical models (Bimonte and Denenberg, 1999;Holmes et al., 2002;Foster et al., 2003;Hiroi et al., 2016;Fernandez et al., 2008;Harburger et al., 2009;Rodgers et al., 2010;Gleason et al., 2015;Black et al., 2016, 2018;Koebele et al., 2020b).
Much emphasis has been placed on exogenous E2 administration following Ovx, and reports show variable effects on cognition depending on the parameters. However, most individuals experience a natural, non-surgical transition to menopause and retain their ovaries. The ovotoxin 4-vinylcyclohexene diepoxide (VCD) induces accelerated follicular atresia, which serves as a rat model of transitional menopause, wherein ovarian tissue is maintained but becomes follicle-deplete (Mayer et al., 2002, 2004;Dyer et al., 2013;Koebele and Bimonte-Nelson, 2016). Using VCD, our laboratory recently demonstrated that compared to follicle-deplete rats that did not receive E2 treatment, tonic E2 had beneficial effects in the learning phase of a complex spatial working memory task. However, some working memory impairments were evident in the E2-treated rats after the rules of the task had been acquired (Koebele et al., 2020a), demonstrating the complex role of estrogens in learning and memory.
Although E2 is a common component in many FDA-approved combined hormone therapy formulations, the progestogen component varies. Progestins are used frequently as an alternative to natural progesterone due to significantly higher oral bioavailability (Sitruk-Ware et al., 1987;Schindler et al., 2003;Kuhl, 2005). All progestins exert progestogenic activity at progesterone receptors, resulting in protective mechanisms for the uterus, which is often their primary clinical application. However, depending on its molecular derivative, a given progestin can also have estrogenic, anti-estrogenic, androgenic, anti-androgenic, and/or glucocorticoid activity to varying extents (Schindler et al., 2003). These unique pharmacological profiles lead to distinct patterns of activity and actions by progestins, including variable cognitive effects (Sitruk-Ware, 2002;Schindler et al., 2003;Braden et al., 2017). Several progestins have been shown by our and other laboratories to negatively affect cognition (Shumaker et al., 2003;Rosario et al., 2006;Braden et al., 2010, 2011;Lowry et al., 2010). However, levonorgestrel, a common progestin in hormone therapy formulations and a hormone-containing intrauterine device, has been reported to have neutral, or even beneficial, effects on cognition in the surgical menopause (i.e., Ovx) rat model when administered independently (Braden et al., 2017;Prakapenka et al., 2018). Levonorgestrel may exhibit these unique effects due to its distinct pharmacodynamic properties; in contrast to natural progesterone or other progestins, levonorgestrel does not elicit glucocorticoid or anti-mineralocorticoid receptor activity, but does have some androgenic activity (Schindler et al., 2003). For example, in middle-aged Ovx rats, we have demonstrated that levonorgestrel alone produced cognitive benefits; however, when levonorgestrel was co-administered with E2, it failed to augment, and in fact attenuated, E2's favorable effects on cognition, producing impairments relative to either hormone alone (Prakapenka et al., 2018). These results highlight the importance of performing translational research in which clinical practices are accurately modeled. Whether a combined E2 + progestogen regimen exerts similar effects in a model of transitional menopause remains to be determined. This is a question of high importance, given that minor alterations in molecular structure can lead to different physiological effects of progestogens (Sitruk-Ware, 2002), and that progestogens are most often given in combination with E2 when an individual undergoing menopause has an intact uterus and ovaries (Pinkerton et al., 2017c). It is critical to methodically compare how daily administration of natural progesterone and the progestin levonorgestrel influence learning and memory independently as well as in combination with E2, and whether progestogen type matters for outcomes with transitional menopause.
To address this question, we administered VCD to permit the retention of follicle-depleted ovarian tissue and to produce a circulating hormone profile more similar to that associated with transitional menopause than would be achievable with Ovx (Koebele and Bimonte-Nelson, 2016). In the current experiment, VCD treatment began at 8 months of age, as we have done in previous publications (Koebele et al., 2020a). Three months later, when rats were middle-aged and considered to be in the early post-menopausal stage after substantial follicular depletion ensued (Lohff et al., 2005;Acosta et al., 2009;Koebele et al., 2020a), daily exogenous hormone treatment began and rats were tested on a behavioral battery assessing spatial memory, anxiety-like, and depressive-like behaviors. Thus, the goals of the current experiment were manifold, as we aimed to systematically evaluate the independent and combined effects of daily E2, progesterone, and levonorgestrel on cognitive, anxiety-like, and depressive-like measures in transitionally menopausal, follicle-deplete, middle-aged rats.
MATERIALS AND METHODS
See Figure 1 for a detailed experimental timeline.
Subjects
Sixty sexually inexperienced female Fischer-344-CDF rats from the National Institute on Aging colony at Charles River Laboratories (Raleigh, NC) were used in this experiment. Rats were approximately 8 months of age when they arrived at the Arizona State University vivarium facility. Rats were pair-housed upon arrival and had unrestricted access to food and water for the duration of the experiment. Rats were maintained on a 12h light/dark cycle (lights on at 7 am) and had a 1 week period of acclimation in the vivarium prior to the commencement of experimental procedures. The Institutional Animal Care and Use Committee at Arizona State University approved all procedures, which adhered to National Institutes of Health standards.
VCD Injections
All rats were administered VCD (FYXX Foundation, Flagstaff, AZ) intraperitoneally at a dose of 160 mg/kg/day in 50% dimethyl sulfoxide (DMSO)/50% sterile saline vehicle solution (Sigma-Aldrich, St. Louis, MO, United States) for a total of 15 injection days, based on established protocols (Mayer et al., 2002, 2004;Lohff et al., 2005, 2006;Acosta et al., 2009, 2010;Van Kempen et al., 2011;Frye et al., 2012;Zhang et al., 2016;Koebele et al., 2020a;Kirshner and Gibbs, 2018;Carolino et al., 2019). Baseline body weight (g) was recorded for all subjects prior to starting injections. VCD injection volume was calculated based on individual daily body weight. If a rat's body weight decreased by 10% or more from its baseline, VCD administration was discontinued until weight was recovered. VCD was administered on Mondays, Tuesdays, Thursdays, and Fridays. Injections were not administered on Wednesdays, Saturdays, or Sundays for weight recovery (Koebele et al., 2020a). As such, to accommodate injection-related weight loss and recovery, the 15 VCD injections were completed over approximately 9 weeks. Two rats died during VCD injections: one from peritonitis and one from an undetermined cause, likely unrelated to injections.
Body Weights
Body weights (g) were recorded for all rats at the onset of VCD injections and periodically collected throughout the experiment until euthanasia. Body weight served as a peripheral indicator of general animal health and was used to assess whether hormone treatments altered body weight in an ovary-intact, follicle-depleted background.

FIGURE 1 | Experimental Timeline. Following accelerated follicular depletion, rats received daily hormone treatments and were evaluated on a series of behavior tasks assessing working memory, reference memory, anxiety-like behavior, and depressive-like behavior.
Vaginal Cytology
Vaginal smears were assessed immediately prior to behavioral testing initiation for two consecutive days, as previously published (Koebele et al., 2020a). The experimenter obtained each swab sample by gently inserting a small cotton-tipped applicator soaked in sterile saline into the vaginal opening. A light microscope (Fisher Scientific Micromaster; CAT #12-561-4B) was used to view the cells at 10× magnification. The experimenter classified samples as proestrus-, estrus-, metestrus-, or diestrus-like as our laboratory and others have previously published (Goldman et al., 2007;Koebele and Bimonte-Nelson, 2016;Koebele et al., 2019).
Behavioral Testing
After 3 weeks of daily hormone administration, 114 days after the first VCD injection, all rats (approximately 11-12 months old) were tested on a series of behavioral tasks assessing spatial working and reference memory, anxiety-like behavior, and depressive-like behavior. These assays included the water radial arm maze (WRAM) to evaluate spatial working and reference memory, the Morris water maze (MM) to assess spatial reference memory, the visible platform (VP) task to confirm motor and visual competency for swim-based tasks, the open field task (OFT) to assess locomotor activity and anxiety-like behavior, and the forced swim task (FST) to evaluate depressive-like behavior. Procedures for each task are described in detail below.
Water Radial Arm Maze
The WRAM evaluated spatial working and reference memory in a water escape paradigm. The apparatus had eight arms (38.1 cm × 12.7 cm each) and a circular center, and was filled with water maintained at 18-20°C throughout testing. To assist with spatial navigation, prominent visual cues were placed on the walls around the maze in addition to the tables and heat lamps situated in each room. A preselected combination of platform locations was assigned to each rat, wherein hidden escape platforms were submerged 2-3 cm beneath the water's surface in four of the eight maze arms (locations counterbalanced across treatment groups); the other four arms (including the start arm) never contained platforms. Assigned platform locations remained the same across all testing days for a given rat. Black non-toxic powdered paint was added to the water to further obscure submerged platforms. Testing consisted of four trials per day across 13 consecutive days. Day 1 was considered training, days 2-12 were normal testing sessions, and day 13 included a delayed memory retention evaluation. During each daily testing session, the experimenter gently placed the rat in the non-platformed start arm. If the rat did not escape the WRAM via a hidden platform within the allotted 3-min trial time, the experimenter guided the rat to the nearest platform using a lead stick. Upon locating a platform, the rat was allocated 15 s of total platform time before being returned to its heated testing cage to reinforce platform location learning. During a 30 s inter-trial interval (ITI), the experimenter removed the just-found platform from the maze, swept the water for debris with a net, and stirred the water to diffuse potential olfactory cues. In this way, working memory load progressively became taxed across trials within a daily testing session, as the number of locations to be recalled increased with each trial. On Day 13 of testing, a 6-h delay was implemented between trials two and three to assess delayed working memory retention. During the delay interval, rats were kept in their individual testing cages and given access to water.
Learning and memory performance on the WRAM was quantified by calculating the number of entries into nonplatformed arms prior to locating a platform on each trial within a day, which were considered errors. The experimenter logged each arm entry error manually on a testing sheet during the trials. An entry was operationally defined as the tip of the rat's snout crossing a marker 11 cm into the arm (visible on the outside of the maze, but not visible to the rat). Errors were counted and divided into subtypes. Working memory correct (WMC) errors were entries into an arm that previously contained a hidden platform within a daily testing session. Of note, WMC errors can only occur on trials 2-4, as all platforms are present in the maze during the first trial; as such, statistical analyses for WMC errors across trials are inclusive of trials 2, 3, and 4.
Reference memory (RM) errors were the first entries into an arm within a daily testing session that never contained a platform; as such, a total of four RM errors could be made within a daily testing session. Working memory incorrect (WMI) errors were subsequent entries, within a daily testing session, into an arm that never contained a platform. RM and WMI errors can be made on any trial; thus, analyses for WMI and RM errors across trials are inclusive of trials 1-4.
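To make the three error subtypes concrete, here is a small, hedged Python sketch (not the scoring procedure actually used in the study) that classifies one trial's arm entries into WMC, WMI, and RM errors using the definitions above; the function and variable names are illustrative only.

```python
def score_wram_trial(entries, platformed_arms, found_platforms, visited_never_platformed):
    """Classify arm-entry errors for one trial.

    entries: sequence of arms entered before escaping on this trial
    platformed_arms: the four arms containing hidden platforms that day
    found_platforms: arms whose platform was already located (and removed) earlier in the session
    visited_never_platformed: never-platformed arms already entered earlier in the session
    """
    wmc = wmi = rm = 0
    for arm in entries:
        if arm in found_platforms:
            wmc += 1                      # re-entry into a previously platformed arm (WMC)
        elif arm not in platformed_arms:
            if arm in visited_never_platformed:
                wmi += 1                  # repeat entry into a never-platformed arm (WMI)
            else:
                rm += 1                   # first entry into a never-platformed arm (RM)
                visited_never_platformed.add(arm)
    return wmc, wmi, rm

# Hypothetical trial 3 of a session: platforms in arms 2, 4, 5, 7; arms 2 and 4 already found.
errors = score_wram_trial(entries=[1, 2, 6, 6, 5],
                          platformed_arms={2, 4, 5, 7},
                          found_platforms={2, 4},
                          visited_never_platformed=set())
print(errors)  # (1, 1, 2): one WMC, one WMI, and two RM errors
```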
Morris Water Maze
Following the WRAM delayed memory retention day, rats were evaluated on the MM, a water-escape task which assesses spatial reference memory (Morris et al., 1982;Morris, 2015). The MM was a circular tub (188 cm in diameter) filled with 18-20°C water made opaque with non-toxic black paint. One platform (11 cm diameter) was placed 2-3 cm below the surface of the water in the northeast quadrant of the tub, where it remained across all days and trials. The rats underwent four trials per day for five consecutive days. During each daily session, rats were dropped off from one of four directions (north, south, east, or west) at the start of each trial. The pattern of the four drop-off locations changed across days but was identical within a day for all rats. Path length (cm) from drop-off to the platform was recorded by a video camera and Ethovision tracking software (Noldus Instruments; Wageningen, Netherlands). Maximum trial time was capped at 1 min. If the rat did not navigate to the platform in the allotted trial time, the experimenter gently guided the rat to the platform using a lead stick. Once the rat located the hidden platform, it was required to stay there for 15 s of platform time before being returned to its heated testing cage for a ∼10-min ITI, during which the other subjects were tested on that trial. On the final testing day of MM, after the fourth trial, rats completed a probe trial wherein the submerged platform was completely removed from the maze. Rats swam freely in the maze for the 1-min probe trial. The proportion of total swim distance covered within the previously platformed quadrant vs. the opposite quadrant was calculated to assess spatial localization to the previous platform location.
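As a simple illustration of the probe-trial measure described above, the following hedged sketch computes the proportion of total swim distance in the target (previously platformed) quadrant versus the opposite quadrant from a tracked (x, y) path; the coordinates, quadrant boundaries, and function names are hypothetical and do not reflect the Ethovision output format.

```python
import numpy as np

def quadrant_distance_proportions(xy, center=(0.0, 0.0)):
    """Proportion of total path length swum in the NE (target) and SW (opposite) quadrants.

    xy: array of shape (n_samples, 2) of tracked positions in cm, maze centered at `center`.
    """
    xy = np.asarray(xy, dtype=float) - np.asarray(center)
    steps = np.diff(xy, axis=0)
    step_len = np.hypot(steps[:, 0], steps[:, 1])
    # Assign each step to the quadrant of its starting point (a simple approximation).
    start = xy[:-1]
    ne = (start[:, 0] > 0) & (start[:, 1] > 0)
    sw = (start[:, 0] < 0) & (start[:, 1] < 0)
    total = step_len.sum()
    return step_len[ne].sum() / total, step_len[sw].sum() / total

# Hypothetical probe-trial path, randomly generated with a drift toward the NE quadrant.
path = np.cumsum(np.random.default_rng(1).normal([0.5, 0.5], 2.0, size=(300, 2)), axis=0)
ne_prop, sw_prop = quadrant_distance_proportions(path)
print(f"NE: {ne_prop:.2f}, SW: {sw_prop:.2f}")
```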
Visible Platform
On the day following MM, generalized visual acuity and motor competency necessary for completing swim-based escape tasks were assessed using the VP control task (Morris, 1984;Mennenga et al., 2015a). The VP was a rectangular tub (100 cm × 60 cm) filled with clear water (18-20°C). On the north wall of the tub, a black platform (10 cm diameter) protruded approximately 4 cm above the water's surface and was easily visible to the rats. Opaque curtains surrounded the VP apparatus to obscure spatial and geometric cues within the testing room. Rats underwent six trials in 1 day. Each rat was dropped off from a fixed location in the center of the south wall of the tub. The platform position varied across trials semi-randomly in three possible locations along the north wall: left, center, and right. Each trial was capped at 90 s to reach the visible platform. The experimenter used a stopwatch to obtain latency to the platform and recorded it manually on a testing sheet after each trial. After navigating to the visible platform, the rat was required to stay on the platform for 15 s before the experimenter returned the rat to its heated testing cage outside of the opaque curtains. There was an ITI of approximately 10 min for each rat while the other subjects were tested on that trial.
Open Field Task
The day after VP, rats underwent one evaluation day in the OFT, which measured locomotor activity and anxiety-like behavior. Twenty-four hours before testing, the 100 cm × 100 cm black Plexiglas arena was thoroughly cleaned with Odormute, an enzyme cleaner, to remove potential odors from the apparatus. OFT procedures were carried out in a dark room, a protocol which has previously been found to be sensitive to changes in hormone profiles in female rats (Hiroi et al., 2016). At the beginning of the testing day, rats were transferred from their home cages to single testing cages and allowed to acclimate in the anteroom of the testing area for at least 30 min. Each subject was brought into the room separately. The experimenter placed the rat into the arena along the center of the north wall and quietly exited the room. Each rat had 10 min to freely explore the arena. Trials were recorded using Samsung infrared night vision cameras connected to an iPad via the SmartCam application. Following each trial, the experimenter reentered the room, removed the rat from the arena, discarded any feces or urine in the arena, and wiped down the entire arena with tap water to distribute odor cues. The box was dried with a paper towel prior to the beginning of the next subject's trial. Using an overlay of 25 evenly sized and shaped squares (20 cm × 20 cm), an experimenter blind to treatment conditions manually scored the recorded trials for time spent (s) in the corners, center, and small center of the arena, as well as line crossings into the corners, center, and small center, and total line crossings.
Forced Swim Task
The day following the OFT, rats were exposed to 2 days of the FST to evaluate depressive-like behaviors (Huynh et al., 2011;Hiroi et al., 2016). Four clear Plexiglas cylinders (45 cm high and 20 cm in diameter) were filled up to 30 cm in height with fresh water (25°C) and separated by black Plexiglas divider screens. On day one of the FST, rats were acclimated to the testing room for at least 30 min. Each rat was placed in a cylinder for 10 min before being removed, toweled dry, and placed back into a heated testing cage. Twenty-four hours later, rats were given a 5-min trial under the same conditions. Video recordings of the 5-min trial on day two were captured using a GoPro camera connected to an iPad. After the trial was completed, rats were removed from the cylinder and towel dried prior to being placed under an escapable heat lamp. The number of fecal boli was recorded after the trial. The cylinder was drained and refilled with fresh water between each subject's trial. Recordings were scored by an independent experimenter blind to treatment conditions for latency to first immobility (s), time immobile (s), time climbing (s), time swimming (s), and number of dives. Immobility was quantified as minor movements necessary to keep the rat's head above water. Climbing was scored as rapid forearm movement to break the surface of the water or upward vertical movement to climb against the cylinder wall. Diving was defined as a rapid downward movement into the cylinder. Any other motion made by the rats during the 5-min trial was identified as swimming behavior.
Euthanasia
Rats were given 1 week of rest following the FST prior to euthanasia. At approximately 13 months old, all subjects were deeply anesthetized using inhaled isoflurane prior to cardiocentesis and decapitation. Blood was collected from the left ventricle of the heart using a 20-gauge needle and allowed to clot at 4°C (Vacutainer 367986; Becton Dickinson and Company, Franklin Lakes, NJ, United States) for a minimum of 30 min. Blood vials were maintained on ice and centrifuged at 2000 rpm at 4°C for 20 min at the end of the day. Serum was aliquoted and stored at −20°C until analysis. Ovaries were separated from the uterine horns, trimmed of excess fat, and fixed in 10% buffered formalin for 48 h prior to being transferred to 70% ethanol until analysis. Uteri were dissected from the body cavity, trimmed of excess fat, and wet weight (g) was obtained.
Serum Hormone Measurements
All serum hormone assay processing was completed at the Core Endocrine Laboratory at Pennsylvania State University. E2 levels were detected using a double antibody liquid-phase radioimmunoassay (Beckman Coulter, Brea, CA, United States) as previously described (Acosta et al., 2010;Camp et al., 2012;Mennenga et al., 2015b,c;Koebele et al., 2019). This RIA used estradiol-specific antibodies with 125I-labeled estradiol as the tracer. Inter-assay coefficients of variation for the assay averaged 10% at a mean value of 28 pg/ml. E2 assay functional sensitivity was 5 pg/ml. Androstenedione levels were evaluated via ELISA (ALPCO, Salem, NH, United States) based on the typical competitive binding scenario between unlabeled antigen (present in standards, controls, and unknowns) and the enzyme-labeled antigen (conjugate) for a limited number of antibody binding sites on the microwell plate. Inter-assay coefficients of variation for the androstenedione assay averaged 9% at a mean value of 0.5 ng/ml. Functional sensitivity of the androstenedione assay was 0.1 ng/ml. Progesterone levels were also evaluated using ELISA (ALPCO, Salem, NH, United States). Progesterone ELISA inter-assay coefficients of variation averaged 13% at a mean value of 2.6 ng/ml. Functional sensitivity of the progesterone assay was 0.3 ng/ml.
Ovarian Follicle Counts
Following post-fixation at euthanasia, one ovary from each rat was randomly selected for processing and quantification. All ovarian follicle histology and quantification was carried out by FYXX Foundation (Flagstaff, AZ, United States). The oviduct was separated from the ovary prior to processing by a Leica TP1020 tissue processor. The ovary was paraffin embedded and serial sectioned at 5 µm on a semi-automatic rotary microtome. Every 10th section was placed on slides, which were stained with Gills 2 hematoxylin and counterstained with eosin Y-phloxine B, then manually cover-slipped. Tissue was scanned for analysis using a 3D HisTech DESK Scanner. Every 20th section was analyzed for viable primordial, primary, secondary, and antral follicles. Viable follicles were those with no apparent signs of atresia. Atretic follicles were not counted. Criteria from Haas et al. (2007) were used to classify follicle type. Briefly, a resting-state primordial follicle was classified by a single layer of squamous granulosa cells around an oocyte. Primary follicles included a single layer of cuboidal granulosa cells. Secondary follicles were identified by several layers of granulosa cells surrounding the oocyte. Antral follicles had two or more layers of granulosa cells in addition to a fluid-filled antral space within the follicle (Haas et al., 2007). The estimated total number of primordial follicles was obtained using the following formula: Nt = (N0 × St × ts)/(S0 × d0), where Nt = total follicle estimate, N0 = number of follicles observed in the ovary, St = total number of sections in the ovary, ts = thickness of the section (µm), S0 = total number of sections observed, and d0 = mean diameter of the nucleus (Gougeon and Chainy, 1987). Counts for primary, secondary, and antral follicles were summed. Corpora lutea were counted through progression of appearance across the entire sample.
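For clarity, the correction formula above can be written as a small helper function; the example values are hypothetical and are not data from this study.

```python
def estimate_total_primordial_follicles(n_observed, total_sections, section_thickness_um,
                                        sections_observed, mean_nucleus_diameter_um):
    """Nt = (N0 x St x ts) / (S0 x d0), following Gougeon and Chainy (1987)."""
    return (n_observed * total_sections * section_thickness_um) / (
        sections_observed * mean_nucleus_diameter_um)

# Hypothetical example: 12 primordial follicles seen in 30 of 600 sections,
# 5-um-thick sections, mean oocyte nucleus diameter of 8 um.
print(estimate_total_primordial_follicles(12, 600, 5, 30, 8))  # -> 150.0
```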
Statistical Analyses
Statview statistical software was used to complete data analyses. All analyses were two-tailed (α = 0.05) and presented as means ± S.E.M. A series of two-group planned comparison repeated measures ANOVAs were completed using Treatment as the independent variable. We aimed to answer three key questions with our experimental data. We asked: (1) What role does daily E2-only treatment have with transitional menopause? For this question, the VCD-E2 group was compared to the VCD-Vehicle group.
(2) Does daily treatment with an individual progestogen impact cognition with transitional menopause, and is type of progestogen a factor for outcomes? To address this question, we compared the VCD-Vehicle group to the VCD-PROG group and to the VCD-LEVO group, as well as the VCD-PROG group to the VCD-LEVO group.
(3) What role does combination hormone therapy play for cognition with transitional menopause? The VCD-E2 group was compared to each combination group (VCD-E2 + PROG and VCD-E2 + LEVO) to assess the impact of adding a progestogen component to E2 therapy in a reproductive tract intact, but follicle-deplete, system. The VCD-PROG and VCD-LEVO groups were compared to their corresponding combination hormone treatment groups (VCD-E2 + PROG or VCD-E2 + LEVO, respectively) to understand how E2 alters progestogen-only effects in a reproductive tract intact, but follicle-deplete, system. Combination groups were also compared to the VCD-Vehicle group, and to each other to evaluate whether different progestogen components of combined hormone therapy matter for cognitive outcomes. Statistically significant two-group comparisons are reported herein, while select non-significant comparisons key to the highlighted questions are provided for context.
Water radial arm maze data were divided into three phase blocks, as previously published (Mennenga et al., 2015c;Braden et al., 2017;Prakapenka et al., 2018;Koebele et al., 2019). Day 1 was considered training and was excluded from the analysis. Days 2-5 were the Early Acquisition Phase, Days 6-9 the Late Acquisition Phase, and Days 10-12 the Asymptotic Phase. Each phase block was analyzed separately, and each error type was analyzed separately for each phase block, with WMC, WMI, and RM errors as the dependent measures. The three trials for WMC, or four trials for WMI and RM, were nested within days within each phase block (Early Acquisition Phase Block 1: 4 days, Late Acquisition Phase Block 2: 4 days, Asymptotic Phase Block 3: 3 days) as the repeated measures. Thus, these analyses consisted of two-group ANOVAs with Treatment as the independent variable, and two repeated measures variables of trials within days (Trials), and days within block (Days). Separate a priori two-group analyses were run for Trial 3 + Trial 4, the high working memory load trials, for WMC and WMI errors on each block based on prior age- and hormone-mediated effects found in our laboratory (Bimonte and Denenberg, 1999;Bimonte-Nelson et al., 2003;Acosta et al., 2010;Mennenga et al., 2015b,c;Koebele et al., 2019, 2020b;Prakapenka et al., 2018). Delayed memory retention data were analyzed independently for each treatment group by comparing WMC errors on Trial 3 on the last day of regular testing to Trial 3 on Day 13, the first post-delay trial on the Delay Day.
Morris water maze analyses were completed using the same two-group comparison structure. Swim Distance to the Platform (cm) was the dependent measure, and the four trials per day were nested within the 5 days of the task as the repeated measures. Performance was assessed across all 5 days of the task as well as across the four regular (non-probe trial) trials on Day 5 alone. Probe trial data were analyzed for each treatment group using Proportion Total Swim Distance in the NE (target) vs. SW (opposite) quadrants.
Visible platform analyses were completed for individual treatment groups. Analyses comparing performance on Trial 1 to Trial 6 were compared within each group. Latency to Platform (s) was the dependent measure, and the first and last trials were repeated measures.
Open field task analyses were completed for each two-group comparison. ANOVA was used to analyze total time (s) spent in the corners, center, and small center of the arena, as well as total number of entries made into the corner, center, and small centers of the arena to assess anxiety-like behavior. The total number of line crossings were assessed to evaluate locomotor activity during the task. The number of fecal boli produced during the 10 min trial was quantified.
Forced swim task analyses were completed for each twogroup comparison. ANOVA was used to analyze latency to first immobility (s), total immobility duration (s), total swimming duration (s), total climbing duration (s), and number of dives as measures of depressive-like behaviors, as well as the number of fecal boli produced during the trial.
Body weights, uterine weights, serum hormone levels, and ovarian follicle counts were analyzed using ANOVA. For each two-group comparison, Treatment was the independent variable and body weight (g), uterine weight (g), hormone levels (pg/mL or ng/mL), or follicle counts were the dependent measures. An additional set of analyses for ovarian follicle counts was carried out post hoc to include a comparison group of ovary-intact, non-VCD-treated rats from an independent data set in our laboratory quantified by FYXX Foundation (n = 10). This ovary-intact group received the respective vehicle injection (50% DMSO/50% saline) for VCD injections to provide additional context for the VCD-induced follicular depletion in the current study. Unless otherwise noted, the number of subjects per treatment group in the reported analyses was as follows: VCD-Vehicle n = 10, VCD-E2 n = 10, VCD-PROG n = 9, VCD-LEVO n = 9, E2 + PROG n = 10, and E2 + LEVO n = 10.
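As a rough illustration of the two-group repeated measures design described above, the sketch below runs one such comparison as a mixed-design ANOVA (between-subjects Treatment, within-subjects Day) using the pingouin package; the generated data frame, column names, and package choice are illustrative assumptions, not the Statview analyses actually performed.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: WMC errors per rat per day for two treatment groups.
rng = np.random.default_rng(0)
rows = []
for group, n in [("VCD-Vehicle", 10), ("VCD-E2", 10)]:
    for rat in range(n):
        for day in range(2, 6):  # Early Acquisition Phase, Days 2-5
            rows.append({"rat": f"{group}-{rat}", "treatment": group,
                         "day": day, "wmc_errors": rng.poisson(1.5)})
df = pd.DataFrame(rows)

# Mixed-design ANOVA: Treatment (between) x Day (within), with rats as subjects.
aov = pg.mixed_anova(data=df, dv="wmc_errors", within="day",
                     subject="rat", between="treatment")
print(aov[["Source", "F", "p-unc"]])
```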
Water Radial Arm Maze

Early Acquisition Phase (Days 2-5)

What role does daily E2-only treatment have in spatial learning and memory with transitional menopause?
The VCD E2 vs. VCD-Vehicle groups did not differ for WMC, WMI, or RM errors during the Early Acquisition Phase, suggesting that daily E2 treatment at the given dose did not affect early task learning in a model of transitional menopause compared to follicle-depleted rats that did not receive hormone treatment.
Does daily treatment with an individual progestogen impact cognition with transitional menopause, and does type of progestogen impact outcomes?
There were no differences between the VCD-Vehicle group and the VCD-PROG group or the VCD-LEVO group, nor between the VCD-PROG vs. VCD-LEVO groups for WMC, WMI, or RM during the Early Acquisition Phase. This suggests that with transitional menopause, daily progestogen treatment does not influence early task learning as compared to no hormone treatment, nor does type of progestogen differentially impact outcomes during learning.
What role does daily combination hormone therapy play for spatial learning and memory with transitional menopause?
For RM errors, there was a main effect of Treatment for the VCD-E2 vs. VCD-E2 + LEVO comparison [F (1 , 18) = 4.54, p < 0.05], where follicle-deplete rats treated with a combination of E2 and levonorgestrel made fewer RM errors compared to those treated with E2-only (Figure 3). For the VCD-E2 + PROG group vs. VCD-E2 + LEVO group, there was also a main effect [F (1 , 18) = 9.78, p < 0.01], where follicle-deplete rats treated with a combination of E2 plus levonorgestrel made fewer RM errors than those treated with a combination of E2 plus progesterone during the Early Acquisition Phase. Thus, a daily regimen of E2 plus levonorgestrel combined with transitional menopause may confer benefits to spatial reference memory performance during learning (Figure 3).
Late Acquisition Phase (Days 6-9)
There were no significant Treatment differences in WMC, WMI, or RM errors for any two-group comparison during the Late Acquisition Phase. For all error types, Day 1 was considered Training and was excluded from data analysis. The Early Acquisition Phase was defined as Days 2-5, the Late Acquisition Phase was defined as Days 6-9, and the Asymptotic Phase was defined as Days 10-12. Performance for each error subtype was analyzed separately. The n/group for all WRAM two-group analyses were: VCD-Vehicle n = 10, VCD-E2 n = 10, VCD-PROG n = 9, VCD-LEVO n = 9, VCD-E2 + PROG n = 10, and VCD-E2 + LEVO n = 10.
Asymptotic Phase (Days 10-12)
What role does daily E2-only treatment have in spatial learning and memory with transitional menopause?
There were no significant differences in WMC, WMI, or RM errors for the VCD-E2 vs. VCD-Vehicle group comparison during the Asymptotic Phase of testing (Figures 5, 6), suggesting that daily E2 treatment at the given dose did not significantly affect memory maintenance with transitional menopause compared to counterparts that did not receive hormone treatment.
Does daily treatment with an individual progestogen impact cognition with transitional menopause, and is type of progestogen a factor for spatial learning and memory?
During the Asymptotic Phase, there were no main effects of Treatment for WMC errors. There was a Trial × Treatment interaction present for WMC errors where follicle-deplete rats treated with progesterone performed worse than those treated with levonorgestrel (VCD-PROG vs. VCD-LEVO: F (2 , 32) = 3.76, p < 0.05; Figure 4A), indicating that progestogen type has an impact on the ability to handle an increasing working memory load. No significant differences in WMI or RM errors were detected for this comparison in the Asymptotic Phase.
For WMI, there was a main effect of Treatment for the VCD-Vehicle vs. VCD-PROG comparison [F (1 , 17) = 5.26, p < 0.05; Figure 5A] and a Trial × Treatment interaction [F (3 , 51) = 2.87, p < 0.05; Figure 5B], whereby follicle-deplete rats treated with progesterone made more WMI errors compared to those without subsequent hormone treatment. When Trial 3 + Trial 4, the highest working memory load trials, were evaluated for WMI errors, there was a main effect of Treatment [F (1 , 17) = 5.21, p < 0.05; Figure 5C], again indicating that follicle-deplete rats treated with progesterone made more WMI errors when working memory load was burdened compared to transitionally menopausal rats that did not receive subsequent hormone treatment. No differences in WMC or RM errors were present for this comparison.
What role does daily combination hormone therapy play for spatial learning and memory with transitional menopause?
During the Asymptotic Phase of testing, there was a Trial × Treatment interaction for WMC errors within the VCD-PROG group vs. VCD-E2 + PROG group comparison [F (2 , 34) = 3.42, p < 0.05; Figure 4B]; when Trial 3 + Trial 4, the high working memory load trials, were probed for this comparison, there was a main effect of Treatment for WMC errors [F (1 , 17) = 4.66, p < 0.05; Figure 4C], where rats treated with E2 plus progesterone made fewer errors than progesterone-only counterparts. Similarly, for WMI errors, there was a main effect of Treatment for the VCD-PROG vs. VCD-E2 + PROG comparison [F (1 , 17) = 6.64, p < 0.05; Figure 5A], indicating that the addition of E2 to progesterone treatment enhanced performance compared to progesterone alone on WMI errors across all trials; a Trial × Treatment interaction [F (3 , 51) = 3.17, p < 0.05; Figure 5D] was also present for this comparison. When Trial 3 + Trial 4, the high working memory load trials, were probed for WMI errors, a main effect of Treatment persisted [F (1 , 17) = 6.67, p < 0.05; Figure 5E], where combined E2 plus progesterone treatment enhanced performance compared to progesterone-only treatment, particularly when memory load was highly burdened for WMI errors. A main effect of Treatment was also present for RM errors between VCD-PROG and VCD-E2 + PROG groups [F (1 , 17) = 7.56, p < 0.05; Figure 6A]. As such, across all error types, a daily combination treatment of E2 plus progesterone enhanced spatial memory performance compared to progesterone-only treatment in transitionally menopausal rats in the Asymptotic Phase. When E2-only treatment was compared to this combination of daily E2 plus progesterone, a Trial × Treatment interaction for RM errors was present [F (3 , 54) = 5.72, p < 0.01; Figure 6B] with a higher mean error score for the VCD-E2 treated group as compared to the combined VCD-E2 + PROG treated group on Trial 4, suggesting a potential benefit for the VCD-E2 + PROG group's spatial reference memory at the highest working memory load compared to E2-only treatment as well, although RM performance across trials should be interpreted with caution given a cap of four possible RM errors. Collectively, when ovaries remained structurally intact but were follicle-deplete, combined E2 plus progesterone treatment improved spatial memory performance compared to treatment with E2 alone or progesterone alone.
Six-Hour Delay
Treatment groups were analyzed separately for delayed memory retention assessment. WMC errors committed on the first postdelay trial (Trial 3) were compared to errors on Trial 3 on the last day of baseline testing. There was a main effect of Delay Day for the VCD-Vehicle group [F (9 , 1) = 10.76, p < 0.01; Figure 7A], VCD-E2 group [F (9 , 1) = 21.00, p < 0.01; Figure 7B], VCD-E2 + PROG group [F (9 , 1) = 7.36, p < 0.05; Figure 7E], and VCD-E2 + LEVO group [F (9 , 1) = 19.29, p < 0.01; Figure 7F], where most groups made more errors when an extended delay occurred, regardless of hormone therapy regimen. Analyses did not reach statistical significance for the VCD-PROG (Figure 7C) or VCD-LEVO group (Figure 7D), suggesting that the progestogen-only treatments promoted some level of memory retention across the delay period.
Morris Water Maze
There were no Treatment effects across all 5 days of the task or on Day 5 alone between VCD-Vehicle and VCD-E2 groups, indicating that daily E2 treatment at the given dose did not alter spatial reference memory compared to follicle-deplete rats that did not receive subsequent hormone treatment.
Does Daily Treatment With an Individual Progestogen Impact Cognition With Transitional Menopause, and Is Type of Progestogen a Factor for a Simple Spatial Reference Memory Task?
There were no Treatment effects for any planned comparison including the progestogen-only groups across all 5 days of the task or on Day 5 alone. For the combination hormone therapy comparisons, however, Treatment effects were present on the last day of the task (Figure 8C), where follicle-deplete rats treated with only E2 swam less distance to the platform compared to follicle-deplete rats administered a combination hormone therapy treatment. Thus, the addition of an exogenous progestogen, whether it was an endogenous-like progesterone or the synthetic progestin levonorgestrel, in combination with E2 impaired performance compared to E2 administration alone at the end of this simple spatial reference memory task.
Probe Trial
Probe trial analysis demonstrated that each treatment group effectively learned to use a spatial strategy to solve the MM task (Figures 8D-I). Indeed, when the platform was removed from the maze, each treatment group spent a greater proportion of total swim distance in the previously platformed target (NE) quadrant than in the opposite (SW) quadrant.
Open Field Task
One subject from the VCD-E2 + PROG group was excluded from OFT analyses due to a technical error. Figure 10A provides a schematic of the OFT with boxes overlaid to operationally define the Corners, Center, and Small Center within the arena.
What Role Does Daily E2-Only Treatment Have in Anxiety-Like Behavior With Transitional Menopause?
Regarding Corner Time (s), transitionally menopausal rats treated with daily E2 spent less time in the corners of the OFT than counterparts without hormone treatment, suggesting a decrease in anxiety-like behavior when E2-only hormone therapy is given after follicular depletion as compared to no hormone therapy given after follicular depletion (Figure 10B). There were no effects present for time in the Center or Small Center for this comparison, nor were there differences in entries into the Corners, Center, or Small Center.
Does Daily Treatment With an Individual Progestogen Impact Anxiety-Like Behavior With Transitional Menopause, and Is Type of Progestogen a Factor for Outcomes?
Regarding Corner Time (s), transitionally menopausal rats treated with daily progesterone alone spent less time in the corners of the OFT when compared to counterparts without hormone treatment [Treatment main effect VCD-Vehicle vs. VCD-PROG comparison: F (1 , 17) = 4.80, p < 0.05], suggesting a decrease in anxiety-like behavior for the progesterone-treated group (Figure 10B). There were no effects present for time in the Center or Small Center for these comparisons, nor were there differences in entries into the Corners, Center, or Small Center.
What Role Does Daily Combination Hormone Therapy Play for Anxiety-Like Behavior With Transitional Menopause?
For Corner entries, the VCD-E2 + LEVO group differed from both the VCD-E2 group and the VCD-E2 + PROG group; in both analyses, the VCD-E2 + LEVO group showed increased entries into the corners (Figure 10F). A Treatment effect was also indicated within the VCD-E2 vs. VCD-E2 + LEVO comparison for Center entries [F (1 , 18) = 7.14, p < 0.05] (Figure 10G) and Small Center entries [F (1 , 18) = 22.59, p < 0.001] (Figure 10H). Increased Small Center entries for the VCD-E2 + LEVO group were also evident relative to the VCD-Vehicle and VCD-E2 + PROG groups (Figure 10H), and the VCD-E2 + LEVO group made more Total Line Crossings than the VCD-Vehicle, VCD-E2, and VCD-E2 + PROG groups (Figure 10I), indicating greater locomotor activity in the arena. Transitionally menopausal rats treated with E2-only produced more fecal boli compared to rats without hormone therapy treatment (VCD-Vehicle vs. VCD-E2: F (1 , 18) = 8.27, p < 0.05) and compared to rats treated with a combination of E2 plus progesterone (VCD-E2 vs. VCD-E2 + PROG: [F (1 , 18) = 8.87, p < 0.01]) during the 10 min trial (Figure 10E).
Forced Swim Task
What Role Does Daily E2-Only Treatment Have in Depressive-Like Behaviors With Transitional Menopause?
Latency to Immobility, Total Immobility Duration, Total Swimming Duration, Total Climbing Duration, Number of Dives, and Number of Fecal Boli did not differ between rats treated with E2 only and counterparts not administered subsequent hormone treatment (Figure 11).
Does Daily Treatment With an Individual Progestogen Impact Depressive-Like Behavior With Transitional Menopause, and Does Type of Progestogen Have an Impact?
No differences were found in Latency to Immobility, Total Immobility Duration, Total Swimming Duration, Total Climbing Duration, Number of Dives, or Number of Fecal Boli for any planned comparison including the VCD-Vehicle group compared to the VCD-PROG or VCD-LEVO group, nor did VCD-PROG and VCD-LEVO groups differ from one another (Figure 11).
What Role Does Daily Combination Hormone Therapy Play for Depressive-Like Behaviors With Transitional Menopause?
Latency to First Immobility differed when the combined hormone therapy groups were compared with the VCD-Vehicle and VCD-E2 groups (Figure 11A). In all comparisons, transitionally menopausal rats treated with combined E2 plus progestogen hormone treatment regimens had longer latencies to immobility, indicating that the addition of either natural progesterone or the synthetic progestin levonorgestrel to E2 treatment yields antidepressant-like behavior compared to E2-only treatment or no hormone treatment following transitional menopause (Figure 11A). Furthermore, Total Immobility Duration was increased in the VCD-Vehicle group compared to the VCD-E2 + PROG group [F (1 , 18) = 4.55, p < 0.05], and compared to the VCD-E2 + LEVO group [F (1 , 18) = 6.94, p < 0.05]. In both comparisons, the groups treated with combined E2 plus progestogen hormone regimens spent less total time immobile, indicating that combined hormone therapy regimens induce antidepressant-like behavior compared to no hormone treatment with transitional menopause (Figure 11B). Additionally, VCD-LEVO vs. VCD-E2 + LEVO differed for Total Immobility Duration [F (1 , 17) = 8.65, p < 0.01], where rats treated with levonorgestrel alone spent more time immobile compared to counterparts treated with a combination of E2 plus levonorgestrel (Figure 11B). Although Total Swimming Duration did not differ for any comparison (Figure 11C), rats treated with a combination of E2 plus levonorgestrel spent more time presenting with climbing behavior compared to counterparts that did not receive hormone therapy after follicular depletion (VCD-Vehicle vs. VCD-E2 + LEVO: [F (1 , 18) = 6.62, p < 0.05]) (Figure 11D). Taken together, these results suggest that a combined hormone therapy regimen, particularly a combination of E2 and levonorgestrel, results in antidepressant-like effects compared to no hormone treatment, E2-only treatment, or progestogen-only treatment after transitional menopause.
Vaginal Cytology
Across two consecutive days of vaginal cytology monitoring, most VCD-Vehicle-treated rats exhibited mixed cytology resembling metestrus-like smears, suggesting disrupted estrous cyclicity, which is expected following accelerated follicular depletion without subsequent hormone therapy treatment. Rats that received E2 only displayed primarily cornified cells resembling estrus-like smears, which was expected as a result of daily E2 administration. Rats treated with progesterone only or levonorgestrel only had primarily metestrus- or diestrus-like smears, indicative of a relatively higher ratio of circulating progesterone to estrogen levels. The VCD-E2 + PROG group presented with cytology mostly resembling metestrus-like smears, and some diestrus-like smears, while the VCD-E2 + LEVO group showed estrus- and metestrus-like smears. Based on prior data from our and other laboratories, normal estrous cyclicity is disrupted approximately 4 months after VCD injection administration, and vaginal cytology can be modified by a given hormone therapy regimen (Koebele et al., 2020a).
Serum Hormone Levels
One VCD-Vehicle rat, all VCD-E2, and all VCD-E2 + LEVO rats were excluded from the androstenedione analyses because the measured serum hormone level was below the detectable limit of the assay. Additionally, one VCD-Vehicle rat was excluded from the E2 analyses due to insufficient serum volume needed to run the assay. The n per group for each steroid hormone assay is included in the Figure 12 caption summarizing serum hormone levels.
How Does Daily E2-Only Treatment Affect Serum Hormone Profiles With Transitional Menopause?
Transitionally menopausal rats treated with daily E2 had increased circulating E2 levels compared to the Vehicle-treated counterparts, as expected [F (1 , 17) = 10.82, p < 0.01] ( Figure 12A). Progesterone levels did not differ between VCD-Vehicle and VCD-E2 groups ( Figure 12B). Lastly, all subjects within the VCD-E2 group had undetectable levels of androstenedione, and thus the comparison could not be carried out between VCD-Vehicle vs. VCD-E2 groups ( Figure 12C).
How Does Daily Treatment With Progesterone or Levonorgestrel Affect Serum Hormone Levels With Transitional Menopause, and Does Type of Progestogen Impact Outcomes?
Treatment with progesterone or levonorgestrel did not alter circulating E2 levels compared to transitionally menopausal counterparts that did not receive hormone treatment or compared to each other (Figure 12A). The VCD-PROG group had higher circulating progesterone levels than the VCD-Vehicle group [F (1 , 17) = 70.95, p < 0.0001] and the VCD-LEVO group [F (1 , 16) = 71.26, p < 0.0001] (Figure 12B). Rats treated with levonorgestrel had similar circulating progesterone profiles compared to transitionally menopausal rats that did not receive hormone therapy, suggesting that this synthetic progestin did not alter endogenous progesterone levels in follicle-deplete ovary-intact rats. Interestingly, the VCD-PROG group had higher androstenedione levels compared to the VCD-Vehicle group [F (1 , 16) = 20.53, p < 0.001], and compared to the VCD-LEVO group [F (1 , 16) = 21.49, p < 0.001] (Figure 12C), suggesting that follicle-deplete rats with exogenous administration of natural progesterone experience increased circulating androgen levels compared to follicle-deplete rats without hormone treatment, or compared to those treated with the synthetic progestin levonorgestrel. On the other hand, treatment with levonorgestrel alone did not impact circulating androstenedione levels compared to counterparts that did not receive hormone therapy.
How Does Daily Combination Hormone Therapy Affect Serum Hormone Levels With Transitional Menopause?
Circulating E2 levels did not differ between VCD-E2 and VCD-E2 + PROG groups or VCD-E2 and VCD-E2 + LEVO groups, indicating that the addition of a progestogen to E2 treatment was insufficient to alter circulating E2 levels, at least at the given doses. Likewise, rats treated with either type of progestogen independently had less circulating E2 compared to their respective combined hormone therapy group (VCD-PROG vs. VCD-E2 + PROG [F (1 , 17) = 16.83, p < 0.001]; VCD-LEVO vs. VCD-E2 + LEVO [F (1 , 17) = 23.44, p < 0.001]). The VCD-E2 + PROG vs. VCD-E2 + LEVO groups did not differ in circulating E2 levels; thus, the type of progestogen (i.e., natural progesterone or synthetic progestin levonorgestrel) did not impact circulating E2 levels when the hormone therapy was administered in a combined estrogen plus progestogen fashion. Overall, the E2 component is likely the primary driver in determining circulating E2 levels in a given group (Figure 12A). The VCD-E2 + PROG group had increased circulating progesterone levels compared to the VCD-Vehicle group [F (1 , 18) = 103.78, p < 0.0001], the VCD-E2 group [F (1 , 18) = 62.29, p < 0.0001], the VCD-E2 + LEVO group [F (1 , 18) = 74.99, p < 0.0001], and, interestingly, the VCD-PROG alone group [F (1 , 17) = 9.36, p < 0.01]; the outcome from this latter comparison indicates that combined E2 plus progesterone therapy may have a synergistic effect on increasing circulating progesterone levels compared to progesterone-only treatment. Circulating progesterone levels did not differ between VCD-Vehicle vs. VCD-E2 + LEVO groups, VCD-E2 vs. VCD-E2 + LEVO groups, or VCD-LEVO vs. VCD-E2 + LEVO groups, suggesting that the synthetic progestin levonorgestrel does not influence endogenous progesterone production itself, at least at the dose given in this experiment (Figure 12B).
All subjects in the VCD-E2 + LEVO group had undetectable levels of circulating androstenedione, and thus could not be evaluated relative to respective comparison groups. Because all subjects treated with E2 only likewise had undetectable androstenedione levels, this group also could not be compared to the VCD-E2 + PROG group. The VCD-E2 + PROG group did not differ in androstenedione levels from the VCD-Vehicle group. Androstenedione levels differed between VCD-PROG and VCD-E2 + PROG groups, whereby the combination hormone therapy regimen yielded reduced androstenedione levels compared to progesterone treatment alone [F (1 , 17) = 62.90, p < 0.0001] (Figure 12C).
Ovarian Follicle Counts
Two subjects from the VCD-Vehicle group, two subjects from the VCD-LEVO group, one subject from the VCD-E2 + PROG group, and one subject from the VCD-E2 + LEVO group were excluded from follicle analyses due to poor tissue quality. Thus, the n/group for all follicle analyses was the following: VCD-Vehicle n = 8, VCD-E2 n = 10, VCD-PROG n = 9, VCD-LEVO n = 7, VCD-E2 + PROG n = 9, and VCD-E2 + LEVO n = 9. The independent ovary-intact Vehicle reference group n = 10.
How Does Daily E2-Only Treatment Affect Ovarian Follicle Profiles With Transitional Menopause?
Compared to the VCD-Vehicle group, the VCD-E2 group had significantly fewer primordial follicles [F (1 , 16) = 6.10, p < 0.05] and fewer primary follicles [F (1 , 16) = 9.89, p < 0.01] (Figures 13A,B), an effect we have previously observed in follicle-depleted rats with tonic E2 treatment (Koebele et al., 2020a). Secondary follicles, antral follicles, and corpora lutea counts did not differ between VCD-Vehicle and VCD-E2 groups, although both groups exhibited substantial follicle decline, indicating successful VCD-induced follicular depletion. In fact, there were no detectable antral follicles for any subject treated with E2 only (Figures 13C-E).
How Does Daily Treatment With Progesterone or Levonorgestrel Affect Ovarian Follicle Profiles With Transitional Menopause, and Does Type of Progestogen Matter?
There were no Treatment group differences in primordial follicles, primary follicles, secondary follicles, antral follicles, or corpora lutea counts in the VCD-Vehicle group vs. the VCD-PROG group or vs. VCD-LEVO group, nor did the VCD-PROG and VCD-LEVO groups differ from each other, indicating that progestogen treatment alone does not impact the composition of the ovarian follicle pool in an accelerated follicular depletion model (Figures 13A-E).
How Does Daily Combination Hormone Therapy Affect Ovarian Follicle Profiles in a Model of Transitional Menopause?
Estimated primordial follicle counts did not differ for VCD-Vehicle rats compared to the VCD-E2 + PROG group or compared to the VCD-E2 + LEVO group. Compared to transitionally menopausal rats treated with E2 only, transitionally menopausal rats treated with E2 plus levonorgestrel had more primordial follicles [F (1 , 17) = 4.86, p < 0.05] (Figure 13A), suggesting that this combined hormone treatment protects remaining healthy follicles in the ovarian reserve during this menopause transition time point compared to treatment with E2 alone. Estimated primordial follicle counts, primary follicles, secondary follicles, and antral follicles did not differ for combined hormone therapy groups compared to their respective progestogen counterparts, nor did they differ from each other. In addition, the VCD-E2 + PROG group had more corpora lutea compared to the VCD-E2 group [F (1 , 17) = 6.93, p < 0.05], indicating that rats treated with E2 plus progesterone may have occasional ovulatory cycles during the menopause transition, although both groups were all significantly depleted and categorized as infertile ( Figure 13E).
Confirmation of Follicular Depletion in VCD-Treated Groups: Comparison to an Ovary-Intact Vehicle Reference Group
Overall, groups treated with VCD showed substantial ovarian follicle loss in comparison to normally aging ovary-intact rats that did not receive exposure to VCD. To confirm that VCD treatment depleted the ovarian follicle reserve in all treatment groups in the current study, we utilized an independent data set of ovarian follicle counts collected in our laboratory from rats that received the complementary Vehicle injection for VCD administration, similar to a comparison procedure we have published previously (Koebele et al., 2020a). This ovary-intact Vehicle reference group was compared to each VCD-treated group in the current study (Figures 13A-E; specific comparisons below), with analyses showing that each VCD group had fewer primordial follicles, secondary follicles, antral follicles, and corpora lutea than this ovary-intact Vehicle reference group.
FIGURE 12 | Serum hormone levels. (A) E2 was elevated in VCD-E2, VCD-E2 + PROG, and VCD-E2 + LEVO groups compared to VCD-Vehicle rats. Additionally, combination hormone therapy groups had elevated E2 compared to their respective progestogen-only groups. E2 analysis n/group: VCD-Vehicle n = 9; VCD-E2 n = 10; VCD-PROG n = 9; VCD-LEVO n = 9; VCD-E2 + PROG n = 10; VCD-E2 + LEVO n = 10. (B) Progesterone was elevated in the VCD-PROG group and the VCD-E2 + PROG group compared to the VCD-Vehicle group, VCD-E2 group, and VCD-LEVO group. The combination hormone group had higher progesterone levels compared to the VCD-PROG group alone. Progesterone analysis n/group: VCD-Vehicle n = 10; VCD-E2 n = 10; VCD-PROG n = 9; VCD-LEVO n = 9; VCD-E2 + PROG n = 10; VCD-E2 + LEVO n = 10. (C) All subjects in the VCD-E2 group and VCD-E2 + LEVO group had undetectable levels of androstenedione. Androstenedione was elevated in the VCD-PROG group compared to VCD-Vehicle, VCD-LEVO, and VCD-E2 + PROG groups. Androstenedione analysis n/group: VCD-Vehicle n = 9; VCD-E2 n = 0 [undetectable]; VCD-PROG n = 9; VCD-LEVO n = 9; VCD-E2 + PROG n = 10; VCD-E2 + LEVO n = 0 [undetectable]. Significance: ** = p < 0.01, *** = p < 0.001, **** = p < 0.0001.
Body Weights
Body Weight measurements across the experiment are illustrated in Figure 14A.
How Does Daily E2-Only Treatment Affect Body Weight With Transitional Menopause?
As we have previously observed (Koebele et al., 2020a), there were no body weight differences between the VCD-Vehicle group and the VCD-E2 group at euthanasia, indicating that daily E2 treatment was insufficient to alter body weight compared to a reproductive tract intact, but follicle-deplete, rat not treated with hormone therapy (Figure 14B).
How Does Daily Treatment With Progesterone or Levonorgestrel Affect Body Weight With Transitional Menopause, and Does Type of Progestogen Matter?
There were no differences in body weight between the VCD-Vehicle and the VCD-PROG group, or the VCD-LEVO group, at euthanasia. VCD-PROG vs. VCD-LEVO groups did not differ in average body weight either. Overall, this indicates that in reproductive tract intact, follicle-deplete rats, daily progestogen treatment alone did not alter body weight compared to counterparts not treated with hormone therapy. Moreover, body weights from progestogen-only groups did not differ from each other (Figure 14B).
How Does Daily Combination Hormone Therapy Affect Body Weight With Transitional Menopause?
The VCD-E2 + PROG group weighed less than the VCD-Vehicle group [F (1 , 18) = 6.12, p < 0.05] as well as less than the VCD-PROG group [F (1 , 17) = 11.39, p < 0.01] at euthanasia. The VCD-E2 + LEVO group weighed less than LEVO-only treated counterparts as well [F (1 , 17) = 7.84, p < 0.05]. However, there were no weight differences indicated between the VCD-Vehicle vs. VCD-E2 + LEVO group at euthanasia. The combination hormone therapy regimens did not have an impact on body weight compared to E2-only treatment, nor did they differ from each other. Overall, these data suggest that a combined hormone therapy regimen, particularly one containing natural progesterone, may lead to weight loss with a follicle-deplete background (Figure 14B).
FIGURE 13 | Ovarian Follicle Counts. An independent ovary-intact reference group (n = 10) is included to assess successful follicular depletion following VCD treatment. The letter "a" indicates that this ovary-intact reference group was significantly different from each VCD-treated group. (A) Estimated primordial follicle counts were decreased in the VCD-E2 group compared to the VCD-Vehicle group and the VCD-E2 + LEVO group. (B) Primary follicles were decreased in the VCD-E2 group compared to the VCD-Vehicle group, replicating prior work. (C) Secondary follicle counts were significantly depleted in VCD-treated groups, indicating successful accelerated follicular atresia. (D) Antral follicle counts were significantly depleted in VCD-treated groups, indicating successful accelerated follicular atresia. (E) The VCD-E2 + PROG group had more corpora lutea compared to the VCD-E2 group, suggesting occasional ovulatory cycles in this group during the transition to reproductive senescence. Significance: * = p < 0.05, ** = p < 0.01.
Uterine Weights
How Does Daily E2-Only Treatment Affect Uterine Weight With Transitional Menopause?
The VCD-Vehicle and VCD-E2 groups did not differ in uterine weight (Figure 14C). Although we have previously reported an increase in uterine weight with E2-only treatment in a VCD model, that experiment administered E2 tonically using Alzet osmotic pumps (Koebele et al., 2020a). It is possible that a low dose of E2 given via daily injection is insufficient to induce persistent changes in uterine weight in transitionally menopausal rats relative to transitionally menopausal rats not receiving hormone therapy treatment.
How Does Daily Treatment With Progesterone or Levonorgestrel Affect Uterine Weight With Transitional Menopause, and Does Type of Progestogen Matter?
While VCD-Vehicle vs. VCD-LEVO groups did not differ in uterine weights, the VCD-PROG group had decreased uterine weights compared to the VCD-Vehicle group [F (1 , 17) = 8.14, p < 0.05] and compared to the VCD-LEVO group [F (1 , 16) = 6.92, p < 0.05], suggesting that daily natural progesterone treatment attenuates uterine weight in reproductive tract-intact but follicle-deplete rats (Figure 14C).
How Does Daily Combination Hormone Therapy Affect Uterine Weight With Transitional Menopause?
Neither combination hormone therapy regimen, E2 plus progesterone nor E2 plus levonorgestrel, had an impact on uterine weight as compared to transitionally menopausal rats without hormone therapy. The combination of E2 plus progesterone decreased uterine weights compared to E2-only treatment [VCD-E2 group vs. VCD-E2 + PROG group: F (1 , 18) = 5.43, p < 0.05], while the combination E2 plus levonorgestrel did not yield this decrease compared to E2-only treatment. Progesterone-only treatment also reduced uterine weights compared to combined E2 plus progesterone treatment [F (1 , 17) = 31.58, p < 0.0001]. Uterine weights did not differ between rats treated with levonorgestrel alone and counterparts treated with a combination of E2 plus levonorgestrel. However, when E2 was administered with levonorgestrel, this combination resulted in higher uterine weights than when E2 was combined with natural progesterone [F (1 , 18) = 4.627, p < 0.05] (Figure 14C).
FIGURE 14 | Peripheral markers of overall health and uterine stimulation. (A) Body weight changes across the experimental timeline. (B) At the end of the experiment, the VCD-E2 + PROG group weighed less than the VCD-Vehicle group and the VCD-PROG group, suggesting combination hormone therapy promotes weight maintenance compared to no hormone therapy treatment or progesterone treatment alone. The VCD-E2 + LEVO group also weighed less than its VCD-LEVO alone counterpart, again suggesting combination hormone therapy promotes weight maintenance. (C) PROG treatment reduced uterine weight compared to VCD-Vehicle, VCD-LEVO, and VCD-E2 + PROG groups. VCD-E2 + PROG uterine weight was attenuated compared to VCD-E2 treatment alone, suggesting progesterone blocked uterine proliferation. The VCD-E2 + LEVO group uteri weighed more than those in the VCD-E2 + PROG group, indicating less progestin-induced attenuation of uterine stimulation compared to natural progesterone when in a combined hormone therapy regimen. Significance: * = p < 0.05, ** = p < 0.01, **** = p < 0.0001.
DISCUSSION
Using the VCD accelerated follicular depletion model of transitional menopause, this experiment evaluated independent and combined effects of daily E2, progesterone, and levonorgestrel treatment on several aspects of cognition, including spatial memory, anxiety-like, and depressive-like behaviors in middle-aged, ovarian follicle-deplete female rats. Endocrine and ovarian follicular profiles were reported in conjunction with general health measures to provide the first comprehensive report of cognitive outcomes associated with independent and combined menopausal hormone therapy regimens in a transitional menopause model. Until now, preclinical investigations into combined hormone therapy regimens have been conducted in Ovx rats (Gibbs, 2000;Simone et al., 2015;Prakapenka et al., 2018), and evaluations of hormone effects utilizing the VCD model have been limited to estrogen-only (Acosta et al., 2010;Pestana-Oliveira et al., 2018;Long et al., 2019;Koebele et al., 2020a). Divergent cognitive, anxiety-like, and depressive-like profiles were observed dependent upon the type of clinically relevant, daily hormone regimen administered. Overall, under the current experimental parameters, progesterone-only treatment produced detrimental impacts on spatial working memory, while combined E2 plus progestogen treatments resulted in beneficial cognitive effects spanning spatial memory, anxiety-like measures, and depressive-like measures, as well as favorable body and uterine weight profiles in a follicle-deplete, ovary-intact transitional menopause model. Collectively, these findings demonstrate that the presence of follicle-depleted ovarian tissue and the specific formulation of hormone treatment not only yield unique behavioral phenotypes, but are critical considerations when interpreting outcomes in both preclinical and clinical evaluations.
Regarding spatial memory performance, daily E2 treatment in follicle-deplete rats had a neutral effect on working and reference memory compared to counterparts without subsequent hormone treatment. Mode of hormone administration could impact cognitive outcomes in a transitional menopause background. Indeed, we have recently shown that tonic, chronic administration of E2 via a subcutaneous Alzet osmotic pump had beneficial learning effects, and some detrimental memory effects, in follicle-deplete rats of the same age (Koebele et al., 2020a). Thus, although the age of the rats, as well as the VCD treatment, hormone dose, and behavior protocol were constant across studies, varying the drug administration route from a tonic exposure to a daily injection likely altered spatial working memory outcomes.
We also report here that the combined hormone therapy regimen containing E2 plus the synthetic progestin levonorgestrel improved spatial reference memory during task acquisition in the WRAM compared to E2-only treatment. This suggests a unique and broad benefit in the transitionally menopausal model that was not observed in a surgical menopause model, wherein combined E2 plus levonorgestrel treatment attenuated the beneficial effects of E2 alone after Ovx (Prakapenka et al., 2018). Thus, the presence or absence of follicle-deplete ovarian tissue in middle-age plays a role in the cognitive outcomes of E2 plus levonorgestrel combination hormone treatment. Rats that received E2 plus levonorgestrel treatment also had improved reference memory during task acquisition compared to rats treated with E2 plus progesterone concomitantly, suggesting a unique cognitively beneficial role for levonorgestrel when combined with E2 to enhance learning on a complex spatial working memory task. Reference memory benefits observed for the rats treated with daily E2 plus levonorgestrel treatment did not carry over into MM, indicating that the presence of a working memory component in a task alters outcomes on the reference memory measure, an effect we have previously shown in normally aging, ovary-intact rats without hormone treatment (Bernaud et al., 2021). During the latter portion of WRAM testing, transitionally menopausal rats treated with only progesterone showed working memory impairments when working memory was taxed compared to counterparts without hormone treatment, with levonorgestrel treatment, or with E2 plus progesterone treatment. This progesterone-only induced cognitive impairment has been observed in past work from our laboratory and others using the Ovx menopause model (Chesler and Juraska, 2000;Bimonte-Nelson et al., 2006;Harburger et al., 2007;Lowry et al., 2010;Sun et al., 2010;Braden et al., 2015). On the MM, transitionally menopausal rats administered E2-only had significantly better performance on the last day of the task compared to both combination treatment groups, such that in the case of a simple spatial reference memory-only task, the combination of progesterone or levonorgestrel with E2 attenuated performance compared to E2 treatment alone. However, regardless of treatment, all rats spatially localized to the previously platformed area during the probe trial, indicating the effective use of a spatial strategy in the MM. Taken together, the cognitive effects resulting from exogenous hormone treatment may be specific to memory domain, task complexity, and menopause type (Koebele et al., in press). It is also of note that hormone therapy regimens in this study began after follicular depletion was substantial, and cognitive outcomes could have been impacted by the timing of the hormone therapy administration relative to the extent of follicular depletion.
Regarding anxiety-like behavior as measured by the OFT, transitionally menopausal rats treated with a combination of E2 plus levonorgestrel demonstrated less anxiety-like behavior as defined by more time and entries into the open field center compared to E2-only treatment, as well as more entries into the smallest center designation compared to transitionally menopausal rats without hormone therapy, or those given E2-only, or E2 plus progesterone. The E2 plus levonorgestrel group also had increased Total Line Crossings in the OFT, suggesting increased overall locomotor activity with this hormone treatment combination. Increased time in the corners of the open field in the VCD-Vehicle group indicates that the endogenous hormone profile associated with transitional menopause without subsequent hormone therapy increases anxiogenic behavior compared to the profile of transitional menopause with E2-only or progesterone-only administration. This observation corresponds to clinical literature showing increased de novo affective disorders during midlife and the transition to menopause, and calls for further evaluations of midlife-aged individuals given these hormone therapies (Maki et al., 2012;Weber et al., 2014;Soares, 2019;Parry, 2020;Stute et al., 2020). Overall, the combination of E2 and levonorgestrel produced a favorable profile of reduced anxiety-like behaviors compared to other groups. This is particularly noteworthy, as E2-only therapy has been shown to alleviate affective symptoms during the menopause transition, but not in the post-menopausal life stage (Lokuge et al., 2011); perhaps combined hormone regimens could be a novel pathway to alleviate anxiety symptoms in individuals who are reproductive-tract-intact but ovarian follicle-depleted. Regarding depressive-like behavior quantified in the FST, transitionally menopausal rats given combined hormone therapy regimens, irrespective of progestogen type, exhibited longer latencies to immobility and spent less time immobile overall. This suggests that combined hormone therapy regimens, particularly those containing levonorgestrel, produce advantageous outcomes for depressive-like behaviors with a follicle-deplete, ovary-intact background. It is important to acknowledge that traditional FST measures have more recently been discussed within the context of responsiveness or coping after a severe acute stressor, rather than a pure measure of persistent depressive-like behavior (Commons et al., 2017), and that immobility could be an adaptive response rather than a despair-like behavior (Molendijk and de Kloet, 2015). In the future, it will be important to capture the impact of variations in hormone therapy regimens on additional tasks that encompass varied expressions of anxiety-like and depressive-like behavior in rodents.
In terms of physiological measures, all groups treated with E2 had elevated circulating E2 levels compared to groups that were not treated with E2. Circulating progesterone was increased in groups treated with progesterone. Of particular interest, transitionally menopausal rats treated with a combination of E2 and natural progesterone displayed elevated serum progesterone levels compared to counterparts treated with progesterone alone, which may point to a mechanism by which the combined hormone treatment containing E2 plus progesterone increased natural progesterone production to a greater extent than did the exogenous progesterone treatment alone. Circulating androstenedione levels were undetectable in rats treated with E2 alone or in combination with levonorgestrel, suggesting a potential role of exogenous E2 in mediating endogenous androstenedione production, which is synthesized in the interstitial ovarian tissue. Rats treated with progesterone had elevated circulating androstenedione levels compared to counterparts without hormone treatment, with synthetic levonorgestrel, or with combined E2 plus progesterone regimens, indicating that exogenous progesterone alone promotes the synthesis of endogenous androstenedione.
With regard to ovarian follicle counts, we report that the VCD-E2 treated group had significantly fewer primordial and primary follicles compared to the VCD-Vehicle group, corresponding to recent work from our laboratory showing similar effects with tonically administered E2 (Koebele et al., 2020a). This is a novel phenomenon observed within the middle-aged VCD model, wherein exogenous E2-only treatment may further accelerate follicular depletion by a yet-unknown mechanism. One possibility is that exogenous E2-associated rapid follicular depletion may be moderated, in part, by interactions with estrogen receptor-beta (Chakravarthi et al., 2020). Moreover, a recent report in adult ovary-intact mice revealed that administration of the synthetic estrogen ethinyl estradiol downregulated estrogen receptor expression and oxytocin receptor expression in ovarian tissue, with all receptor downregulation persisting even after treatment was discontinued (Garbett et al., 2020), pointing to a role for exogenous estrogen treatment in accelerated follicular depletion in rodents. Interestingly, the VCD-E2 + PROG group had statistically more corpora lutea present compared to the VCD-E2 alone group, such that the group administered E2 only was largely anovulatory, whereas other groups may have had an occasional ovulatory cycle during depletion, as has been observed in individuals during the human menopause transition (O'Connor et al., 2009;Burger, 2011), resulting in quantifiable corpora lutea at the time of evaluation.
The addition of the ovary-intact vehicle reference group confirmed that primordial, secondary, and antral follicles, as well as corpora lutea, were sufficiently depleted in the VCD-treated groups, regardless of subsequent hormone therapy treatment. In contrast to our previously published findings (Koebele et al., 2020a), the ovary-intact vehicle reference group had significantly lower primary follicle counts compared to VCD-treated groups. This may be due to a rat strain difference since the F344-NIH strain utilized in our previously published work has since been retired and replaced with the F344-CDF strain. Six single nucleotide polymorphisms (SNPs) that differ between the strains have been detected, although the effect of these SNPs on the F344-CDF phenotypes is not well defined (National Institute on Aging, 2019). Because primary ovarian follicles are not steroidogenic or responsive to gonadotropins, it is unlikely that there would be a major biologically or behaviorally relevant consequence to the increased primary follicle counts observed in the VCD-treated groups herein. It is notable that the extremely low or undetectable numbers of secondary and antral follicles in all VCD-treated groups demonstrate that the ovotoxin successfully halted any remaining primary follicles from transitioning into later stages of growth, and was thus successful at inducing a transitional menopause model.
Combined hormone therapy regimens containing both an estrogen and progestogen appear to reduce or maintain body weight during the menopause transition. Moreover, natural progesterone-only treatment consistently promoted inhibitory effects on uterine proliferation at the dose given. Follicle-deplete rats administered combined E2 plus progesterone therapy showed decreased uterine weights compared to E2-only therapy, again suggesting that natural progesterone administered exogenously attenuated endometrial growth; of note, we also found that progesterone decreased the uterine weight when combined with E2, while the synthetic progestin levonorgestrel did not. A higher dose of levonorgestrel may prevent uterine weight increases with transitional menopause. Of particular clinical relevance, uterine weights from rats treated with either combination hormone regimen did not differ from transitionally menopausal rats without hormone treatment; thus, the tested combined regimens did not yield substantial E2-induced uterine hyperplasia overall.
Collectively, this experiment demonstrates the remarkable variability that hormone therapy options can have on outcomes associated with memory, anxiety, depression, endocrine, body weight, and reproductive tract profiles during the transition to menopause. In accordance with medical societies providing recommendations for care during the menopause transition, our data support the tenet that hormone therapy is not a one-size-fits-all solution (Neves-E-Castro et al., 2015;Stuenkel et al., 2015;Baber et al., 2016;Pinkerton et al., 2017a). Primary indications for treatment and individual health risk factors must be taken into account when prescribing hormone therapy; it is clear that formulation and presence of an intact reproductive tract are key to this equation, despite being historically understudied. The neurobiological, pharmacological, and behavioral effects of E2-alone, progestogen-alone, and combined hormone therapy are complex and, in some cases, task-specific. That levonorgestrel has some androgen receptor activity, but does not have glucocorticoid or anti-mineralocorticoid activity like natural progesterone or other clinically used progestins (Schindler et al., 2003), may play a role in the behavioral phenotypes observed herein. This is particularly important because progesterone-alone had several negative effects on working memory performance in this evaluation, replicating a well-documented effect in the literature in Ovx rats (Chesler and Juraska, 2000;Bimonte-Nelson et al., 2004;Harburger et al., 2007;Sun et al., 2010;Braden et al., 2015). Moreover, progesterone-only, but not levonorgestrel-only, treatment increased androstenedione levels in the current experiment. Given that androstenedione has been shown to detrimentally impact spatial memory in Ovx rats, likely via its aromatization to estrone (Camp et al., 2012;Mennenga et al., 2015c), interactive effects of levonorgestrel with androgen receptors in conjunction with lower circulating androstenedione levels than seen with progesterone treatment may be a putative mechanism through which levonorgestrel mitigates or prevents negative cognitive effects. Moreover, levonorgestrel has also been shown to have some unique effects on insulin secretion when combined with the synthetic estrogen, ethinyl estradiol (Sitruk-Ware and Nath, 2011), indicating that independent or combined administration may alter biological and behavioral outcomes. Levonorgestrel remains a popular progestin prescribed in intrauterine devices, combined oral contraceptives, emergency contraception, and menopausal hormone therapy formulations; the results described here are promising findings, as a favorable hormone therapy regimen should not compromise cognitive health for the individual (and optimally would provide benefits) while fulfilling its function to alleviate other non-cognitive, unwanted menopause symptoms. Continued exploration into the biological underpinnings of levonorgestrel's unique effects on the brain and periphery will provide critical insight for improving health outcomes across multiple stages in the lifespan. Future investigations should consider additional clinically relevant hormone formulations that take into account a more holistic approach to understanding cognitive-behavioral outcomes, including menopause type (Edwards et al., 2019) and individual life history, with the goal to improve healthy life expectancy outcomes.
DATA AVAILABILITY STATEMENT
The data will be made available by the authors upon reasonable request. Requests to access the datasets should be directed to HB-N, bimonte.nelson@asu.edu.
ETHICS STATEMENT
The animal study was reviewed and approved by Arizona State University Institutional Animal Care and Use Committee.
Study on Safe and Anti-collision Measures Towards Expressway of Underpass
Combining the management features of expressways and high-speed railways, this paper analyzes safety and anti-collision measures for an expressway passing beneath an elevated high-speed railway. To ensure the safety of the piers and to prevent out-of-control vehicles from crashing into the piers of the elevated high-speed railway, the anti-collision problem between the expressway and the high-speed railway is studied. The paper proposes that the underpass expressway should be provided with additional safety and anti-collision measures, and a checking calculation of the anti-collision strength of the piers is carried out. This research provides a practical analysis approach for related projects.
Introduction
With the rapid advancement of China's high-speed railway construction, crossings between expressways and high-speed railway bridges are becoming more and more common. At the same time, local economies have developed rapidly, transportation demand has continued to rise, traffic volumes on expressways have kept increasing, and traffic accidents on expressways have also increased. If a traffic accident occurs on a section of an expressway that passes beneath a high-speed railway bridge, it poses a threat to the high-speed railway piers beside the expressway. High-speed railways have the advantages of high speed and high traffic density. To ensure the safe operation of high-speed railway trains, the safety anti-collision measures and the collision-prevention grade of high-speed railway bridge piers are particularly important.
A comprehensive review of related literature at home and abroad shows that the main research topics include: studies of vehicle impact on pier structures [1], [2], studies of vehicles impacting piers and columns [3], [4], research on the lateral impact resistance of reinforced concrete beam-column members [4], [5], [6], [7], [8], and research on the vehicle-impact resistance of piers [9], [10], [11], [12]. This body of work indicates that research on the vehicle-impact resistance of piers is still in its infancy, and many key problems remain unsolved, such as the calculation method for the vehicle impact force. In this paper, anti-collision measures are set up for a high-speed railway bridge beneath which a newly built expressway passes; the anti-collision wall of the expressway and the anti-collision measures for the high-speed railway bridge piers are studied and analyzed together, and a self-compiled program is used for the checking calculation to ensure the safe operation of the expressway and the high-speed railway and the reliability of the anti-collision measures.
Vehicle Impact Force
Where the newly built expressway crosses beneath the high-speed railway bridge, piers No. 709, No. 710, and No. 711 of the bridge may be struck by an out-of-control vehicle. Where piers may be hit, solid and reliable protection works should be provided according to the actual situation, such as baffles, anti-impact frames, and anti-collision guardrails, to prevent the piers from being struck. Where such protection cannot be provided, the impact force of the vehicle on the pier must be considered. According to Article 4.4.7 of the "Basic Code for Design of Railway Bridges and Culverts", the impact force is taken as 1000 kN along the direction of travel and 500 kN in the transverse direction, acting 1.20 m above the road surface.
Anti-Collision Analysis of Bridge Piers
The internal force F at the checked section of the pier is first obtained for the code collision force of 1000 kN acting along the direction of travel. According to the influence-line loading principle, when the pier is subjected to a vehicle collision of magnitude P (kN), the internal force generated at the same section is F' = F × P / 1000; the internal forces of the pier are then calculated from this relationship and checked as follows.
(1) Check according to the concrete strength of the pier. According to Article 5.3.5 of the "Design Specification for Highway Reinforced Concrete and Prestressed Concrete Bridges and Culverts" (JTG D62-2004), the compressive bearing capacity of the normal section of an eccentric compression member with a rectangular section is checked according to the provisions of that article.
(2) Check according to the shear capacity of the pier. When masonry or concrete members are in direct shear, they are checked according to the direct-shear formula of the specification.
The vehicle impact force is a special load; it is combined only with the main forces, and the corresponding increase coefficient is considered when the main forces are combined with the special load.
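The paper mentions that a self-compiled program was used for the checking calculation. A minimal sketch of how such a calculation might be organized is given below; it assumes that the internal forces under the 1000 kN code load and the section capacities (from the code formulas referenced above) have been evaluated separately, and every numerical value is a placeholder for illustration, not a value from the project.

```python
from dataclasses import dataclass

# Internal forces at the checked pier section under the 1000 kN code impact
# applied along the driving direction (placeholder values that would come
# from a separate structural analysis of the pier).
F_SHEAR_1000 = 1000.0    # kN, shear at the pier base under the 1000 kN impact
F_MOMENT_1000 = 6200.0   # kN*m, moment at the pier base (assumed lever arm of 6.2 m)

def scale_by_influence_line(P_kN: float, F_under_1000kN: float) -> float:
    """Influence-line (linear) scaling: F' = F * P / 1000."""
    return F_under_1000kN * P_kN / 1000.0

@dataclass
class SectionDemand:
    M: float  # bending moment (kN*m)
    V: float  # shear force (kN)

def combine_with_impact(main: SectionDemand, P_kN: float) -> SectionDemand:
    """Combine main-load internal forces with the vehicle impact force,
    which is treated as a special load combined only with the main forces."""
    return SectionDemand(
        M=main.M + scale_by_influence_line(P_kN, F_MOMENT_1000),
        V=main.V + scale_by_influence_line(P_kN, F_SHEAR_1000),
    )

def section_ok(d: SectionDemand, M_capacity: float, V_capacity: float) -> bool:
    """True if both the normal-section and direct-shear checks are satisfied."""
    return d.M <= M_capacity and d.V <= V_capacity

# Placeholder main-load demands and section capacities (kN, kN*m)
main_loads = SectionDemand(M=1500.0, V=300.0)
combined = combine_with_impact(main_loads, P_kN=1000.0)
print(combined, section_ok(combined, M_capacity=15000.0, V_capacity=4500.0))
```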
For working condition 1 (the natural condition) and working condition 2 (vehicle impact from a heavily loaded vehicle combined with a rainstorm of 20-year return period), the displacements of the pier tops of the high-speed railway passenger bridge are shown in Table 2. Table 3 presents the calculation results for the bottom section of the pier. According to the cross-section calculations, the strength, eccentricity, and longitudinal and transverse elastic displacements of the pier sections are all within the allowable ranges of the specification when the vehicle impact force is considered.
Collision Prevention Analysis of Pier Foundation
The foundations of piers No. 709-711 of the high-speed railway bridge should be checked for bearing capacity when the vehicle impact force is considered. The method again follows the approach above: the impact force is converted to an equivalent load at the bottom of the pier and included as a special load. Experience shows that, in all design schemes, the foundations of piers No. 709-711 are not controlled by the special load (vehicle impact force). Therefore, under the action of the vehicle impact force, the pier foundations can meet the bearing-capacity requirement.
Protective Design of Protective Wall
The above analysis of the anti-collision strength of the bridge piers shows that the pier foundations can meet the bearing-capacity requirement under the action of the vehicle impact force. However, for the sake of safety, anti-collision walls are set on both sides of the expressway section passing beneath the high-speed railway bridge, which can effectively protect the high-speed railway piers from vehicle impact. Table IV lists the anti-collision levels of bridge guardrails specified in the "Code for Design of Highway Traffic Safety Facilities" (JTG D81). Bored protective piles of Ø1.0 m are set on the shoulders on both sides of the roadbed adjacent to the railway bridge piers in the highway-railway crossing section. The piles are spaced 1.2 m or 1.5 m apart, with a pile length of 15 m, and the pile bodies are made of C50 concrete. The pile tops are connected by a crown beam 1.4 m wide and 1.0 m high, cast of C45 concrete. According to the "Code for Design of Highway Traffic Safety Facilities" (JTG D81), SS-level anti-collision guardrails are installed on the crown beam.
The expressway is divided into two roadways that pass between piers No. 709~710 and No. 710~711 of the high-speed railway bridge. SS-level anti-collision walls are set on both sides of piers No. 709~711, which can effectively prevent runaway vehicles that leave the roadway from striking the high-speed railway bridge piers, as shown in Fig. 3 and Fig. 4. (1) Necessary speed limit, height limit, and lane separation signs shall be set for the section beneath the railway bridge, together with vehicle-spacing confirmation markings and colored deceleration strip markings (as shown in Fig. 5). Meanwhile, reflective warning markings shall be provided on both sides of piers No. 709~711 and on the side of the beam.
Within the section of the expressway passing under the high-speed railway, all vehicles are prohibited from changing lanes, and large vehicles are forbidden to overtake.
Conclusion
1. Through theoretical analysis and calculation, piers no. 709~711 of the high-speed railway bridge meet the safety requirements under the action of the vehicle impact force.
2. In the section of the expressway passing under the high-speed railway bridge, setting SS-level anti-collision walls (extending not less than 30 m toward oncoming traffic), lane markings (solid lane lines), speed-limit signs, colored deceleration belts and other safety facilities is conducive to the safety of the bridge.
Probability matrices, non-negative rank, and parameterizations of mixture models
In this paper we parameterize non-negative matrices of sum one and rank at most two. More precisely, we give a family of parameterizations using the least possible number of parameters. We also show how these parameterizations relate to a class of statistical models, known in Probability and Statistics as mixture models for contingency tables.
Introduction
The study of non-negative matrices with fixed rank has recently attracted a great deal of work, both theoretical and applied. One of the main problems in this field is the so-called "non-negative matrix factorization problem", which can be shortly stated as follows. Given a non-negative matrix $A \in \mathbb{R}^{I \times J}_+$ (where $\mathbb{R}_+$ denotes the set of real non-negative numbers), one has to find an approximation of $A$ as a linear combination of $k$ dyadic products $c_i r_i^t$, where the $c_i$'s and $r_i$'s are vectors with non-negative entries, i.e. $c_i \in \mathbb{R}^I_+$ and $r_i \in \mathbb{R}^J_+$. The rank of a matrix gives the number of rank one matrices, i.e. dyadic products, needed to write the matrix as a sum of dyads, but there are no non-negativity conditions on the vectors of the dyads. The non-negativity constraints make the situation more complex, and one has to work with the non-negative rank of the matrix (see e.g. Cohen and Rothblum (1993)), which is in general bigger than the ordinary rank. Therefore, it is not possible in general to decompose a rank $k$ matrix into the sum of exactly $k$ dyadic products $c_i r_i^t$ where $c_i$ and $r_i$ are non-negative vectors. We will review the main results about non-negative rank in the next section.
In recent literature, a number of results and algorithms for non-negative matrix factorization have been published, see e.g. Lee and Seung (2000). In Catral et al. (2004) special techniques for symmetric tables are presented, while in Ho and Van Dooren (2008) the case of fixed row and column sums is analyzed, with applications to stochastic matrices. In Finesso and Spreij (2006), the authors discuss some connections between the factorization problem and the notion of I-divergence, which has a well known statistical role, see e.g. Dacunha-Castelle and Duflo (1986) and Pardo (2005).
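As a concrete illustration of the factorization problem (not tied to any specific algorithm in the papers cited above), the short Python sketch below uses scikit-learn's NMF to approximate a small non-negative matrix by a sum of $k = 2$ non-negative dyads; the matrix and all numbers are made-up examples.

```python
# Illustrative non-negative matrix factorization with k = 2 dyads.
import numpy as np
from sklearn.decomposition import NMF

A = np.array([[0.10, 0.05, 0.05],
              [0.05, 0.20, 0.15],
              [0.05, 0.15, 0.20]])

k = 2
model = NMF(n_components=k, init="nndsvda", random_state=0, max_iter=2000)
W = model.fit_transform(A)   # I x k, non-negative
H = model.components_        # k x J, non-negative

approx = W @ H               # sum of k non-negative dyadic products
print("ordinary rank of A:", np.linalg.matrix_rank(A))
print("reconstruction error:", np.linalg.norm(A - approx))
```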
From the point of view of Probability, non-negative matrices are a natural tool in the analysis of two-way contingency tables. A two-way contingency table A = (a i,j ) collects data from two categorical random variables measured on n subjects. Let us suppose that the first variable X has I levels 1, . . . , I and the second variable Y has J levels 1, . . . , J. The element a i,j is the count of subjects with X = i and Y = j. Therefore, A is an I × J matrix with non-negative integer entries.
A joint probability distribution for the pair $(X, Y)$ is a probability matrix with $I$ rows and $J$ columns, $P = (p_{i,j})$, of non-negative real numbers such that $\sum_{i,j} p_{i,j} = 1$. A statistical model $M$ for $I \times J$ contingency tables is a set of probability distributions, i.e. a subset of the simplex
$$\Delta = \Big\{ P = (p_{i,j}) : p_{i,j} \ge 0, \ \sum_{i,j} p_{i,j} = 1 \Big\}. \qquad (1)$$
One of the most widely used models for two-way contingency tables is the independence model, see e.g. Agresti (2002). It is defined through the vanishing of all $2 \times 2$ minors of the generic matrix, i.e. by the equations
$$p_{i,j}\, p_{h,l} - p_{i,l}\, p_{h,j} = 0 ; \qquad (2)$$
thus, the points of the independence model are rank 1 matrices. Recent developments in Statistics have shown the relevance of probability models whose points are matrices of rank at most 2. One example in this direction, based on a special symmetric matrix, is the so-called "100 Swiss francs problem", see Sturmfels (2008). This problem comes from Computational Biology, where it is useful to analyze the alignment of DNA sequences, see Pachter and Sturmfels (2005). Although this particular problem has been solved in Gao et al. (2008), the study of fixed-rank probability matrices is mainly unexplored.
As the sum of k matrices with rank 1 has rank at most k, the matrices which can be written as the sum of k dyadic products encode the notion of mixture of k distributions from independence models.
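A minimal sketch of this construction: the script below builds a mixture of two independence models from probability vectors and checks that the result is a probability matrix of rank at most 2. All parameter values are arbitrary and chosen only for illustration.

```python
import numpy as np

# Mixture weights and the row/column distributions of the two components.
alpha = np.array([0.3, 0.7])
c = np.array([[0.2, 0.5, 0.3],              # c_1, distribution over the I = 3 rows
              [0.6, 0.1, 0.3]])             # c_2
r = np.array([[0.25, 0.25, 0.25, 0.25],     # r_1, distribution over the J = 4 columns
              [0.10, 0.20, 0.30, 0.40]])    # r_2

# P is a sum of two non-negative dyadic products, hence rank at most 2.
P = sum(alpha[h] * np.outer(c[h], r[h]) for h in range(2))
print(P.sum())                   # 1.0: P is a probability matrix
print(np.linalg.matrix_rank(P))  # at most 2
```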
In Probability and Statistics it is interesting not only to study the approximation problem mentioned above, but also to have a parametrization of the models. While for rank 1 matrices the parametrization is easy, see e.g. Agresti (2002), the problem becomes difficult in the case of higher nonnegative ranks. Already for k = 2, in Fienberg et al. (2010) it is shown that the model is not identifiable, meaning that different parameter values lead to the same probability distribution.
This issue is a well known problem in statistical modelling called "parameter redundancy", see Catchpole and Morgan (1997) and Catchpole et al. (1998). The detection of parameter redundancy has a major relevance in maximum likelihood estimation, where the parameters of a statistical models are estimated through the maximization of a real-valued function called "likelihood function", see e.g. Agresti (2002). In the papers mentioned above, the authors propose a purely analytical technique to detect the parameter redundancy of a statistical model, by computing the rank of the Jacobian matrix of a specific function. The redundancy is checked through Symbolic Algebra computations and the problem of redundancy is overcome via additional linear constraints on the parameters.
In this paper, we propose a method which uses linear algebra to make the maximization problem simpler by reducing the number of parameters involved. Then the usual analytic techniques can be used in a more effective way.
The paper is organized as follows: in Section 2 we introduce some definitions and recall some basic facts. In Section 3 we study the problem of parameter redundancy from a geometric point of view. In Section 4 we show a possible application of our results.
Definition and background material
Let P = (p i,j ) be a probability matrix with I rows and J columns, i.e. P ∈ ∆. In order to simplify the formulae, let us suppose that I ≤ J. Let k be an integer, 1 ≤ k ≤ I.
Definition 2.1. A probability matrix $P$ is the mixture of $k$ independence models if it can be written in the form
$$P = \sum_{h=1}^{k} \alpha_h\, c_h r_h^t , \qquad (3)$$
where for all $h = 1, \dots, k$
• $\alpha_h \in \mathbb{R}_+$ and $\sum_h \alpha_h = 1$;
• $r_h \in \mathbb{R}^J_+$ and $\sum_j r_h(j) = 1$;
• $c_h \in \mathbb{R}^I_+$ and $\sum_i c_h(i) = 1$.
Definition 2.1 contains a simple parametric form of the probability distribution which has an intuitive probabilistic counterpart. Let us suppose that we have $k$ pairs of dice, say $(D_{1,r}, D_{1,c}), \dots, (D_{k,r}, D_{k,c})$, where $D_{h,r}$ has $J$ facets and distribution $r_h$, and $D_{h,c}$ has $I$ facets and distribution $c_h$. We choose a pair of dice with probability distribution $\alpha = (\alpha_1, \dots, \alpha_k)$ and we roll the selected pair of dice. The resulting distribution is just a mixture distribution as in Eq. (3).
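The dice interpretation can be checked by simulation. The sketch below, with arbitrary assumed distributions, samples a pair of dice according to $\alpha$, rolls it, and compares the empirical contingency table with the mixture of Eq. (3).

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([0.3, 0.7])
c = np.array([[0.2, 0.5, 0.3], [0.6, 0.1, 0.3]])                 # I = 3 facets
r = np.array([[0.25, 0.25, 0.25, 0.25], [0.1, 0.2, 0.3, 0.4]])   # J = 4 facets

n = 200_000
counts = np.zeros((3, 4))
pairs = rng.choice(2, size=n, p=alpha)        # which pair of dice is rolled
for h, m in zip(*np.unique(pairs, return_counts=True)):
    i = rng.choice(3, size=m, p=c[h])         # outcome of the "row" die
    j = rng.choice(4, size=m, p=r[h])         # outcome of the "column" die
    np.add.at(counts, (i, j), 1)

P_hat = counts / n
P = sum(alpha[h] * np.outer(c[h], r[h]) for h in range(2))
print(np.abs(P_hat - P).max())   # small: the empirical table approximates the mixture
```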
As a Linear Algebra counterpart, the definition above is strictly related with the notion of non-negative rank of a matrix. For more on non-negative rank see, e.g., Cohen and Rothblum (1993). We recall here some useful facts.
Definition 2.2. Given a matrix $P$ with real non-negative elements, the non-negative rank of $P$ is the smallest number of non-negative column vectors $v_1, \dots, v_k$ such that each column of $P$ has a representation as a linear combination of $v_1, \dots, v_k$ with non-negative coefficients. The non-negative rank of a matrix $P$ is denoted by $\mathrm{rk}_+(P)$.
The definition above has an equivalent formulation in terms of linear combinations of row vectors. The non-negative rank is of special relevance for Probability and Statistics: in fact, $\mathrm{rk}_+(A)$ is the number of dyadic products of non-negative vectors that we can use to represent $A$. In the following proposition we summarize the main properties of the non-negative rank; the reader can refer to Cohen and Rothblum (1993) for proofs and further details.

Proposition 2.3. Let $P$, $Q$ be two non-negative matrices with $I$ rows and $J$ columns.

These properties show that the non-negative rank behaves similarly to the classical rank. In general, the rank and the non-negative rank are different: there exist matrices which have rank 3 but non-negative rank 4.
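For concreteness, a standard example of this phenomenon, usually attributed to Cohen and Rothblum (1993), can be examined numerically; the specific matrix the authors display is not reproduced here, so the one below is only the textbook example, not necessarily theirs.

```python
import numpy as np
from sklearn.decomposition import NMF

# Classical example: rank 3, but non-negative rank 4.
M = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]], dtype=float)

print(np.linalg.matrix_rank(M))   # 3

# Trying to approximate M with only 3 non-negative dyads: the residual stays
# strictly positive, consistent with rk_+(M) = 4.
model = NMF(n_components=3, init="random", random_state=0, max_iter=5000)
W = model.fit_transform(M)
print(np.linalg.norm(M - W @ model.components_))   # > 0
```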
Among the cases where the rank and the non-negative rank coincide, there are the following special classes of matrices Cohen and Rothblum (1993).
Proposition 2.4. Let P be a non-negative matrix with I rows and J columns.
(a) If $\mathrm{rk}(P) \le 2$ then $\mathrm{rk}_+(P) = \mathrm{rk}(P)$.
In what follows we will heavily use part (a) of Proposition 2.4. Hence, for the convenience of the reader, we produce a self-contained proof of this fact for probability matrices.
Proof. If $\mathrm{rk}(P) = 1$ then the proof is trivial; thus we will assume $\mathrm{rk}(P) = 2$. Denote by $C_i$, $i = 1, \dots, J$, the columns of $P$. We will show that there exist two columns, say $\bar C$ and $\tilde C$, such that $C_i = t_i \bar C + s_i \tilde C$ for all $i$ and the coefficients $t_i$ and $s_i$ are non-negative.

Clearly, as $P$ has rank at most two, all columns are linear combinations of two fixed ones. Without loss of generality, we may assume that $C_1$ and $C_2$ are linearly independent. Thus for any other column we have
$$C_i = t_i C_1 + s_i C_2 .$$
If all the pairs $(t_i, s_i)$ are non-negative we are done. Otherwise, consider in the plane $\mathbb{R}^2$ the rays spanned by the pairs $(t_i, s_i)$, let $(\bar t, \bar s)$ and $(\tilde t, \tilde s)$ be the extremal rays, and denote by $\bar C$ and $\tilde C$ the corresponding columns. We recall that the extremal rays are the minimal generators of the convex cone spanned by the pairs $(t_i, s_i)$. Now consider the angle $\varphi$ between the extremal rays containing at least one positive semi-axis. If $\varphi < \pi$ radians then we are done by the addition rule for vectors in the plane, and all the columns are non-negative linear combinations of $\bar C$ and $\tilde C$. If $\varphi = \pi$ radians we get a contradiction, as $\bar C + \tilde C = 0$ and hence $C_1$ and $C_2$ would be proportional. If $\varphi > \pi$ we again get a contradiction: a non-negative combination of the extremal rays would lie in the negative quadrant, hence a non-negative linear combination of $\bar C$ and $\tilde C$ would be non-positive and thus equal to zero, $P$ being non-negative. Thus $C_1$ and $C_2$ would again be proportional.
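The argument can be mimicked numerically. The sketch below builds an arbitrary rank-2 non-negative matrix whose first two columns are not extremal, recovers the extremal rays among the pairs $(t_i, s_i)$, and verifies that every column is a non-negative combination of the two extremal columns. It assumes (true for this example) that the cone of coefficient pairs does not wrap around the branch cut of atan2.

```python
import numpy as np

# A rank-2 non-negative matrix: every column is a combination of u and v,
# but the first two columns are interior (not extremal).
u = np.array([1.0, 0.0, 1.0])
v = np.array([0.0, 1.0, 1.0])
P = np.column_stack([0.5 * u + 0.5 * v, 0.3 * u + 0.7 * v, u, v])

# Coordinates (t_i, s_i) of every column in the basis (C_1, C_2).
basis = P[:, :2]
coords = np.linalg.lstsq(basis, P, rcond=None)[0]   # 2 x J; some entries are negative

# Extremal rays of the cone spanned by the (t_i, s_i): smallest and largest angle.
angles = np.arctan2(coords[1], coords[0])
lo, hi = np.argmin(angles), np.argmax(angles)

# Re-express all columns in terms of the two extremal columns.
new_coords = np.linalg.lstsq(P[:, [lo, hi]], P, rcond=None)[0]
print(np.all(new_coords >= -1e-9))   # True: non-negative combinations
```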
Parameters and parameterizations
Often in Probability and in Statistics, models are described using parameters. This description can be easily expressed in geometric terms. Given the variety representing the model, we look for a surjective function onto it. More precisely, if $M$ is the model, a surjective function $U \subseteq \mathbb{R}^n \to M$ gives a parametrization of $M$. If the function we find is described by rational functions and its image is dense in the model, we say that the map is dominant and that we describe the model up to a measure zero set.
Given a model M there are two basic questions: Does there exist a dominant map R n −→ M? What is the smallest n for which such a map exists? Answering the first question is a deep and difficult problem in Geometry called "the unirationality problem", see (Harris, 1992, page 87). The second question is difficult too, but we can easily give a bound on n using the dimension of M, namely we must have n ≥ dim M.
When we have a parametrization of a model M such that n = dim M we say that the parametrization is non-redundant, or that the parameters are non-redundant. It is not always possible to find a non-redundant parametrization. But, in some interesting situations, it is possible to decompose the model M as union of subvarieties and for each of this one can find a non-redundant parametrization. We will give examples of these phenomena in the case of rank k and rank 2 mixture models.
A parametrization for the rank k matrices
Given natural numbers $I \le J$ we consider the following family of matrices with rank at most $k$:
$$M_k = \{ P \in \Delta : \mathrm{rk}(P) \le k \}.$$
As the elements of $M_k$ have rank at most $k$, they can be written as a linear combination of at most $k$ rank one probability matrices. More precisely, if $P \in M_k$ then
$$P = \sum_{i=1}^{k} \alpha_i\, c_i r_i^t \qquad (4)$$
for a choice of scalars $\alpha_i$ and of non-negative column vectors $c_i$ and $r_i$. Hence, we can represent elements of $M_k$ using $k(I + J) + k$ parameters. In other words, (4) gives a surjective polynomial map $\mathbb{R}^{k(I+J)+k} \to M_k$.
We recall that a map between algebraic varieties, say V 1 −→ V 2 , can be a parametrization, only if dim V 1 ≥ dim V 2 . To know whether the parameters we are using are necessary or redundant, we need to know the dimension of M k and compare it with k(I + J) + k.
Proposition 3.1. With the notation above, we have $\dim M_k \le k(I + J) - k^2 - 1$.

Proof. The dimension of the family of complex $I \times J$ matrices of rank at most $k$ is well known to be $k(I + J) - k^2$, see Harris (1992). Imposing that the sum of all the entries is 1 and taking real matrices gives the bound.
Proposition 3.1 shows that the parametrization (4) is redundant: we are using more parameters than the best possible value. Actually, it is not possible to use $k(I + J) - k^2 - 1$ parameters to get all the elements of $M_k$. In the case $k = 2$ we will show how to decompose $M_k$ into open subsets, each of which can be described using the optimal number of parameters.
Non-redundant parameterizations of probability models for k = 2
In this section we only deal with matrices of rank at most two. Hence we fix $k = 2$ and we set $M = M_2$. In this situation, $\dim M \le 2I + 2J - 5$ and we will use this number of parameters to describe $M$, hence finding a non-redundant parametrization. Set $D = 2I + 2J - 5$. We will construct maps
$$f_{j_1,j_2}\colon U'_{j_1,j_2} \subseteq \mathbb{R}^D \longrightarrow M, \qquad 1 \le j_1 < j_2 \le J,$$
with the property that the union of the images of the $f_{j_1,j_2}$ is the whole $M$, i.e. $\bigcup \mathrm{Im}(f_{j_1,j_2}) = M$. Each map is constructed in such a way that $\mathrm{Im}(f_{j_1,j_2})$ is contained in the open subset of the matrices with the $j_1$-th and the $j_2$-th columns linearly independent. We give an explicit description only for $f_{1,2}$, the other cases being completely analogous. To define $f_{j_1,j_2}$ one simply moves elements in the row vectors: in the first row vector the $1 - b_i$ element is moved to position $j_1$ and the $0$ is moved to position $j_2$; similarly for the second row vector.
Remark 3.2. With standard computations one can easily check that for all j 1 and j 2 , j 1 < j 2 . Now we analyze the functions f j 1 ,j 2 in order to derive some useful properties. We work with f 1,2 and all the results trivially extend to the other functions.
Lemma 3.3. Let $P \in M$ be the following matrix, where the coefficients $x_i$, $y_i$, $s_i$ and $t_i$ are non-negative. If the first two columns of $P$ are non-zero, we set the corresponding parameters, and also $\alpha = 1$ and $c_i = d_i = 0$ for all $i$. Otherwise, we set $\alpha = 0$ and $a_i = b_i = 0$ for all $i$.

Proof. The definition of $P'$ and the condition on the entries of $P$ yield that $P' \in U'_{1,2}$. A straightforward computation shows that $f_{1,2}(P') = P$. The two expressions for the parameter $\alpha$ coincide as $P$ is a matrix with sum one.
Finally we can show that the maps $f_{j_1,j_2}$ give a parametrization of $M$.

Corollary 3.4. The variety $M$ is covered by the images of the functions $f_{j_1,j_2}$; more precisely, $\bigcup_{1 \le j_1 < j_2 \le J} \mathrm{Im}(f_{j_1,j_2}) = M$.

Proof. Let $P \in M$. By Lemma 2.5 we know that $P$ can be written as in the statement of Lemma 3.3 for some columns $C_{j_1}$ and $C_{j_2}$, and hence $P \in \mathrm{Im}(f_{j_1,j_2})$.
An application
It is often interesting to find maxima and minima of a function over a variety. As an example consider the well known likelihood function. We will use the parametrization we found in the previous sections to propose a strategy to study extremal points on M. The advantage of this approach is that we are going to study functions involving the least possible number of variables as the parametrization we found is non-redundant.
Remark 4.1. Given a function $F\colon M \to \mathbb{R}$ we consider the composite functions $F \circ f_{j_1,j_2}$. Consider a point $P = f_{j_1,j_2}(P') \in M$ such that $P$ is in the interior of $\mathrm{Im}(f_{j_1,j_2})$. Then $P$ is a maximum/minimum for $F$ if and only if $P'$ is a maximum/minimum for $F \circ f_{j_1,j_2}$.

Using Remark 4.1 we can apply the usual gradient and Hessian matrix approach to detect extremal points of $F$ lying in the interior of one of the $\mathrm{Im}(f_{j_1,j_2})$. Hence it is useful to have the following:

Lemma 4.2. If $P'$ is in the interior of $U'_{j_1,j_2}$ then $f_{j_1,j_2}(P')$ is in the interior of $\mathrm{Im}(f_{j_1,j_2})$.
Proof. We produce a proof for $j_1 = 1$ and $j_2 = 2$, but a completely analogous argument works in the general situation. Given $P'$ we compute $P = f_{1,2}(P')$ and thus we write $P$ as in the statement of Lemma 3.3. Moreover, as $P'$ is in the interior of $U'_{1,2}$, the coefficients $t_i$ and $s_i$ in $P$ are strictly positive. Now consider a neighborhood $U$ of $P$. Given a matrix $Q \in U$ we can write it in the form of Lemma 3.3 by computing the coefficients $t_i$ and $s_i$. This is done by solving linear systems of equations having the elements of $Q$ as coefficients. Hence, it is possible to choose a suitable $U$ such that for all the matrices in $U$ the coefficients $t_i$ and $s_i$ are strictly positive. In conclusion, the formulae of Lemma 3.3 produce a map $g_{1,2}\colon U \to U'_{1,2}$. It is straightforward to see that $g_{1,2}$ is a continuous map on $U$ and that the map $f_{1,2} \circ g_{1,2}$ is the identity map. Now we take a neighborhood of $P'$, say $U' \subseteq f_{1,2}^{-1}(U)$. Then $g_{1,2}^{-1}(U') \subseteq \mathrm{Im}(f_{1,2})$ is a neighborhood of $P$, and we are done.
Lemma 4.2 shows that we only have to worry about points of $M$ which are images of boundary points of $U'_{j_1,j_2}$. Thus it is useful to have the following description.

Lemma 4.3. Let $P' \in U'_{j_1,j_2}$ and let $P = f_{j_1,j_2}(P')$. Then the following hold:
1. if any of the coefficients $a_i$ or $c_i$ is zero, then $P$ is a point of the boundary of $M$;
2. if $a_i = 1$ or $b_i = 1$, then $P$ is a point of the boundary of $M$;
3. if $\alpha = 0$ or $\alpha = 1$, then $P$ is a rank one matrix;
4. if any of the coefficients $b_i$ or $d_i$ is zero, then $P$ has at least two proportional columns;
5. if $a_i = 1$ or $b_i = 1$, then $P$ has at least two proportional columns.
Proof. For (1) and (2) it is enough to notice that P has some zero element. Hence a neighborhood of P contains matrices with negative entries. Thus P is on the boundary of M. The other cases are obtained by direct computations.
By Lemma 4.3 we see that the composite map $F \circ f_{j_1,j_2}$ will detect maxima and minima of $F$ if these extremal points do not have rank one, or if they have rank two and do not have two proportional columns. In many situations of interest rank one matrices can be efficiently treated, e.g. for the likelihood function. Rank two matrices with proportional columns can be treated using our parametrization in a subtler way.
Lemma 4.4. Let $P = f_{j_1,j_2}(P'_{j_1,j_2})$ be a rank two matrix with at least two proportional columns. Then a neighborhood of $P$ in $M$ can be covered using images of neighborhoods of $P'_{j_1,j_2}$ in $U'_{j_1,j_2}$ for different pairs $(j_1, j_2)$.

Proof. Given $P$, choose two independent columns, say the $j_1$-th and the $j_2$-th. As $P$ has proportional columns, when it is written as in Lemma 3.3 some of the coefficients $t_i$ and $s_i$ vanish. Hence, in each neighborhood of $P$ there will be matrices requiring negative values of the coefficients $t_i$ or $s_i$. Then there is no neighborhood where the formulae of the Lemma can be applied to get an inverse of $f_{j_1,j_2}$, and hence we cannot reproduce the argument of Lemma 4.2. But we can find a neighborhood of $P'_{j_1,j_2}$, say $W'_{j_1,j_2} \subseteq U'_{j_1,j_2}$, such that there exists an inverse of $f_{j_1,j_2}$ on $f_{j_1,j_2}(W'_{j_1,j_2})$, although this is not a neighborhood of $P$. By Lemma 2.5 we see that the $f_{j_1,j_2}(W'_{j_1,j_2})$ cover a neighborhood of $P$ as $(j_1, j_2)$ varies, and we are done.
We can now describe our strategy. Given a function $F\colon M \to \mathbb{R}$ we can look for maxima and minima of $F$ in the following way:
1. study $F$ on rank one matrices using an ad hoc method; when $F$ is the likelihood function, the problem is quite simple, see e.g. Agresti (2002);
2. consider the functions $F \circ f_{j_1,j_2}$ and compute their maxima and minima on $U'_{j_1,j_2}$ for all $1 \le j_1 < j_2 \le J$ (notice that these computations are as simple as they could be, as the least number of variables is involved; see the sketch after this list); let $Q$ be one of the points we found;
3. if $Q$ is in the interior of one of the $U'_{j_1,j_2}$, then $f_{j_1,j_2}(Q)$ is a maximum or minimum of $F$;
4. if $Q$ lies on the boundary of one of the $U'_{j_1,j_2}$ and $f_{j_1,j_2}(Q)$ is on the boundary of $M$, then $f_{j_1,j_2}(Q)$ is a maximum or minimum of $F$;
5. if $Q$ lies on the boundary of one of the $U'_{j_1,j_2}$ and $f_{j_1,j_2}(Q)$ has rank one, we have already treated this case in the first step;
6. if $Q$ lies on the boundary of one of the $U'_{j_1,j_2}$ and $f_{j_1,j_2}(Q)$ has two proportional columns, then $Q$ will lie on the boundary of at least two of the $U'_{j_1,j_2}$; for each pair $(j_1, j_2)$ such that $Q$ is on the boundary of $U'_{j_1,j_2}$ we have to compare the extremal behavior of the functions $F \circ f_{j_1,j_2}$: if these behaviors agree, then $f_{j_1,j_2}(Q)$ is a maximum/minimum of $F$, otherwise it is not.
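As a rough numerical illustration of step 2: the explicit maps $f_{j_1,j_2}$ are not reproduced above, so the sketch below falls back on the redundant parametrization (4), re-parametrized through softmax to stay in the simplex, and maximizes a multinomial log-likelihood with scipy. The table of counts is invented, and this is only a sketch of the optimization step, not the authors' non-redundant procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import softmax

# Observed I x J contingency table (illustrative counts).
N = np.array([[30, 10,  5,  5],
              [10, 40, 15,  5],
              [ 5, 15, 35, 25]])
I, J, k = N.shape[0], N.shape[1], 2

def unpack(theta):
    """Map unconstrained parameters to (alpha, c, r) via softmax."""
    a = softmax(theta[:k])
    c = softmax(theta[k:k + k * I].reshape(k, I), axis=1)
    r = softmax(theta[k + k * I:].reshape(k, J), axis=1)
    return a, c, r

def neg_log_lik(theta):
    a, c, r = unpack(theta)
    P = sum(a[h] * np.outer(c[h], r[h]) for h in range(k))
    return -np.sum(N * np.log(P + 1e-12))

theta0 = np.random.default_rng(0).normal(size=k + k * I + k * J)
res = minimize(neg_log_lik, theta0, method="L-BFGS-B")
alpha, c, r = unpack(res.x)
P_hat = sum(alpha[h] * np.outer(c[h], r[h]) for h in range(k))
print(res.fun, P_hat.sum())   # fitted value; P_hat sums to 1
```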
In this paper we only considered matrices of rank at most two. For higher values of the rank the situation gets much more involved and almost impossible to treat. For example, it is not even known how to effectively compute the non-negative rank of a matrix. But, some preliminary results in Dong et al. (2009) suggest that matrices with non-negative rank different from the ordinary rank are exceptional, i.e. they form a zero-measure set. This observation can be of some help to try and extend our approach.
Inhibition of pituitary tumors in Rb mutant chimeras through E2f4-loss reveals a key suppressive role for the pRB/E2F pathway in urothelium and ganglionic carcinogenesis
The retinoblastoma protein pRB suppresses tumorigenesis largely through regulation of the E2F transcription factors. E2F4, the most abundant E2F protein, is thought to act in cooperation with pRB to restrain cell proliferation. In this study, we analyze how loss of E2f4 affects the tumorigenicity of pRB-deficient tissues. Since Rb-/-;E2f4-/- germline mice die in utero, we generated Rb-/-;E2f4-/- chimeric animals to allow examination of adult tumor phenotypes. We found that loss of E2f4 had a differential effect on known Rb-associated neuroendocrine tumors. It did not affect thyroid and adrenal glands tumors but partially suppressed lung neuroendocrine hyperplasia. The most striking effect was in the pituitary where E2F4-loss delayed the development, and reduced the incidence, of Rb mutant tumors. This tumor suppression increased the longevity of the Rb-/-;E2f4-/- chimeric animals allowing us to identify novel tumor types. We observed ganglionic neuroendocrine neoplasms, lesions not previously associated with mutation of either Rb or E2f4. Moreover, a subset of the Rb-/-;E2f4-/- chimeras developed either low or high-grade carcinomas in the urothelium transitional epithelium supporting a key role for Rb in bladder cancer.
INTRODUCTION
The retinoblastoma tumor suppressor gene RB is mutated in approximately 30% of all human cancers and in more than 90% of retinoblastomas, osteosarcomas and small cell lung carcinomas (Weinberg, 1995). pRB belongs to the family of pocket proteins that includes p107 and p130. These two proteins share structural and functional similarities with pRB, but are rarely mutated in human tumors (Du and Pogoriler, 2006;Wikenheiser-Brokamp, 2006). A large part of the tumor suppressor activity of pRB derives from its ability to interact with the E2F transcription factors and, together with the other pocket proteins, control the balance between quiescence and proliferation (Harbour and Dean, 2000). E2Fs control the expression of genes crucial for cell cycle re-entry, DNA replication and mitosis. pRB binds to the E2Fs in its active under-phosphorylated form, and inhibits the transcription of E2F target genes through two distinct mechanisms (Dyson, 1998;Trimarchi and Lees, 2002). The first involves sequestration of E2F1, 2, and 3, and inhibition of their transcriptional activity, thereby preventing progression from the G1 to the S phase of cell cycle. The second involves formation of E2F4-or E2F5-pocket protein complexes that bind to E2F-responsive promoters and actively repress their transcription, thereby promoting quiescence. Consistent with these dual roles, Rb -/-;p107 -/-;p130 -/- (Dannenberg et al., 2000;Sage et al., 2000) and E2f4 -/-;E2f5 -/- (Gaubatz et al., 2000) mouse embryonic fibroblasts (MEFs) fail to respond to a variety of growth inhibitory signals, while MEFs lacking E2f1, E2f2, and/or E2f3 have impaired proliferative capacity (Humbert et al., 2000b;Wu et al., 2001). While these results have led to the designation of E2F1, E2F2 and E2F3 as "activators" and E2F4 and E2F5 as "repressors", accumulating evidence suggest that this division is not so clear-cut especially with regard to E2F4.
E2F4 is ubiquitously expressed throughout cell cycle, and accounts for most of the E2F endogenous activity (Moberg et al., 1996). E2F4 has a transactivation domain but it is primarily localized to the cytoplasm in its free form due to the presence of strong nuclear export signals and thus its transcriptional activity is restrained (Gaubatz et al., 2001;Verona et al., 1997). In the G0/G1 phase of cell cycle E2F4, by virtue of its interaction with the pocket proteins, accumulates in the nucleus where chromatin immunoprecipitation studies suggest that these E2F/pocket protein complexes play a major role actively repressing E2Ftarget genes by recruiting histone deacetylases Ren et al., 2002). In agreement with a function of E2F4 in the G0/G1 phase of cell cycle, E2f4 null mice often die shortly after birth with defects in terminal differentiation including craniofacial, respiratory epithelium abnormalities and altered hematopoietic lineages that may result from an inability to establish quiescence (Humbert et al., 2000a;Rempel et al., 2000). Concordantly, E2f4 -/-;E2f5 -/-MEFs have a normal proliferation capacity but are unable to arrest in G1 in response to growth inhibitory signals (Gaubatz et al., 2000). These observations all fit with the hypothesis that E2F4 is a repressive E2F. However, analysis of E2F's role in the context of Rb mutant tumors challenges this conclusion. Rb +/mice die from intermediate lobe pituitary tumors and develop c-cell thyroid tumors at high frequency (Clarke et al., 1992;Jacks et al., 1992). Loss of E2f4 suppresses development of both tumor types and thus, significantly expands the lifespan (Lee et al., 2002). There are a number of possible explanations for E2F4 apparent oncogenic activity. First, E2F4 could behave as a trascriptional activator in the context of these tumor cells. Second, there is evidence that E2F4 may influence tumorigenesis in a indirect manner: we found that the absence of E2F4 in Rb -/cells allows p107 and p130 to associate with E2F1, E2F2 and E2F3 and presumably substitute for pRB in preventing these activator E2Fs from promoting proliferation and therefore tumorigenesis (Lee et al., 2002). Third, as the Rb +/-;E2f4 -/germline mouse model requires Rb-loss of heterozygosity (LOH) for tumorigenesis, it is possible that E2f4deficiency reduces the frequency of Rb LOH or the viability of the resulting Rb -/-;E2f4 -/-cells. Finally, in the Rb +/-;E2f4 -/animals, tumor onset is analyzed not in a wildtype context, but in tissues lacking E2f4, a situation that does not reflect the normal tumor environment and may cause non-cell autonomous effects on Rb-deficient tumor growth. These four models are not mutually exclusive.
To analyze tumorigenesis induced by combined loss of E2f4 and Rb, in this study we generate Rb -/-;E2f4 -/chimeric mice. This model system overcomes the lethality of the Rb -/-;E2f4-/-germline animals and therefore eliminates the requirement for Rb LOH. Moreover, the stochastic nature of the chimeric system allows for studying the effects of gene mutation in a wildtype context and importantly, due to its mosaic nature, to identify phenotypes that would unlikely be uncovered by the use of a tissue-specific conditional system. Consistent with previous observations in Rb +/-;E2f4 -/germline animals, we found that loss of E2f4 suppresses formation of pituitary tumors that are the predominant cause of death of the Rb -/chimeras (Parisi et al., 2007;Williams et al., 1994) and therefore greatly expands the lifespan. This extended longevity allowed the identification of two novel tumor types, ganglionic neuroendocrine neoplasms and papillary urethelial carcinomas. Altogether our observations indicate that E2F4 and pRB functionally interact in specific neuroendocrine tissues and establish a role for these proteins in the urogenital epithelium.
Isolation of mutant E2f4 -/-;Rb -/-ES cells and analysis of their contribution to embryonic and adult tissues
We have previously shown that E2f4-loss suppresses the formation of pituitary and thyroid tumor in Rb +/germline animals (Lee et al., 2002). To study the contribution of E2f4 to the tumors that arise in Rb null tissues, we generated E2f4 -/-;Rb -/chimeric mice. To obtain these animals, we intercrossed E2f4 +/-;Rb +/-Rosa26 germline mutant animals and isolated E2f4 -/-;Rb -/-Rosa26 embryonic stem (ES) cells by de novo derivation. Two E2f4 -/-;Rb -/-Rosa26 ES cell lines were injected into blastocysts giving rise to E2f4;Rb chimeras. We screened E18.5 stage embryos for the presence of β-galactosidase activity (expressed from the Rosa26 allele) and found that both cell lines were able to contribute to the entire embryo with high efficiency indicating that the ES cells were pluripotent ( Figure 1a, data not shown). E2f4 -/-;Rb -/-ES cells also contributed to a variety of adult tissues (Figure 1b, data not shown). However, we observed that on average, our cohort of adult Rb -/-;E2f4 -/chimeric animals, had a lower degree of chimerism than the Rb -/chimeric mice as judged by coat color (Table 1, Parisi et al., 2007). Since our Rb -/-;E2f4 -/-ES cells are pluripotent, and we know that E2f4 -/animals can be viable (Humbert et al., 2000a;Rempel et al., 2000), these findings suggest that additional loss of E2f4 decreases the viability of Rb -/cells in chimeric mice.
E2f4-loss suppresses pituitary tumors in Rb -/chimeric mice
To study the effects of E2f4-loss in Rb tumorigenesis we aged our cohort of Rb -/-;E2f4 -/chimeric animals and compared them to Rb -/chimeric mice that we generated in parallel (Parisi et al., 2007). We found a striking difference in the lifespan of the Rb -/-;E2f4 -/versus Rb -/chimeras. Rb -/chimeric animals die by the age of 9 months due to pituitary tumors (Parisi et al., 2007). In stark contrast, after nine months more than 70% of the Rb -/-;E2f4 -/chimeras were still alive (Figure 2a, b). Accordingly, we found that less than half of the Rb -/-;E2f4 -/chimeric mice developed pituitary tumors and the affected animals were all greater than ten months of age ( Figure 2b, Table 1). Histologically, the pituitary tumors that do develop in the Rb -/-;E2f4 -/animals are similar to the ones found in the Rb -/chimeras. They originate from the pars intermedia of the pituitary, have the morphology of adenocarcinomas, and all derive from Rb -/-;E2f4 -/mutant cells as shown by β-galactosidase staining (Figure 3a, b). Thus, E2f4-loss does not change the nature of the pituitary tumors that arise in the Rb -/chimeric mice, but rather delays their development and decreases their incidence. Analysis of pituitary tumor samples recovered from Rb -/and Rb -/-;E2f4 -/chimeras at necropsy showed that there was a general absence of both apoptotic and proliferative cells (as judged by staining for TUNEL and Ki67 respectively) in both genotypes (data not shown). Thus, it remains an open question how E2F-4 loss suppresses pituitary tumor development. Since we detect pituitary tumors in Rb-deficient mice with chimerism as low as 5% (Parisi et al., 2007), we can be sure that the suppression of tumor development in Rb -/-;E2f4 -/chimeras is not merely the result of insufficient contribution of Rb -/-;E2f4 -/cells to the pituitary. Thus, we can now conclude that the effect of E2f4-loss on the development of tumors in this organ is not due to a change in the propensity of Rb LOH.
Differential effects of E2f4-deficiency in other Rb-dependent neuroendocrine tumors
E2f4-loss has been shown to completely suppress c-cell carcinomas of the thyroid in Rb +/-;E2f4 -/germline mice (Lee et al., 2002). These tumors arise at high frequency in Rb +/germline mice, but are less commonly detected in Rb -/chimeras where only a minority of the animals shows these neoplastic lesions (Table 1, Parisi et al., 2007). When we screened Rb -/-;E2f4 -/chimeric animals, we found that they also bore thyroid tumors at a similar frequency to Rb -/chimeras ( Figure 4a, Table 1) leading us to conclude that E2f4 is dispensable for the development of c-cell tumors in Rb -/chimeric thyroids. This is in contrast to the suppression of thyroid tumors that occurs in the germline Rb +/-;E2f4 -/animals suggesting that this latter phenotype is likely due to a change in the rate of Rb LOH, or to non-cell autonomous effects caused by the lack of E2f4 in other tissues.
Rb-/- chimeras develop additional neuroendocrine lesions in the adrenal glands and in the lung (Parisi et al., 2007; Williams et al., 1994). We therefore also examined these organs in the Rb-/-;E2f4-/- chimeric mice at the histological level. We observed that these animals presented uni- or multifocal foci of hyperplasia in the adrenal medulla (pheochromocytoma) at a similar frequency to Rb-/- chimeras (Figure 4b, Table 1), indicating that E2f4 is also dispensable for this tumor type. Notably, we observed a very different effect in the lung. We have previously observed neuroendocrine cell hyperplasia in the lungs of 10 out of 13 Rb-/- chimeras (Parisi et al., 2007). We found that Rb-/-;E2f4-/- chimeric animals also presented hyperplastic areas in the lung, but the severity of these lesions, as well as the percentage of animals bearing them, was much lower than that seen in the Rb-/- chimeras (Figure 4c, Table 1). Thus, our findings suggest that E2f4-loss has differential effects in a variety of Rb-deficient neuroendocrine tumors: it is dispensable for adrenal gland and thyroid tumors, while playing an essential role in pituitary tumors and in Rb-/- pulmonary neuroendocrine cells.
Novel tumor phenotypes in Rb -/-;E2f4 -/chimeras
The increased lifespan of the Rb-/-;E2f4-/- animals due to suppression of pituitary tumors, together with the ability to generate adult tissues simultaneously lacking Rb and E2f4, gave us the unique opportunity to identify new tumor types that might be modulated by these proteins. We performed necropsy and whole histology on all the Rb-/-;E2f4-/- chimeric mice and found a distinct tumor spectrum in the older animals (Table 1). Six mice, all but one older than ten months, showed neoplastic lesions within the ganglia in different areas of the body such as the neck, kidney region and testes. These lesions appeared to be composed of groups of very dark cells embedded within neurons, whose morphology was reminiscent of neuroendocrine cells (Figure 5a). To confirm that these cells were indeed of neuroendocrine origin, we performed immunohistochemistry for the markers CGRP and synaptophysin and found that all of them were positive (Figure 5a). As neuroendocrine cells are not normally detected within ganglia, we looked for any sign of metastases originating from other neuroendocrine tumors present in these animals, but were unable to find any. This suggests that these cells were indeed arising from within the ganglia. We did not detect any TUNEL-positive cells in the neoplastic lesions of the Rb-/-;E2f4-/- chimeras or the normal ganglia of either Rb-/- or Rb-/-;E2f4-/- chimeras (data not shown). Thus, we have no evidence that E2f4-deficiency acts to suppress the induction of apoptosis by Rb-loss, but these negative data do not rule this out. We also did not detect any Ki67-positive cells in either the neoplastic or normal Rb-/-;E2f4-/- chimeric ganglia (data not shown). Thus, it remains an open question whether the ganglionic tumors result from a direct effect (e.g. via differences in proliferative and/or apoptotic capacity of Rb-/-;E2f4-/- versus Rb mutant tissues) or an indirect effect (the extended lifespan of the Rb mutant chimeras) of E2f4-loss.
In addition to these ganglionic neoplasms, we also identified tumors in the urogenital epithelium of the Rb-/-;E2f4-/- chimeric mice (Table 1). We noticed that three animals, 16 months of age or older, had polyps within the uroepithelium that lines the ureter. This papillary form of urothelial cancer was hyperplastic and non-invasive in two animals, but highly dysplastic and invasive in a third animal (Figure 5b). The severity of these lesions prevented the normal passage of urine from the kidney to the bladder, causing nephrosis (not shown). Urothelial cancer does not spontaneously arise in mice, and there is no evidence of this tumor in Rb-/- chimeras (Parisi et al., 2007; Williams et al., 1994). Histological comparison of tumorigenic versus normal urothelium in the Rb-/-;E2f4-/- chimeras revealed no apoptosis, but high levels of proliferation specifically within circumscribed regions of the tumors and not in the normal urothelium (data not shown). Thus, as above, these studies provide no evidence for, but also do not rule out, the possibility that E2f4-loss plays a direct role in promoting formation of these tumors.
Finally, we wished to determine if urothelium and ganglionic neoplasms were cell or noncell autonomous. As we could not take advantage of β-galactosidase staining due to problems of high background (kidney) or poor penetrability of the dye (ganglion) in the tissues surrounding the tumors, we assessed the contribution of mutant cells to the novel tumors by screening the tumor areas for the presence of the E2f4 mutant allele. The mutant band was clearly present in both neoplasms and its intensity, relative to the wildtype band, correlated well with the heterogeneity of the tumors (Figure 5c). Specifically, the mutant band was relatively weak in the urothelium neoplasm which had a high non-epithelial component and much stronger in the ganglionic neoplasm which was more homogenous. Although correlative, these results strongly suggest that these novel tumor types are derived from the Rb -/-;E2f4 -/cells. Similar to the ganglionic neurendocrine neoplasms, urothelial cancer was found at a time when the vast majority of Rb -/chimeric animals are dead. Thus, it is an open question whether these tumor types arise as a consequence of loss of both Rb and E2f4 or whether they can arise with long latency through inactivation of Rb in an E2f4independent manner. Nevertheless, the finding that Rb -/-;E2f4 -/chimeric mice develop a papillary form of urothelial cancer represents the first direct evidence of a role for Rb in this cancer type and is concordant with the high incidence of Rb mutation in urothelial carcinoma of the bladder.
DISCUSSION
pRB's prominent role in tumorigenesis is due, at least in part, to its ability to bind the E2Fs and control cell division. Nevertheless, the extent to which the interaction of pRB with the E2Fs influences its role in cancer remains unclear due in part to the embryonic lethality of Rb -/and Rb -/-;E2f -/mutant animals. In this study we generated Rb -/-;E2f4 -/chimeric mice to analyze the interplay between pRB and E2F4. These animals are viable at levels of chimerism up to 60% thus allowing us to study the contribution of E2f4 to Rb-dependent tumorigenesis. Rb -/chimeric mice die of pituitary tumors between 2 and 9 months of age (Parisi et al., 2007;Williams et al., 1994). We found that loss of E2f4 increased the lifespan of these animals by delaying the development and decreasing the incidence of pituitary tumors. Therefore, our studies show conclusively that E2f4-loss inhibits pituitary tumors independently of the rate of Rb LOH. Furthermore, the ability to generate mice with a wide range of chimerism allows us to extrapolate that the suppressive effect of E2f4 on this tumor type is cell autonomous as we could detect pituitary tumors in Rb null mice with chimerism as low at 5% (Parisi et al., 2007). The function of E2F4 in cell division is thus still unclear; in vitro E2F4 acts as a proliferation inhibitor, while in vivo E2F4 functions to promote pituitary tumorigenesis. We believe that two models may reconcile these results. The first, called the "pocket protein reshuffling" model, stems from our finding that p107 and p130, normally unable to bind to the activator E2Fs, associate with these proteins in Rb;E2f4 deficient tissues (Lee et al., 2002). These novel repressor complexes could prevent E2F1, E2F2 and E2F3 from activating target genes responsible for cell proliferation thus inhibiting tumor formation in Rb +/-;E2f4 -/mice. The second model proposes that E2F4 functions as a transcriptional activator in certain contexts, including Rb mutant tumors, and this explains its oncogenic activity. E2F4 does in fact have a strong activation domain and it is certainly capable of activating E2F responsive promoters when localized to the nucleus by overexpression or addition of nuclear localization signals (DeGregori et al., 1997;Verona et al., 1997). Notably, we have recently found that E2F4 associates with the promoter of E2Fresponsive genes in tumor cells, concordant with their transcriptional activation, raising support for this hypothesis (Iaquinta and J. A. L., unpublished). Based on our in vivo observations (Lee et al., 2002;Iaquinta and J. A. L., unpublished), we suspect that both the pocket protein reshuffling and transcriptional activation mechanisms contribute to E2F4's oncogenic properties. Our chimeric system does not address these two possibilities, and additional experiments will be required to tease out the underlying mechanisms.
Our data also show that Rb-/- and Rb-/-;E2f4-/- chimeric mice developed thyroid tumors at a similar frequency. This is in contrast to what was previously found in Rb+/-;E2f4-/- germline animals, where the thyroid tumors are fully suppressed (Lee et al., 2002). It is hard to envisage how either the reshuffling or transcriptional activation mechanisms could account for tumor suppression in Rb+/-;E2f4-/- germline mice, but not in Rb-/-;E2f4-/- chimeras. Thus, two possibilities remain to explain the specific suppression of tumors in the Rb+/-;E2f4-/- thyroids. One is that E2f4-deficiency may decrease the rate of Rb LOH in the c-cells of Rb+/- mice. The other is that cell non-autonomous effects may operate in the Rb+/-;E2f4-/- germline animals. These current and past findings highlight the utility of employing chimeric mouse models to identify direct requirements for particular E2Fs in tumorigenesis. Interpretation of data generated in traditional germline mice is confounded by possible non-cell-autonomous effects as well as a requirement for Rb LOH. In fact, while previous studies of Rb;E2f germline mutant mice assigned opposing roles to E2f3 and E2f4 in the development of thyroid tumors (Lee et al., 2002; Ziebold et al., 2003), our studies in chimeric mice suggest that these E2fs are both fully dispensable for this tumor type (Parisi et al., 2007).
Irrespective of the mechanisms that operate in the pituitary and the thyroid, Rb-/-;E2f4-/- chimeric mice allowed the examination of the effects that E2f4-loss has on other tumor types present in Rb-/- chimeras. These animals develop neuroendocrine lesions in the adrenal gland and in the lung (Parisi et al., 2007; Williams et al., 1994), and we find that E2f4 differentially affects these tumor types: it is dispensable for the adrenal gland tumors, while it is required for the hyperplasia of the neuroendocrine cells. The latter lesions are believed to be the precursors of small cell lung carcinomas, a highly metastatic and therefore lethal tumor type (Meuwissen et al., 2003). Therefore, our findings indicate that E2f4 may play a role in the hyperplastic stage of small cell lung carcinomas. Interestingly, this property is not unique to E2F4, as Rb-/-;E2f3-/- chimeric mice also have no detectable signs of neuroendocrine lung hyperplasia (Parisi et al., 2007). Thus, this suggests that E2F3 and E2F4 both play a prominent role in the hyperplastic stage of small cell lung carcinoma. As with the pituitary, E2F4's oncogenic function could result from the "pocket protein reshuffling" and/or the "activating" mechanisms described above.
The extended longevity of Rb-/-;E2f4-/- chimeric mice due to suppression of pituitary tumors allowed us to identify two novel tumor types in these mice, ganglionic neuroendocrine neoplasms and urothelial cancer. The presence of ectopic neuroendocrine cells within ganglia confirms once more that Rb plays a key role in the neuroendocrine lineage, which includes all the tumor types listed above, and has very interesting implications for the biology of these cells. During development, neuroendocrine and neuronal cells share a common progenitor, the sympathoadrenal cell, which originates from the neural crest (Huber, 2006). This progenitor cell migrates from the dorsal aorta to its final destination, where it adopts either a neuronal or a neuroendocrine fate. The mature neuroendocrine cells maintain the expression of markers specific to the immature common progenitors, while the mature neurons downregulate these markers. Thus, we speculate that the neuroendocrine-like neoplastic cells that we found within the ganglia of Rb-/-;E2f4-/- chimeras are not derived from neuroendocrine cells, but represent immature, neural crest-like stem cells that either failed to differentiate into neurons or represent a physiological population of neuronal stem cells that proliferate inappropriately. Unfortunately, to our knowledge there are no markers that discriminate between a blastic-stage neuronal cell and a differentiated neuroendocrine cell and that would therefore allow us to show that the lesions found in the ganglia of Rb-/-;E2f4-/- chimeras are neuronal stem cell-like cells. Nevertheless, as the determinant factors that govern the fate choice are still not well understood, our findings suggest that Rb and E2f4 play a role in the development of the neuronal-neuroendocrine cells and may potentially add another tile to the complex mosaic that specifies the sympathoadrenal lineage.
The other novel tumor in Rb-/-;E2f4-/- chimeric animals is urothelial transitional cell carcinoma. Urothelial carcinomas are tumors of the urogenital tract and represent the fifth most common cancer type in humans (Dinney et al., 2004). They manifest as two variants, papillary or non-papillary. The papillary form accounts for about 80% of urothelial cancers, is generally low grade, and has been documented to give rise to invasive transitional cell carcinoma in 15% of cases of human bladder cancer. The non-papillary transitional cell carcinoma is high grade, accounts for the remaining 20% of cases of human urothelial carcinomas, and originates de novo or from preexisting carcinoma in situ. The observation that this non-papillary form is present in patients with no previous history of papillary carcinomas, and the fact that the non-papillary and papillary forms have distinct genetic signatures, has led to the hypothesis that these two types of transitional cell carcinoma are unrelated (Wu, 2005; Schulz, 2006). There is a strong correlation between Rb mutations and urothelial cancer. In humans, Rb has been found inactivated in about 60% of cases of bladder cancer, although there is still a debate as to whether Rb mutation is associated with the low-grade, non-papillary form (Dinney et al., 2004; Wu, 2005). Despite the strong association between Rb mutations and urothelial cancer, only one animal model has investigated the role of pRB in this tumor type. In this mutant mouse, Rb and p53 are both inactivated in urothelial cells by transgenic expression of the SV40 Large T antigen (Zhang et al., 1999), and this causes invasive carcinoma in situ. In Rb-/-;E2f4-/- chimeras we found both invasive and non-invasive papillary carcinomas, suggesting that Rb mutation may facilitate the switch from low-grade to dysplastic, high-grade tumors. Thus, together the Large T antigen and Rb;E2f4 mouse models recapitulate the non-papillary and papillary variants of human transitional cell carcinomas and implicate a key role for Rb in the development of these tumors.
To conclude, we have learnt from the analysis of Rb-/-;E2f4-/- chimeric animals that E2f4 plays a role in the pituitary tumors as well as in the lung neuroendocrine hyperplasia caused by loss of Rb, but not in thyroid and adrenal gland tumors. In addition, our chimeric mouse model gives us the opportunity to identify previously unknown functions of pRB and the E2Fs. We have shown here that E2f4 and Rb have a role in derivatives of neural crest cells and in the urogenital epithelium. We are now pursuing the significance of these findings by generating E2f4 conditional mice. We will use these mice in combination with Rb conditional mice and urothelial- or neural crest-specific Cre-recombinase-expressing mice to dissect the relative roles of Rb and E2f4 in these tissue types and to generate novel cancer mouse models.
Histology, immunohistochemistry and X-gal staining
Tissues were processed for X-gal staining or directly fixed in phosphate-buffered formalin as previously described (Parisi et al., 2007). To visualize the pituitary glands, adult heads were fixed for a week in Bouin's fixative. 5 μm sections of paraffin-embedded tissues were stained with H&E or processed for immunohistochemistry. For antibody staining, sections were processed as previously described (Parisi et al., 2007) with 1:500 mouse anti-synaptophysin (Chemicon, clone SY38) and 1:5000 rabbit anti-neuroendocrine cell marker calcitonin gene-related peptide (anti-CGRP, Sigma). Antigen-antibody complexes were detected with diaminobenzidine (DAB).

[Figure legend, survival/chimerism chart: Each chimeric animal is represented by a bar reporting the percent chimerism and is arranged according to the time of death (from youngest to oldest, grey line, second Y axis). Animals presenting pituitary or thyroid tumors are indicated with (P) and (T), respectively. Note that the pituitary tumors manifest only after 10 months of age.]

[Figure legend, urothelial lesions: Polyps originating from the urogenital epithelium protrude into the lumen of the ureter to give rise to non-invasive (top panel) and invasive papillary carcinomas (middle panel), where the epithelium has penetrated the muscle wall (asterisks); the urothelium of Rb-/- chimeras is completely normal (bottom panel). (c) The novel tumors derive from Rb-/-;E2f4-/- cells. Top: PCR analysis of genomic DNA isolated from the ganglionic and urothelium tumors (shown below), assayed for the presence of the mutant and wildtype alleles of E2f4. Note that the ganglionic lesion is very homogeneous compared to the urothelium polyps, where the epithelium represents only a small fraction of the tumor.]

[Table caption: Tumorigenic phenotypes in Rb-/- versus Rb-/-;E2f4-/- chimeric mice.]
Involvement of Iron-Containing Proteins in Genome Integrity in Arabidopsis Thaliana
The Arabidopsis genome encodes numerous iron-containing proteins such as iron-sulfur (Fe-S) cluster proteins and hemoproteins. These proteins generally utilize iron as a cofactor, and they perform critical roles in photosynthesis, genome stability, electron transfer, and oxidation-reduction reactions. Plants have evolved sophisticated mechanisms to maintain iron homeostasis for the assembly of functional iron-containing proteins, thereby ensuring genome stability, cell development, and plant growth. Over the past few years, our understanding of iron-containing proteins and their functions involved in genome stability has expanded enormously. In this review, I provide the current perspectives on iron homeostasis in Arabidopsis, followed by a summary of iron-containing protein functions involved in genome stability maintenance and a discussion of their possible molecular mechanisms.
This process is highly dependent on the expression of FRO2 (ferric reduction oxidase 2, At1g01580) and IRT1 (iron-regulated transporter 1, At4g19690). [23][24][25] FRO2 functions as a root ferric-chelate reductase, and IRT1 is required for the transport of ferrous iron across the plasma membrane [ Figure 1]. [24,26] Both of them specifically function in root iron uptake under iron-deficient conditions. [24,26] The expression of FRO2 and IRT1 is rapidly induced in iron-deficient conditions, whereas it is dramatically diminished in iron-sufficient conditions via posttranslational mechanisms. [23,27] The Arabidopsis genome also encodes an IRT1 paralog, namely, IRT2 (At4g19680). Studies have indicated that IRT2 cooperates with IRT1 and FRO2 to maintain iron homeostasis in root epidermal cells. [25] Interestingly, the expression of many iron-regulated genes is induced by FIT (Fer-like Deficiency Induced Transcription Factor, At2g28160), which is a transcription factor that regulates iron uptake responses. [28] The fit mutant accumulates less iron in root and shoot tissues in comparison with wild-type plants. [28] Further studies indicate that FIT can regulate ferric chelate reductase activity and iron transport into plant roots. [28] This process is achieved by regulating FRO2 expression and by controlling protein accumulation of the IRT1. [28] Moreover, the Ib subgroup of the basic helix-loop-helix (bHLH) gene family (AtbHLH38, AtbHLH39, AtbHLH100, and AtbHLH101) in Arabidopsis also has been reported to participate in the regulation of iron uptake. AtbHLH38 and AtbHLH39 can interact with FIT, directly activating the expression of FRO2 and IRT1. [29] Recently, AtbHLH100 and AtbHLH101 have also been identified to interact with FIT. [30] Overexpression of FIT and AtbHLH101 in plants results in the constitutive expression of FRO2 and IRT1 in the roots, and accumulates more iron in the shoots. [30] However, the expression of FRO2 and IRT1 in roots and the iron content in shoots dramatically decrease in the triple knockout mutant of AtbHLH39, AtbHLH100, and AtbHLH101. [30] Moreover, the mediator subunit 16 (MED16, At4g04920) is reported to function in the regulation of iron uptake gene expression in Arabidopsis. [31] Lesion of MED16 significantly reduces the expression of FRO2 and IRT1 in Arabidopsis roots. [31] MED16 can interact with FIT and improves the binding of the FIT/Ib bHLH complex to FRO2 and IRT1 promoters under iron-deficient conditions. [31] Shk1 binding protein 1 (SKB1/AtPRMT5, At4g31120) is also reported to be involved in iron homeostasis in Arabidopsis. [32] The chromatin immunoprecipitation (ChIP) and genome-wide ChIP-seq results show that SKB1 associates with the chromatin of the Ib subgroup bHLH genes. [32] In addition, SKB1 can catalyze the symmetric dimethylation of histone H4R3 (H4R3sme2), and the level of H4R3sme2 positively corresponds to the iron status of plants. [32] These results indicate that SKB1-mediated H4R3sme2 regulates iron homeostasis in Arabidopsis. [32] Iron deficiency may increase the disassociation of SKB1 from bHLH genes in chromatin and decrease the level of H4R3sme2, thereby elevating the expression of bHLH genes and enhancing iron uptake. [32] In addition, some studies suggest that there exist additional root-derived signals to control iron uptake. 
Split-root experiments indicate that Fe(III) reductase activity is higher in the roots supplied with iron, [23,34] implying that the systemic signal generated by iron-deficient shoots is further modulated by a local, root-derived signal. [33]
Intracellular iron transport
Iron needs to be compartmentalized into different cellular organelles, such as chloroplasts, vacuoles, and mitochondria. [35] However, to date, much less is known about intracellular iron transport.
[Figure 1. Iron uptake, intracellular transport, and storage in Arabidopsis. Iron at the root surface is reduced by the surface reductase FRO2, and iron uptake is carried out at the plasma membrane by the iron transporter IRT1. When iron enters the cytosol, it can be delivered into the chloroplast, vacuole, and mitochondria by iron transporters. In the chloroplast, FRO7 is the main iron transporter, and PIC1 can mediate iron transport across the inner envelope; the imported iron is mainly stored in ferritin proteins (AtFER1-AtFER4). In the vacuole, iron accumulation and storage are controlled by the VIT1, AtNRAMP3, and AtNRAMP4 proteins. In the mitochondria, FRO3 and FRO8 are proposed to be required for iron transport, and the ABC transporter protein AtABCB25 functions in iron efflux to the cytosol.]
Ferrous iron in roots is suggested to be transported into the xylem via Ferroportin-1 (FPN1, At2g38460), where it forms a complex with citrate. [36] The complex is loaded into the phloem and further forms a complex with nicotianamine (NA), which is synthesized from S-adenosyl methionine by nicotianamine synthase (NAS). [36] Recent studies indicate that the transcription factors MYB10 (At3g12820) and MYB72 (At1g56160) function in the iron-deficiency regulatory cascade to drive NAS4 gene expression. [36] In addition, some proteins are probably involved in intracellular iron homeostasis according to their predicted organelle localization. For instance, FRO7 (At5g49740), a paralog of FRO2, might be involved in transporting iron into the chloroplast [Figure 1]. [37] The expression of three members of the natural resistance-associated macrophage protein (NRAMP) metal transporter family, including AtNRAMP1 (At1g80830), AtNRAMP3 (At2g23150), and AtNRAMP4 (At5g67330), is regulated by the iron status. [38,39] Functional analyses have demonstrated that all three proteins are capable of transporting Fe and Mn. [40,41] AtNRAMP3 and AtNRAMP4 localize to the vacuole, where they can export metal ions into the cytosol. [38,42] Interestingly, AtNRAMP3 and AtNRAMP4 are suggested to function in the long-distance transport of metals, owing to their expression in the stele of roots and in the vasculature of leaves and stems. [38,42] The vacuolar iron transporter 1 (VIT1, At2g01770) mediates iron accumulation in vacuoles and controls the localization of Fe in seeds. [43,44] A plastid protein, PIC1 (permease in chloroplast 1, At2g15290), can regulate iron transport across the inner envelope of chloroplasts. [35]
Iron storage
In Arabidopsis, iron is mainly stored in the chloroplast, vacuole, and mitochondria, where it can be utilized by numerous iron-containing proteins. [18] The majority of iron is stored in chloroplast ferritin proteins. [44] Arabidopsis contains four ferritin members, namely, AtFer1-AtFer4 (At5g01600, At3g11050, At3g56090, and At2g40300, respectively) [Figure 1]. [45] All four ferritin proteins are predicted to target to the chloroplast, as they contain transit peptides required for delivering iron to the plastid. [46] Importantly, the expression of AtFer1, AtFer3, and AtFer4 is induced when plants are treated with excess iron. [45][46][47] In addition, iron can also be stored in the vacuole, where it is mainly controlled by the AtNRAMP3, AtNRAMP4, and VIT1 iron transporters [Figure 1]. [38,42,43] Moreover, AtABCB25 (also named AtSTA1, At5g58270), an ATP-binding cassette (ABC) transporter, has been shown to be important in iron efflux from the mitochondria to the cytosol [Figure 1]. [48]
Iron translocation
Most iron acquired by the roots is ultimately delivered to above-ground portions of the plant via the xylem. [49] During this process, iron needs to cross several different membrane barriers. [23] Plants utilize a sophisticated system to deliver iron from root epidermal cells to leaf cells. [50] A variety of transporters have been identified as being involved in the iron translocation process in Arabidopsis, including FRD3 (ferric reductase defective 3, At3g08040) and the yellow stripe-like (YSL) family of proteins (YSL1-8). FRD3 facilitates citrate efflux into the xylem, and mutation of FRD3 results in Fe localizing to the central vascular cylinder of the roots and failing to be transported to the aerial parts. [51] The YSL family of proteins is suggested to transport metals complexed with NA. [52] In Arabidopsis, YSL1 (At4g24120) and YSL3 (At5g53550) are important for iron transport and are also responsible for loading Fe, Cu, and Zn from leaves into seeds. [53] YSL4 (At5g41000) and YSL6 (At3g27020) are suggested to control iron release from the chloroplast, [54] and they are also involved in iron transport and metal mobilization into seeds. [54] The transport functions of YSL1 and YSL2 (At5g24380) partially overlap with that of YSL3 in vegetative structures, but they are distinct in the reproductive organs. [52] The functions of YSL7 and YSL8 in iron translocation have not been characterized.
The effects of iron homeostasis on protein functions
Maintaining iron homeostasis is critical for assembling prosthetic groups such as heme and Fe-S clusters. [55] Much less is known about the effects of iron homeostasis on protein stability in Arabidopsis, whereas studies in mammals have elucidated that iron homeostasis is governed in part through the regulated proteolysis of ferroportin (an iron exporter), hypoxia-inducible factor (HIF), iron-regulatory proteins (IRPs), and an F-box/leucine-rich repeat protein (FBXL5). [56] Clearly, the critical roles played by iron in both enzymatic catalysis and protein structure contribute to the stability of iron-regulatory proteins and iron-containing proteins. [56]
Fe-S Cluster Proteins and Genome Stability
Numerous Fe-S cluster proteins are reported to function in genome stability maintenance in Arabidopsis. These proteins mainly include DNA polymerases, DNA helicases, DNA glycosylases, the large subunit of DNA primase, and components of the CIA system.
DNA polymerases and DNA primases
Eukaryotes mainly utilize three conserved polymerases (Polα, Polδ, and Polε) to synthesize DNA. [57] Polα is associated with DNA primases to synthesize short RNA primers, which are subsequently utilized by Polε and Polδ to synthesize the leading and lagging strands, respectively. [2,58,59] All three DNA polymerases and the large subunit of primase contain a Fe-S cluster. [2] Arabidopsis has a DNA polymerase α (POLA2, At1g67630), three members of polymerase δ (POLD2-At2g42120, POLD3-At1g78650, and POLD4-At1g09815), two catalytic subunits of Polε (POL2A-At1g08260 and POL2B-At2g27120), and a large subunit of DNA primase (At1g67320). Although the functions of Arabidopsis DNA polymerases are less well characterized, their amino acid sequences share a high degree of similarity with their corresponding homologues in yeast and mammals, implicating conserved functions in DNA replication. [60] In rice, the expression of the DNA polymerase δ catalytic subunit (POLD1, Os11g0186400) can be detected in mature leaves and is induced by ultraviolet (UV) irradiation, [61] supporting the conclusion that POLD is required for DNA replication in plants. [62] In Arabidopsis, both POL2A and POL2B contain the conserved domains that are present in other eukaryotic homologues. [63] Mutations of POL2A result in DNA replication defects, [64] while pol2b mutants have no visible phenotypic effects. [63] These results suggest that POL2B is not essential for DNA replication or that POL2B functions redundantly with other DNA polymerases.
DNA helicases
DNA helicases are highly conserved enzymes that function to unwind DNA in order to provide single-stranded DNA for replication, RNA transcription, DNA repair, and recombination. [65,66] Defects of helicases are generally associated with genomic instability in yeast and mammals. [66] The Arabidopsis genome encodes numerous proteins exhibiting helicase activities, and these proteins include AtFANCM (At1g35530), AtINO80 (At5g57300), AtMER3 (or AtRCK, At3g27730), and AtRAD54 (according to TAIR). Among the proteins listed above, only RAD3, the homologues of the FANCJ helicase, and the homologue of the ChlR1 helicase contain a Fe-S cluster. The Arabidopsis RAD3 (also known as UVH6) is a homologue of the human XPD and yeast RAD3. Both XPD and yeast RAD3 are essential helicases, with roles in the repair of damaged DNA through the nucleotide excision repair (NER) mechanism. [67,68] The Fe-S cluster in yeast RAD3 is essential for the coupling of adenosine triphosphate (ATP) hydrolysis to DNA translocation and for targeting the helicase to the DNA junction. [69] The uvh6-1 mutant is hypersensitive to both UV-C and UV-B irradiation, implicating its important role in DNA repair. [70] The human FANCJ helicase has been identified to catalyze the unwinding of duplex DNA and G-quadruplex structures in an ATP hydrolysis-dependent manner to ensure genomic stability. [71] The Arabidopsis genome encodes three FANCJ-like proteins, but their functions are still not characterized. ChlR1 belongs to the FANCJ-like DNA helicase family and contains a DEAH/DEAD box, [71] which is required for unwinding nucleic acids and is involved in various aspects of RNA metabolism. [71] The deletion of ChlR1 in mammalian cells can lead to DNA damage accumulation, suggesting its important role in efficient DNA repair during DNA replication. [72] However, the function of the Arabidopsis ChlR1-like helicase remains unclear.
DNA glycosylases
DNA glycosylases can recognize and excise mismatched or altered bases through the base excision repair (BER) mechanism. [73] Generally, all glycosylases function in a similar way; they cleave the N-glycosylic bond between the target base and the sugar-phosphate backbone of the DNA, [74] thereby releasing a free base and leaving an apurinic/apyrimidinic (AP) site. [75] Arabidopsis has 26 DNA glycosylases that extensively function in the DNA repair process; [76,77] of them, only the DME (DNA glycosylase DEMETER, At5g04560), DML1 (DEMETER-LIKE 1, also known as AtROS1, At2g36490), DML2 (At3g10010), and DML3 (At4g34060) proteins contain a Fe-S cluster. These iron-containing DNA glycosylases extensively function in the regulation of DNA methylation. [78][79][80] The DME gene is mainly expressed in the central cell before fertilization and is required for the DNA demethylation of the maternal allele in the endosperm that establishes gene imprinting. [81,82] DME has also been suggested to function in a protein complex, providing promoter specificity for base excision and DNA nicking of the maternal genome. [83] Interestingly, AtROS1 possesses both DNA glycosylase and lyase activities against methylated DNA, but not unmethylated DNA. [78] The atros1 knockout mutant is hypersensitive to genotoxic stresses such as methyl methanesulfonate (MMS) and hydrogen peroxide (H2O2), suggesting that ROS1 is involved in the DNA repair process by repressing homology-dependent transcriptional gene silencing via the demethylation of target promoter DNA. [78] Moreover, the Arabidopsis genome encodes two additional paralogs of AtROS1, namely, DML2 and DML3. [80] Hypermethylation of cytosine residues has been observed in dml2 and dml3 mutants relative to wild-type plants. [80] These results indicate the important roles of DML2 and DML3 in removing DNA methylation from improperly methylated cytosines. [80]
Components of CIA machinery
The assembly of Fe-S clusters is carried out via the ISC (iron-sulfur cluster, in mitochondria), CIA (cytosolic iron-sulfur protein assembly), and SUF (sulfur mobilization, in plastid) machineries in plants. [84] Eukaryotes share conserved mechanisms for the synthesis of Fe-S clusters and their insertion into apoproteins. [85] The biogenesis of Fe-S proteins involves two major steps: 1. a Fe-S cluster is assembled on a scaffold complex, and 2. the Fe-S cluster is released from the scaffold and transferred to specific apoproteins. [1,85] Some of the CIA components have been found to function in DNA replication and repair processes in different organisms. However, little is known about the roles of ISC and SUF members in genome stability.
The CIA machinery members in Arabidopsis include NAR1 (At4g16440), CIA1 (At2g26060), NBP35 (At5g50960), AE7 (At1g68310), MET18 (also named MMS19, At5g48120), DRE2 (At5g18400), TAH18 (also named ATR3, At3g02280), ATM3 (At5g58270), and ERV1 (At1g49880). [86] The yeast CIA pathway proteins NAR1, CIA1, CIA2, and MET18, and their corresponding human homologues IOP1, CIAO1, MIP18, and MMS19, are proposed to transfer Fe-S clusters to target proteins. [1,16,17,87] More importantly, both the human and yeast MMS19 proteins interact with numerous Fe-S proteins, including Polδ, DNA primase, Dna2, XPD, RTEL1, and FANCJ. [16] Similarly, the Arabidopsis AE7-CIA1-NAR1-MET18 complex has been indicated to facilitate the transfer of Fe-S clusters to target apoproteins such as ACO (aconitase) and ROS1. In addition, the Arabidopsis CIA pathway has also been implicated in the maintenance of nuclear genome integrity through Fe-S proteins involved in DNA metabolism. [88] Mutations of CIA members, including AE7 and ATM3, lead to the accumulation of DNA damage and an increase in homologous recombination (HR) rates. [88] Therefore, it is highly possible that the genomic integrity defects in ae7 and atm3 mutants result from the inefficient assembly of Fe-S cluster proteins involved in DNA replication and repair. [88] Taken together, the CIA pathway plays a critical role in maintaining genome integrity, owing to the importance of Fe-S proteins in DNA replication and repair.
Hemoproteins and DNA Stability
Heme is a versatile cofactor that participates in a wide range of chemical reactions, such as electron transfer, oxygen activation, and gene regulation. [97] Heme-containing proteins, also termed hemoproteins, are essential for the physiology and viability of living organisms, and contribute to diverse functions including respiration, oxygen carriage, cellular signaling, and apoptosis. [98] Mutations of hemoproteins are frequently associated with the induction of reactive oxygen species (ROS), [83,99] which can damage lipids, proteins, and DNA. The Arabidopsis hemoproteins involved in genome stability mainly include CYP reductase (CPR), Cb5, and CYTc.
CPRs
The CYP superfamily proteins utilize heme as a cofactor and function in the oxidation/reduction of endogenous or exogenous compounds. [100]
CYPs require a CPR to transfer electrons from reduced nicotinamide adenine dinucleotide phosphate (NADPH) to their substrates. [101] The NADPH-dependent CPR generally localizes in the endoplasmic reticulum (ER) membrane and serves as the electron donor of CYPs. [102,103] CPR contains multiple domains, including three cofactor-binding domains (flavin mononucleotide [FMN], flavin adenine dinucleotide [FAD], and NADPH) and a linker domain situated between the FMN and FAD/NADPH domains. [104] CPR is the most important redox partner of CYPs. [101] Human CPR and NADPH-dependent CPR can act as sources of endogenous oxidative DNA damage and are required for genome stability. [105] The Arabidopsis genome encodes 246 P450 genes, which can be grouped into 72 families. [106] Their biological functions range from the synthesis of macromolecules, hormones, and signaling molecules to the metabolism of xenobiotics. [106] However, the functions of numerous P450 genes remain unclear. The Arabidopsis genome encodes two authentic and one putative CPR genes, namely, ATR1 (At4g24520), ATR2 (At4g30210), and ATR3 (At3g02280), respectively. ATR1 is required for electron transfer from NADPH to CYPs in microsomes. [107] In particular, it can provide electrons to heme oxygenase and Cb5. [107] ATR2 contributes to the first oxidative step of the general phenylpropanoid pathway. [108] ATR3 serves as a diflavin reductase and is essential for Arabidopsis embryo development. [109] Interestingly, ATR3 exhibits CYTc reductase activity, but not P450 reductase activity. [109] Yeast two-hybrid screening has identified that ATR3 can interact with two Fe-S proteins, the human CIAPIN1 and the yeast Dre2 protein. [109] These results suggest that ATR3 may function in genome stability either in ways similar to CYTc or by interacting with Fe-S proteins.
Cb5
Cb5s are ubiquitous hemoproteins and typically associate with the ER and outer mitochondrial membranes. [110] In higher eukaryotes, Cb5 functions as an electron donor for the desaturation of acyl-CoA fatty acids (FAs), sphingolipid long-chain base hydroxylation and desaturation, FA hydroxylation, sterol desaturation, and cytochrome P450-mediated reactions. [110] In plants, Cb5 plays a primary role in providing electrons for the synthesis of the polyunsaturated FAs linoleic acid (18:2) and α-linolenic acid (18:3), [111] which contributes to the integrity of cellular membranes. [112] Interestingly, previous studies have demonstrated that some Cb5-like proteins play critical roles in genome stability maintenance. For instance, the Cb5-like protein Dap1 is required for resistance to the DNA-damaging agent MMS in yeast. [113] Moreover, the Irc21 protein also contains a Cb5-like domain and has been shown to function in checkpoint control, DNA repair, and genome stability. [114] The Arabidopsis genome encodes five Cb5 members (CB5A-E, referring to At1g26340, At2g32720, At2g46650, At5g48810, and At5g53560, respectively) and one Cb5-like protein (At1g60660). [115] Four of them (CB5A, -B, -D, and -E) are predicted to localize in the ER membrane. [115,116] However, little is known about the specific functions of individual Cb5 proteins. These proteins share a high degree of amino acid sequence similarity, suggesting that they may function redundantly. Two Cb5 reductases, namely, CBR1 (At5g17770) and CBR2 (At5g20080), are present in the ER membrane and the inner mitochondrial membrane, respectively. [117] CBR1 is essential for a functional male gametophyte. [117] Although both CBR1 and CBR2 are also predicted to function as FAD/NAD-binding oxidoreductases, further studies are still needed to uncover their biological roles.
CYTc
CYTc is a small hydrophilic hemoprotein that is extensively present in the mitochondrial inner membrane. [118] CYTc participates in many biological processes, such as respiration, apoptosis, cell death, oxidative stress, DNA damage, energetic metabolism, protein folding, and translational regulation. [119,120] In Arabidopsis, CYTc is encoded by CYTC-1 (At1g22840) and CYTC-2 (At4g10040). Knocking out both genes causes lethality in plants, whereas the individual mutants have no visible phenotype. [118] These results suggest that CYTC-1 and CYTC-2 function redundantly. Moreover, plants with decreased CYTc exhibit developmental delay, altered stress-responsive gene expression, and reduced ROS levels, [118] implying that the functions of CYTc in Arabidopsis are conserved with those in other eukaryotes.
Other Iron-Containing Proteins and DNA Stability
Besides Fe-S proteins and hemoproteins, Arabidopsis also contains many other iron-containing proteins involved in genome stability, such as dioxygenases, superoxide dismutases (SODs), and ribonucleotide reductases (RNRs).
Dioxygenases
Some dioxygenases incorporate iron into their active sites for the assembly of holoproteins. [121,122] These iron-containing dioxygenases have been identified to be involved in the DNA repair process. For instance, the Fe(II)/2-oxoglutarate (α-ketoglutarate)-dependent dioxygenase AlkB is extensively present in Escherichia coli and in mammals. [123] The AlkB protein performs a conserved function in the oxidative removal of alkylation damage from DNA. [124] Failure to remove damaged DNA generally leads to cytotoxicity or mutagenesis during DNA replication. [124] The Arabidopsis genome encodes several AlkB homologues, including ALKBH2 (At2g22260), At3g14160, and At1g11780. Of these, the ALKBH2 protein can protect Arabidopsis against DNA methylation damage. [124] The alkbh2 knockout mutants are hypersensitive to MMS. [123] However, the functions of the other two genes have not been characterized. Additionally, another nine genes are also annotated as iron-containing dioxygenases: At2g26400, At3g01420, At3g49620, At4g03050, At4g14710, At4g14716, At4g15093, At4g29890, and At5g43850. Of them, At3g01420, also termed DOX1, encodes an α-dioxygenase that protects plants against oxidative stress and cell death. [125] Recently, it has been shown to be an important component that positively regulates programmed cell death (PCD). [126] The At4g14716 gene encodes acireductone dioxygenase 1 (ARD1), which functions as an effector of the β subunit of the heterotrimeric G protein and may participate in the synthesis of ethylene. [127] The At4g15093 gene, or AtLigB, is annotated as an extradiol ring-cleavage enzyme and contributes to arabidopyrone (AP) biosynthesis. [128] The functions of the other six putative iron-containing dioxygenases have not been characterized.
SODs
Numerous environmental stresses can result in the abnormal induction of superoxide within plant tissues. [129] Plants commonly utilize SODs to detoxify the excess ROS. [129] The Arabidopsis genome encodes one manganese SOD (MSD1, At3g10920), three copper/zinc SODs (CSD1-At1g08830, CSD2-At2g28190, and CSD3-At5g18100), and three iron SODs (FSD1-At4g25100, FSD2-At5g51100, and FSD3-At5g23310). The iron-dependent FSD1 is abundantly present in the plasma membrane, the mitochondrial membrane, and different fractions of the chloroplast, such as the stroma, envelope, and peripheral thylakoids. [130] The stromal localization implies an important role in photosynthesis, owing to the ability to scavenge ROS in the water-water cycle. [131] Notably, the expression of FSD1 is dramatically induced under low Cu levels. [132] FSD2 and FSD3 play essential roles in early chloroplast development. [130] The fsd2-1 fsd3-1 double mutant plants exhibit a severe albino phenotype and are hypersensitive to oxidative stress. [130] In vivo and in vitro studies have confirmed that the FSD2 and FSD3 proteins can form a heteromeric protein complex in the chloroplast, suggesting that the FSD2-FSD3 complex functions in the scavenging of ROS, thereby protecting early chloroplast development. [130]
RNRs
RNRs are critical enzymes that catalyze a rate-limiting step in the synthesis of deoxyribonucleotides (dNTPs), thereby generating the precursors needed for DNA replication and repair. [133,134] Eukaryotic RNRs comprise large subunits (α or R1) and small subunits (β or R2), of which only the R2 subunits utilize iron to sustain a diferric tyrosyl radical (Fe(III)2-Y•) cofactor. [2] Previous studies in yeast and mammals have revealed that defects of RNRs result in imbalanced dNTP pools in vivo, which generally leads to increased DNA mutations, DNA breaks, cell death, and p53-dependent apoptosis. [2,135] The Arabidopsis genome encodes three R2-like proteins, including TSO2 (At3g27060), RNR2A (At3g23580), and RNR2B (At5g40942). Interestingly, the individual genes contribute to unique aspects of the cellular response to DNA damage. For instance, the expression of RNR2A and RNR2B is specifically activated by the replication-blocking agent hydroxyurea (HU) but not by the DNA double-strand break inducer bleomycin (BLM). [136] On the other hand, the transcription of TSO2 is only induced in response to BLM. [136] The tso2 single and tso2 rnr2a double mutants show extreme sensitivity to UV-C light. [134] Importantly, these mutants exhibit increased DNA damage and PCD relative to the wild-type. [134] These results further indicate that the R2 subunits of RNRs function critically in genome stability maintenance.
Summary
The Arabidopsis iron-containing proteins involved in genome stability share a high degree of functional conservation with those of mammals and yeast. Interestingly, some proteins, such as DNA polymerases δ and ε, FANCJ helicase homologues, iron SODs, and the small subunits of RNR, are present in several copies in Arabidopsis relative to mammals and yeast. As a result, some of them may function redundantly. Therefore, their single mutants show no visible phenotype under DNA damage stress, making it difficult to characterize their functions. Moreover, plants have also evolved numerous CYP proteins that function in the oxidation/reduction of endogenous or exogenous compounds. However, the functions of individual genes in this superfamily are poorly understood, especially in genome stability maintenance.
Although studies have extensively demonstrated that iron-containing proteins are required for genome stability, limited information is available regarding their functional mechanisms. Several iron-containing proteins such as DNA helicase RAD3 and small subunits of RNR directly participate in DNA replication and repair. Some iron-containing DNA glycosylases are involved in genome stability due to their participation in DNA methylation. Some CIA components of Fe-S proteins are involved in genome stability possibly because they can transfer Fe-S clusters to target proteins such as Polδ, DNA primase, Dna2, XPD, RTEL1, and FANCJ. Importantly, numerous hemoproteins in Arabidopsis function in electron transport; as such, defects in them generally cause the induction of ROS, which can damage DNA, proteins, and lipids. Taken together, although considerable progress has been made in the past years, further studies are still needed to uncover the functions of these iron-containing proteins, especially in genome stability maintenance.
|
2016-08-09T08:50:54.084Z
|
2015-01-01T00:00:00.000
|
{
"year": 2015,
"sha1": "aecde21ce49757d8f18cf3427a9ab8634b77647f",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc4911903?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "aecde21ce49757d8f18cf3427a9ab8634b77647f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
202464452
|
pes2o/s2orc
|
v3-fos-license
|
Multi-storey wooden house building
The study solves the following tasks: analysis of construction technologies that prevent environmental pollution, the development of wooden housing construction objects, the wooden housing construction strategy and the term of its implementation, and a comparison of Russian and international experience in the development of wooden housing construction. According to the results of the study, the technology of construction of multi-storey wooden houses can be a very promising component for improving the quality of life of the population; moreover, it is a priority from an economic point of view and effective for developing facilities that help meet environmental safety requirements. The most effective type of construction in this industry is construction using LVL timber and CLT panels. Minimum construction time, environmental friendliness and the availability of this technology will accordingly improve the living conditions of the population and thus the overall environmental condition.
Introduction
A set of environmental problems is characteristic of any territory in which enterprises of various industrial types and a large population are concentrated. These problems are manifested to the greatest extent in cities, and their composition and severity depend on many factors. These include: the scale of the city – its area, composition and population; features of the buildings – their number of storeys, density, and location in relation to the elements of the terrain; the reliability and condition of the engineering networks and communications that provide the city with electricity and water, as well as communication systems providing the necessary information; and the nature and extent of emissions of substances that pollute the atmosphere, water sources and soils of the urban area. Professional construction organizations often see the tightening of environmental requirements as something unnecessary that increases construction costs. Yet at the current stage of development of the construction industry, when society has moved away from natural building materials towards artificially created ones, such measures may well be relevant and justified. Enterprises that produce building materials emit an enormous amount of waste, dust, and combustion products into the environment. Construction is served by cars and vehicles whose environmental safety often does not meet requirements. Moreover, construction work itself is often carried out without observing any measures to preserve the landscape and the environment in general. That is why environmental requirements in construction have recently become stricter, and more and more attention is being paid to eco-development – the construction of eco-houses and so-called active houses, usually denoted by the term "green building". One variant of this construction is multi-storey wooden housing, with the aim of improving the ecological state of the environment. Wood is an eco-friendly material whose use in construction not only provides buildings with the necessary tightness and makes them visually attractive, but also improves indoor air quality while allowing greater flexibility in planning solutions.
Thus, the relevance of this scientific work lies in establishing a promising and environmentally justified activity, using the construction of multi-storey wooden houses as an example, in order to improve the ecological state of the environment.
This article addresses the key study tasks: analysis of construction technologies that prevent environmental pollution; development of wooden housing construction objects; the effect of the wooden housing construction strategy and the timing of its implementation; and comparison of Russian and international experience in the development of wooden housing facilities.
Research methods
To determine the need for developing wooden multi-storey housing construction as a means of improving the ecological state, a study should be conducted from which conclusions can be drawn about the rationality and development prospects of construction in this direction. As the study method, this article uses analysis of data on the total number of wooden houses and their share in different countries, as well as the advantages of wood compared with other materials. According to the published OECD Green Growth Indicators 2017 report, progress in improving the efficiency of natural resource use and reducing the burden on the environment in the world is too slow. Moreover, the report notes that the level of air pollution remains dangerously high; in particular, less than one third of OECD countries comply with WHO standards. The level of emissions of harmful substances in Russia is significantly higher than in many large countries, which leads to increased risks for the population. For this reason, the Russian Federation is taken as the example for the study. It is also necessary to analyze the possibility of using wood as a building material without damaging the ecological environment, and to assess countries' endowment with this type of building material or their ability to purchase wood for use in construction.
Research results
The research results show that, all over the world, work is underway to develop systems and technologies based on an environmentally friendly and reliable material – wood. Russia serves as the example in this article and is compared with other countries. According to the Association of Wooden House Building (WHA) of the Russian Federation, the world experience of low-rise wooden construction is shown in the diagrams. After analyzing the data presented in the diagrams, we can conclude that in Russia the share of products used for low-rise wooden housing construction lags behind the indicators of other countries. In the Russian Federation, traditionally only low-rise individual housing has been developed, yet even in comparison with other countries the share of low-rise wooden house construction is two or more times lower, although in terms of timber stocks Russia is one of the leaders. Meanwhile, development in the field of wooden construction around the world does not stop, and multi-storey wooden house building is actively developing alongside low-rise building.
It might seem that there is nothing special about wooden house building, but wood offers a number of advantages as a building material: environmental friendliness – development of settlements in landscape zones and high utilization of wood in production; a renewable resource; much faster construction of wooden houses; lightweight construction – about six times lighter than concrete; seismic resistance and static strength in all directions; non-shrinkage, non-freezing, and impermeability of walls and joints; and the ability to resist high temperatures for a long time without deterioration of strength characteristics. The construction of high-rise residential buildings and business centers made of wood is a promising direction that is gaining popularity in the countries of America and Europe. The technologies of wooden housing construction are constantly being improved, and it is becoming obvious that building from wood is profitable, reliable, fast and safe. Methods for the construction of wooden houses continue to develop: new opportunities are emerging and records are being set for the construction of multi-storey buildings using glued LVL timber and CLT wood panels. Structures made of LVL timber are used as vertical and horizontal elements of the load-bearing frames of buildings. LVL consists of multiple layers of coniferous wood veneer; the fiber layers are arranged in parallel, and the thickness of each layer is about 3 mm. The span of unsupported LVL beams can reach 36 m, and that of trusses 42 m and more. LVL timber does not shrink and remains geometrically stable throughout its service life. Unlike conventional wood, LVL is not affected by microorganisms, is not deformed by dampness, and is resistant to chemical attack. Modern manufacturing technologies provide a high degree of prefabrication of building structures, and the assembly of a wooden frame at the construction site is performed by analogy with the installation of prefabricated steel buildings. CLT plates and panels are a composite material produced by cross-gluing layers of wood. This technology is used for floor slabs, enclosing structures and building coverings. CLT plate construction is distinguished by lightness, fire safety, high strength, and good heat and sound insulation. Currently, CLT plates are available in thicknesses from 60 to 400 mm. The installation speed of these wooden structures is significantly higher than with other construction technologies, owing to the ability to transport finished blocks and modules to the work site. Modern equipment is capable of producing CLT panels with a length of up to 24 m and a width of up to 3.5 m, which makes it possible to install the building envelope of an entire floor in one go. The 18-storey, 53-meter-high Brock Commons dormitory in Canada, built of wooden structures, can be considered as an example. The Brock Commons building was erected in 40 days at an assembly speed of one floor in two working days, which is about four months faster than with traditional construction methods. At the same time, the construction process was so environmentally efficient that it can be equated in scale to removing about 500 cars from city roads for a whole year. About 22,000 tons of timber was used to build Brock Commons.
Wood, however, is a renewable resource: the forests of Canada and America regrow the amount of wood used for such a building in just six minutes. The Russian Federation holds 23% of the world's forest resources. This provides an opportunity for development in the field of wood construction, but not everything is as simple as it seems. To enable the development of multi-storey wooden construction, it is necessary to solve a number of key problems that hamper the wider use of wooden housing construction in the Russian Federation. Among them are outdated standards for wooden structures; the lack of building codes for multi-storey buildings (over 3 floors); and the imperfection of the regulatory framework in terms of fire safety.
According to the Environmental Performance Index for 2018, Russia is in 52nd place, with an environmental index of 63.79, in the ranking of the most eco-friendly countries in the world. For comparison, a graph with the environmental indices of other countries is presented:
Conclusion
Ecology is an integral part of our life. Modern human society needs to recognize its part in the ecosystem. The main task at this stage is to prevent the aggravation of the general environmental situation and to treat environmental problems and requirements with the utmost responsibility. Based on the foregoing, systems that improve the overall ecological state are currently being actively developed.
In this article, studies were conducted with the main goal of improving the ecological state of the environment. The object of research was the technology of multi-storey wooden housing construction. Building wooden houses will help to improve the ecological state of the environment as a whole and to provide comfortable living conditions for the population. This type of construction will ensure the rational use of wood as a building material. The prospects of this direction lie in the ecological use of wood in construction.
Based on the results of the study, wooden multi-storey housing construction is gradually developing and progressing in countries with a high environmental index. Although the Russian Federation has extensive forestry resources, the construction technology of multi-storey wooden housing is not receiving proper attention today. According to the rating of the most environmentally friendly countries, Russia is not included in the top 50. The technology of multi-storey wooden construction would help the Russian Federation to increase its environmental index, thereby improving the living conditions of the population. The technology described in this article has already attracted interest in countries such as the USA, Canada, Sweden, Finland and others, which determines the future prospects for improving the ecological state of the environment.
|
2019-09-11T14:13:01.524Z
|
2019-01-01T00:00:00.000
|
{
"year": 2019,
"sha1": "9a8a3cf5db0a50289facb55d92e534bf66b24650",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2019/36/e3sconf_spbwosce2019_01035.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "53e3013f7602e7d486036b526c4b38fcfd03e527",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
258914764
|
pes2o/s2orc
|
v3-fos-license
|
Multiple Inositol Polyphosphate Phosphatase Compartmentalization Separates Inositol Phosphate Metabolism from Inositol Lipid Signaling
Multiple inositol polyphosphate phosphatase (MINPP1) is an enigmatic enzyme that is responsible for the metabolism of inositol hexakisphosphate (InsP6) and inositol 1,3,4,5,6-pentakisphosphate (Ins(1,3,4,5,6)P5) in mammalian cells, despite being restricted to the confines of the ER. The reason for this compartmentalization is unclear. In our previous studies in the insulin-secreting HIT cell line, we expressed MINPP1 in the cytosol to artificially reduce the concentration of these higher inositol phosphates. Undocumented at the time, we noted that cytosolic MINPP1 expression reduced cell growth. We were struck by the similarities in substrate preference between a number of different enzymes that are able to metabolize both inositol phosphates and lipids, notably IPMK and PTEN. MINPP1 was first characterized as a phosphatase that could remove the 3-phosphate from inositol 1,3,4,5-tetrakisphosphate (Ins(1,3,4,5)P4). This molecule shares strong structural homology with the major product of the growth-promoting phosphatidylinositol 3-kinase (PI3K), phosphatidylinositol 3,4,5-trisphosphate (PtdIns(3,4,5)P3), and PTEN can degrade both this lipid and Ins(1,3,4,5)P4. Because of this similar substrate preference, we postulated that the cytosolic version of MINPP1 (cyt-MINPP1) may attack not only inositol polyphosphates but also PtdIns(3,4,5)P3, a key signal in mitogenesis. Our experiments show that expression of cyt-MINPP1 in HIT cells lowers the concentration of PtdIns(3,4,5)P3. We conclude this reflects a direct effect of MINPP1 upon the lipid because cyt-MINPP1 actively dephosphorylates synthetic di(C4:0)PtdIns(3,4,5)P3 in vitro. These data illustrate the importance of MINPP1's confinement to the ER, whereby important aspects of inositol phosphate metabolism and inositol lipid signaling can be separately regulated, and give one important clarification for MINPP1's ER seclusion.
Full-length MINPP1 has an N-terminal ER targeting sequence [21], and in rat liver cells and mouse embryonic fibroblasts, most endogenous MINPP1 is restricted to the ER lumen [27,28]. In 3T3 cells, ectopically-expressed MINPP1 is also concentrated in the ER [28]. Nevertheless, overexpression of full-length MINPP1 does reduce cellular levels of Ins(1,3,4,5,6)P5 and InsP6 [28], and very recent work suggests these are also targets of the endogenously expressed enzyme [29]. Therefore, it is generally anticipated that inositol phosphate substrates must enter the ER or, under appropriate circumstances, that MINPP1 may exit the ER. There is currently no explanation for this apparently circuitous method of regulating inositol polyphosphates normally resident in the cytosolic compartment. It is likely that the degradation of higher inositol polyphosphates to lower forms has some still-unknown purpose in the ER compartment, and/or the restriction of MINPP1 to the ER prevents unwanted metabolic events in the cytosol. In regard to the latter suggestion, the literature provides a number of examples in which enzymes that metabolize inositol lipids can also metabolize inositol phosphates, including PTEN. Interestingly, although PTEN and MINPP1 have no sequence homology, even at their active sites, both proteins can dephosphorylate Ins(1,3,4,5)P4 and Ins(1,3,4,5,6)P5, at least in vitro [19,21]. As a model for studying the consequences for cells in which MINPP1 is no longer constrained to the ER, a truncated phosphatase lacking its ER-targeting sequence has been expressed in 3T3 cells [28]. In these experiments, MINPP1 inhibited cell growth [28]. This growth inhibition was previously ascribed to the degradation of the higher inositol polyphosphates, which are its principal substrates [20,21]. In this study, with the insulin-secreting HIT cells as a model, we report that cyt-MINPP1 inhibits cell growth because the enzyme hydrolyzes PtdIns(3,4,5)P3. Thus, the ER confinement of MINPP1 allows the separation of inositol polyphosphate metabolism from the regulation of an important inositol lipid signal and explains, in part, the necessity of MINPP1's ER location.
Preparation of GFP-Linked MINPP1 Constructs
Plasmids: The construction of pCI.cyt-MINPP1 and pCI.GFP-cyt-MINPP1 has been described elsewhere [23]. Plasmid pCI.hrMINPP1 encodes a chimera made up of the first 39 amino acids of human MINPP1 (this includes the signal peptide, that is, amino acids 1-30) fused to the rat MINPP1 sequence, which is identical to that of cyt-MINPP1 without the FLAG tag. For the generation of pCI.hrMINPP1, we initially introduced a SalI site into the cDNA of human MINPP1: the codons for amino acids V39 and A40 were exchanged from GTG GCC to GTC GAC. We then introduced a SalI site into the spacer linking the DNA sequence encoding the N-terminal FLAG-tag and that encoding rat MINPP1. This was achieved by exchanging the nucleotides GGC GCC for GTC GAC in pCI.cyt-MINPP1. Lastly, we replaced the DNA sequence encoding the FLAG-tag in pCI.cyt-MINPP1 with the DNA sequence that encodes the first 39 amino acids of human MINPP1, combining both sequences in-frame using the created SalI sites. To generate pCI.GFP-hrMINPP1, we subcloned the DNA sequence encoding amino acids 1-39 of human MINPP1 in-frame in front of the GFP-rMINPP1 cDNA via SalI sites. The QuikChange Kit (Clontech/Takara, Göteborg, Sweden) was used to perform all nucleotide exchanges. The appropriate oligonucleotides were obtained from Proligo France SAS (Evry Courcouronnes, France). DNA sequence analysis was used to verify all vector constructions.
Cell Culture, Transfection and Labelling Protocols
HIT M2.2.2 cells were normally maintained in RPMI 1640 medium with 10% fetal bovine serum, glutamine and penicillin/streptomycin (Thermo Fisher Scientific/Life Technologies, Stockholm, Sweden) as described previously [30]. A modified RPMI 1640 medium, RPMI 1640-M, was used for experiments. This revised medium included additional MgSO4 (to give a final concentration of 0.8 mM), 50 µM inositol and 5.5 mM glucose. The 10% serum in this medium had been dialyzed (1000 MW cut-off, Spectra/Por, Thermo Fisher Scientific, Stockholm, Sweden) to remove the high inositol content present in fetal serum.
HIT M2.2.2 cells were plated at about 10% of their confluent density into 35 or 92 mm dishes and grown in the RPMI 1640-M medium above for 24 h. The cells were then changed into the transfection medium, DMEM, with the same inositol concentration as the experimental RPMI 1640-M. The cyt-MINPP1 expression plasmid (pCI.FLAG-MINPP1) or vector alone was transfected into the cells using a Ca2+ phosphate precipitation technique [30]. The next day the cells were washed and reintroduced into the RPMI 1640-M medium and maintained for a further 24 h. To establish cell number and approximate volume, cells were trypsinized from the dish and counted, and the volume was estimated by measuring the diameter of the spherical cells with a calibrated graticule viewed under a microscope. An alternative method for determining cell number was used in the experiments involving green fluorescent protein (GFP)-tagged MINPP1 constructs: fluorescent cells were identified using a laser scanning TCS-SP2 confocal microscope (Leica Microsystems CMS GmbH, Mannheim, Germany) and counted.
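As a minimal illustration of the volume estimate described above (not the authors' own analysis script), trypsinized cells can be treated as spheres and their volume computed from the graticule-measured diameter; the diameters below are hypothetical values used only for the example:

```python
import math

def sphere_volume_um3(diameter_um: float) -> float:
    # Volume of a sphere, V = (4/3) * pi * r^3, with the radius in micrometres
    radius = diameter_um / 2.0
    return (4.0 / 3.0) * math.pi * radius ** 3

# Hypothetical diameters (in micrometres) read from the calibrated graticule
diameters_um = [14.2, 15.0, 13.8]
volumes = [sphere_volume_um3(d) for d in diameters_um]
mean_volume = sum(volumes) / len(volumes)
print(f"Mean cell volume: {mean_volume:.0f} cubic micrometres")
```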
For measurements of PtdIns(3,4,5)P3, cells were grown in 92 mm dishes with 20 µCi/mL [3H]myo-inositol for the 46 h period following transfection, which was sufficient to label the lipids to an apparent isotopic equilibrium [31].
Confocal Microscopy and Colocalization
To confirm the GFP-tagged MINPP1 construct's localization, confocal microscopy was carried out. Colocalization with the fluorescent ER marker Brefeldin A, BODIPY 558/568 conjugate (Molecular Probes) was determined. Laser scanning confocal microscopy was performed using the Leica TCS SP2 microscope above. This was equipped with a Leica HCX Pl Apo x63/1.20/0.17 UV objective lens as previously described [32]. The following settings were used: for GFP and Brefeldin A, BODIPY 558/568 conjugate fluorescence, excitation wavelength 488 nm (Ar laser) and 546 nm (HeNe laser), a 488/543 double dichroic mirror and detection at 505-525 nm for GFP and 565-595 nm for Brefeldin A, BODIPY 558/568 conjugate.
Inositol Lipid Analysis
For the determination of PtdIns(3,4,5)P3, an established quantitative lipid extraction protocol was used [31]. Only siliconized glass or plasticware and pipette tips were used during the extraction. The medium was removed from the cells, and 1 mL of ice-cold 1 M HCl was added. A 10 µL aliquot of a Sigma phosphoinositide mix (25 mg/mL) was immediately added as a lipid carrier, and the plate was left on ice for 20 min. Cells were then removed with a cell scraper, and the plates were washed with 2.73 mL of a solution comprising 0.6 mL of 1 M HCl, 5 mM tetrabutylammonium sulphate, and 2.13 mL of methanol. The mixture was then split into two phases by the addition of chloroform (4.27 mL). The mixture was vortexed, and the phases were separated by centrifugation. The lower phase contained the inositol lipids and was added to tubes containing 1.43 mL of synthetic upper phase. The phases were mixed and centrifuged, and the lower phase was removed into clean tubes.
Both the initial upper phase and the synthetic upper phase (left after removing the first synthetic lower-phase wash) were sequentially re-extracted with 2.23 mL of synthetic lower phase, mixed and separated centrifugally. This last lower phase was combined with the originally washed lower phase, the tube was filled with N2, and the lipid extract was stored at −20 °C. In order to determine PtdIns(3,4,5)P3, the lipid extract was dried under N2 and deacylated with methylamine exactly as described by Anderson et al., 1999 [31], and the products were stored at −20 °C until resolved by HPLC.
Unless otherwise stated, phosphatase assays were carried out exactly as described by Caffrey et al. [24]. Both wild-type cyt-MINPP1 and a catalytically compromised version (H370A) were used. Assays were performed in 70 µL of a buffer consisting of 50 mM HEPES pH 7.2, 0.1 mg/mL BSA and 50 µM di(C4:0)-PtdIns(3,4,5)P3 (Echelon Research Labs, Salt Lake City, UT, USA). Reaction rates were indistinguishable at 25 and 100 µM, indicating that assays were carried out under Vmax conditions; the assay is not sensitive enough to determine Km. Assays were performed at 37 °C for 0, 15, 30 and 45 min and contained 0.027 µg of MINPP1. They were terminated by placing the plates on ice, and Pi release was measured using a colorimetric assay in a microplate reader [38].
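As a rough sketch (not the authors' actual analysis code), the specific activity in nmol Pi released per mg protein per min can be estimated from such a time course by a linear fit; the Pi values below are hypothetical, while the 0.027 µg of enzyme per assay is taken from the text:

```python
import numpy as np

time_min = np.array([0.0, 15.0, 30.0, 45.0])   # sampling times (min)
pi_nmol = np.array([0.00, 0.35, 0.68, 1.05])   # hypothetical Pi released per 70 µL assay (nmol)

# Initial rate taken as the slope of a linear fit of Pi release versus time
slope_nmol_per_min, intercept = np.polyfit(time_min, pi_nmol, 1)

enzyme_mg = 0.027e-3                            # 0.027 µg of MINPP1 expressed in mg
specific_activity = slope_nmol_per_min / enzyme_mg

print(f"Specific activity is about {specific_activity:.0f} nmol Pi/mg/min")
```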
Effect of cyt-MINPP1 on Cell Growth
Cyt-MINPP1 expression significantly reduced cell growth compared to mock-transfected controls (Figure 1A,B) [28]. The reduction in cell growth, whilst significant, is not dramatic. However, the data are from the entire population of cells, and our transfection efficiency is only about 30%; therefore, we are likely underestimating the impact of expressing MINPP1 in the cytosol. To overcome this issue, we made a cyt-MINPP1 GFP construct to further quantify the effect on cell growth. Figure S1 illustrates that cyt-MINPP1 was evenly spread throughout the cytosol without specific colocalization with the ER marker, Brefeldin A-BODIPY 558/568 conjugate, which had a punctate pattern. We then used this construct to look at the effect on cell growth. Figure 1C shows that when only fluorescent, construct-expressing cells are considered, the differences in growth between GFP and GFP-cyt-MINPP1 expressing cells are considerably more marked. The number of GFP-cyt-MINPP1 expressing cells was significantly reduced by 70% compared to the control GFP-transfected cells. These data confirm the inhibitory effect of cyt-MINPP1 on growth that we saw in batch-transfected cells.
Effect of cyt-MINPP1 Expression on PtdIns(3,4,5)P3
The physiological function of MINPP1 is generally recognized as an Ins(1,3,4,5,6)P5 and InsP6 phosphatase, although the enzyme does show some 3-phosphatase activity towards Ins(1,3,4,5)P4 [20]. These observations led us to consider the possibility that PtdIns(3,4,5)P3 might also be a substrate for cyt-MINPP1. Since PtdIns(3,4,5)P3 is the product of type-1 phosphatidylinositol 3-kinase (PI3K), a group of enzymes intimately involved in the stimulation of cell growth [11,13], the degradation of PtdIns(3,4,5)P3 by cyt-MINPP1 could offer an explanation for the reduction in cell growth observed in Figures 1A-C. To test the above idea, we first assessed whether cyt-MINPP1 could dephosphorylate PtdIns(3,4,5)P3 in vitro. Figure 2A shows that di(C4:0)PtdIns(3,4,5)P3 is a substrate for cyt-MINPP1, whereas a catalytically dead mutant of cyt-MINPP1 (H370A) was ineffective.
To test this in intact cells, we transfected cells with cyt-MINPP1, washed them and then incubated them with [3H]myo-inositol for 46 h. Subsequently, the inositol lipids were extracted and deacylated. PtdIns(3,4,5)P3 was identified in its deacylated form (GroPIns(3,4,5)P3) by the inclusion of an internal [32P]GroPIns(3,4,5)P3 standard in the HPLC run (Figure 2B). Levels of PtdIns(3,4,5)P3 were reduced by 26% by cyt-MINPP1 expression (Figure 2C). The data were corrected to account for the decreased cell number and the increased cell volume of the cyt-MINPP1-expressing cells. In the same experiments, there was no significant decrease in materials corresponding to PtdIns, PtdIns4P or PtdIns(4,5)P2, nor PtdIns(3,5)P2 and PtdIns(3,4)P2 (Figure S2A-C). There is a small reduction in PtdIns3P, which is likely to be due to the reduced level of PtdIns(3,4,5)P3 and its subsequent dephosphorylation route [39].
[Figure 2 legend, panels B-F: (B) In a deacylated lipid extract from cyt-MINPP1-transfected cells, two peaks are observed in the GroPIns(3,4,5)P3 region of the chromatogram: an unidentified peak X and a peak that co-eluted with an internal standard of [32P]GroPIns(3,4,5)P3. A similar extract from mock-transfected cells showed a similarly eluting GroPIns(3,4,5)P3, which also co-eluted with a [32P]-labeled standard. X is a side product of the deacylation of PtdIns(4,5)P2 [40]; from the original studies of Stephens et al. [39], it is likely to be Ins(2,4,5)P3. (C) Quantification of changes in peaks X and GroPIns(3,4,5)P3. Open bars, mock-transfected; filled bars, cyt-MINPP1-transfected. Data ± SEM for one experiment carried out in triplicate are presented after normalization for the differences in cell number and volume, * p < 0.05, Student's t-test; two other experiments, also in triplicate, gave similar results. (D) Deacylation products of PtdIns3P and PtdIns4P, respectively; note the large change of scale for GroPIns3P compared to GroPIns4P. (E) Deacylation products of PIP2 lipids; again, note the large change of scale for the 3-PI-based lipids. (F) Illustration of the compartmentalization of MINPP1 in the ER to separate inositol phosphate metabolism from inositol lipid signaling.]
Discussion
The most important new finding in this study is that the levels of PtdIns(3,4,5)P3 are reduced in cyt-MINPP1 expressing cells, an observation that is matched by the ability of recombinant cyt-MINPP1 to degrade di(C4:0)-PtdIns(3,4,5)P3 in vitro. The agreement between these two independent sets of observations argues against the in-cell changes in PtdIns(3,4,5)P3 levels arising from some possible, but in our view unlikely, unknown indirect effect of MINPP1. Our observations also indicate a greater substrate overlap than has previously been recognized, including some overlap in the enzymological preferences of PTEN and MINPP1. What is also interesting is that the genes coding for these enzymes are both found at the same chromosomal locus [41], which is a hot spot for tumor suppressor genes [42]. The lower levels of PtdIns(3,4,5)P3 in cyt-MINPP1 overexpressing cells provide a satisfactory explanation for their slower rate of growth. It also, for the first time, rationalizes why MINPP1 is sequestered in the ER, namely to separate important inositol phosphate metabolism from a key mitogenic signaling pathway, that is, the production of PtdIns(3,4,5)P3 by phosphatidylinositol 3-OH kinase. The importance of this separation is underscored by a consideration of the enzyme kinetics. The apparent Vmax of MINPP1 with di(C4:0)-PtdIns(3,4,5)P3 as a substrate is 850 nmol/mg protein/min, compared to a Vmax of 6.2 nmol/mg protein/min with InsP6 as a substrate [37]. To put this in a wider perspective, the Vmax of PTEN in vitro with di(C4:0)PtdIns(3,4,5)P3 as a substrate is 60 nmol/mg protein/min [17].
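For orientation only, simple ratios computed here from the Vmax values quoted above (these ratios are not figures reported in the cited studies):

$$\frac{V_{\max}^{\mathrm{MINPP1,\,PtdIns(3,4,5)P_3}}}{V_{\max}^{\mathrm{MINPP1,\,InsP_6}}} = \frac{850}{6.2} \approx 137, \qquad \frac{V_{\max}^{\mathrm{MINPP1,\,PtdIns(3,4,5)P_3}}}{V_{\max}^{\mathrm{PTEN,\,PtdIns(3,4,5)P_3}}} = \frac{850}{60} \approx 14$$

On these numbers, the lipid is hydrolyzed by MINPP1 far faster than its canonical inositol phosphate substrate, which underlines why keeping the enzyme away from cytosolic PtdIns(3,4,5)P3 matters.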
Earlier [28] and more recent works [26] clearly indicate that the control of cytosolic InsP6 and Ins(1,3,4,5,6)P5 by ER-localized MINPP1 is both possible and important. A remaining open question is whether the products of their dephosphorylation by MINPP1 have some significant role in the ER. The recent characterization of MINPP1 metabolites [29] suggests that Ins(1,2,3)P3, a putative iron shuttle [43,44] and iron-specific antioxidant [45], is a product of the endogenous, ER-confined MINPP1, as it is when this enzyme is artificially expressed in the cytosol [23]. These recent studies [29], therefore, confirm that MINPP1 is a key endogenous enzyme in the generation of this important inositol phosphate. Interestingly, Ins(1,2,3)P3 metabolism is tightly controlled during cell cycle progression, peaking when iron metabolism is upregulated [46,47]. Moreover, in the case of the MINPP1-associated human disease, there is a marked loss of intracellular iron [26], supporting the idea that a product of MINPP1 activity, such as Ins(1,2,3)P3, regulates iron metabolism.
Conclusions
Our experiments have shown that PtdIns(3,4,5)P 3 is a novel target for hydrolysis by MINPP1. Thus, the ER confinement of MINPP1 allows the strategic separation of inositol polyphosphate metabolism from PI3K signaling, permitting important inositol phosphate metabolism related to the regulation of higher inositol phosphates to proceed without interfering with a critical mitogenic pathway ( Figure 2F).
Accreditation of antimicrobial stewardship programmes: addressing a global need to tackle antimicrobial resistance
Abstract Accreditation of healthcare services provides quality assurance of hospital practice to support safe and effective care for patients. Accreditation programmes focused on antimicrobial stewardship (AMS) have been developed in high-income countries (HIC) and recently the WHO has developed a toolkit to support AMS practice in low and middle-income (LMIC) countries. BSAC has developed their Global Antimicrobial Stewardship Accreditation Scheme (GAMSAS) for hospitals based on globally applicable standards. GAMSAS aims to support healthcare organizations to build measurable AMS programmes and to support spread of best practice. GAMSAS involves a desktop assessment by BSAC experts followed by a hospital visit to gather further insight into how a hospital’s AMS programme operates. A final report of compliance with the GAMSAS standards and a recommendation about accreditation at one of three levels is formally approved at a GAMSAS panel meeting involving well-established global experts in AMS. The BSAC GAMSAS team reflect on progress during the first year and ambitions for future spread.
The need for global accreditation of antimicrobial stewardship
The recent GRAM Report 1 confirms that antimicrobial resistance (AMR) is one of the world's leading causes of death, with the largest burden of disease in low- and middle-income countries (LMIC). Overuse of antimicrobials is one of the biggest drivers of AMR, therefore robust systems for antimicrobial stewardship (AMS) are needed to ensure responsible use of antimicrobials. The accreditation of AMS programmes is a mechanism to spread good practice and develop consistent, measurable and sustainable AMS across all health economies. Healthcare accreditation has evolved over many years, initially in high-income countries (HIC) but also more recently in LMIC. 2 The global push for the provision of universal health coverage has been a major driver for accreditation, and it has been seen by governments as a useful tool for ensuring and improving the quality and cost-effectiveness of healthcare in both public and private hospitals. While whole-hospital accreditation schemes 3 may include an element that addresses AMS, accreditation assessments of many different departments and services risk losing sight of weaknesses in AMS practice, and AMS will not figure prominently in final reports or priorities for action. Whereas accreditation schemes for AMS are available in some HIC, [4][5][6][7] they are designed with a particular healthcare system in mind. The WHO has developed an excellent toolkit 8 to support LMIC in establishing AMS programmes and we believe this provides a good basis for developing a global accreditation scheme.
Developing the BSAC Global Antimicrobial Stewardship Accreditation Scheme (GAMSAS)
Vision of GAMSAS
GAMSAS 9 supports healthcare organizations to build, improve and sustain robust and measurable AMS programmes, including support to enable an organization's contribution to local, national and global data collection/surveillance, a key ambition of the WHO AMR action plan. GAMSAS is focused on developing and sustaining Centres of Excellence around the world to facilitate mentorship of other organizations and spread AMS across regions and countries. GAMSAS is based on achieving a set of pre-defined standards that BSAC has developed, drawing on internationally published standards and checklists to create a quality-improvement-focused, points-based scheme. These standards focus on AMS systems, processes and outputs but also include microbiology laboratory provision and Infection Prevention and Control practice to give a holistic view of AMS within a hospital. For open applications, organizations can self-fund or apply for funding from BSAC.
The GAMSAS screening process and site visit
Following a robust screening process using an expert review panel, successful applicants from both routes are required to complete a series of online self-assessment questionnaires, uploading all supporting evidence at the same time. The BSAC team has appointed external assessors, verified as experts in AMS. These experts first carry out a desktop assessment, using the submitted questionnaires and supporting evidence to prepare a draft report for each organization. The external assessors then attend an on-site visit to evaluate AMS systems and processes and to meet with leadership and clinical teams to review clinical engagement, assessing the organization's culture surrounding AMS. Following the on-site assessment, the report is finalized and discussed with the applicants for accuracy. An overview of the GAMSAS process and timeline is shown in Figure 1.
Awarding accreditation
A virtual GAMSAS panel meeting is held to discuss the report, the outcomes of the on-site visit and the level of accreditation to be awarded. The meeting involves the BSAC team, external assessors and panel members (who are well-established experts in AMS from various world regions and professions). Organizations that complete the process are awarded accreditation at one of three possible levels, and an organization may also be awarded Centre of Excellence status if it demonstrates active AMS mentorship within a network. An important part of GAMSAS is quality improvement, so all organizations receive feedback on potential areas for further work. The GAMSAS 3-year cycle for re-accreditation includes ongoing support from the BSAC team and virtual meetings to ensure maintenance of AMS and progress with identified areas for improvement and mentorship.
Reflections on the first year of GAMSAS
Establishing GAMSAS has been an iterative learning process, allowing us to develop robust documentation and processes. More than 40 professionals from across the world have been engaged as potential experts to support assessment, and so far we have used six external assessors and five panel members. In 2023, we accredited 10 organizations, three of them as Centres of Excellence. Although nine of these 10 organizations are in HIC (five in the UK, where the scheme was piloted), this is a global scheme, with an LMIC organization in Nigeria becoming the first African hospital to achieve accreditation. A further 11 hospitals globally are working towards achieving accreditation in 2024. Feedback from clients, assessors and panel members has been positive, with clients valuing opportunities for discussion of AMS with assessors and shared learning for all involved.
Plans for GAMSAS
In 2024, the plan is to increase the number of client organizations joining GAMSAS via open applications and collaborative grants. The model for training assessors overseas will support the sustainability of GAMSAS as the number and spread of clients grows. The processes will continue to evolve as experience is gained.
Viewpoint
Hospitals with an established AMS programme in any healthcare setting are encouraged to consider applying to join GAMSAS, and national healthcare policy leadership is encouraged to consider a national approach to accrediting AMS programmes. The GAMSAS programme is keen to engage with commercial and philanthropic organizations to collaborate in supporting this important initiative. Working with clinicians and collaborators, GAMSAS can build and spread measurable and sustainable AMS worldwide to tackle the global threat of AMR. To find out more, e-mail gamsas@bsac.org.uk or visit the GAMSAS website at www.ams-accredit.com.
Applying for GAMSAS accreditation
Hospitals with an established AMS programme and senior management support are eligible to apply. Applications can be made online on the GAMSAS website 9 by completing an online screening questionnaire either via:
• a direct/open call for applications announced by BSAC, or
• a collaborative grant offered by BSAC in partnership with commercial or philanthropic organizations.
Figure 1. Overview of the GAMSAS process with timeline.
Comprehensive genetic structure analysis of Han population from Dalian City revealed by 20 Y‐STRs
Dalian is a city formed in the 1880s in Liaoning Province, Northeastern China, with a current population of 6.69 million. Han is the largest ethnic group not only in Mainland China (92%) and Taiwan (97%) but also worldwide, contributing above 18% of the world's population.
| INTRODUCTION
Dalian city is renowned as the trading and financial center of Northeastern Asia, with the Shandong Peninsula lying southwest across the Bohai Strait and Korea lying across the Yellow Sea to the east. Han Chinese account for 84% of the Dalian population, followed by 13% Manchu, 1.6% Mongol, and 0.7% Hui. Han is the world's largest ethnic group, making up about 18% of the global population (2011). The name "Han" first came from the Han Dynasty, which is considered the golden era of Chinese civilization. During the Han Dynasty, China was able to extend its power and influence to other parts of Asia. The word "Han" is more a nationality than an ethnic group, since many local tribes mixed together and took on the name Han during that period (Gerent, 1996).
| Sample collection and ethical approval
The ethical review board of the China Medical University, Shenyang, Liaoning Province, People's Republic of China approved this study in accordance with the standards of the Declaration of Helsinki. Bloodstain samples were collected from 879 unrelated individuals who have been residents of Dalian city for at least three generations and are confirmed as Han Chinese (Figure 1). The aims and procedures of the study were explained to all volunteers before they signed the informed consent forms.
| DNA extraction, PCR amplification, and fragment length analysis
The phenol-chloroform procedure was used to extract DNA. Quantification of DNA was carried out using the Quantifiler™ Human DNA Quantification Kit (Applied Biosystems) according to the manufacturer's instructions. The 20 Y-STR loci contained in the Goldeneye™ 20Y amplification kit (Goldeneye Technology Ltd.) were co-amplified in a GeneAmp® PCR 9700 (Life Technologies) thermal cycler according to the manufacturer's recommendations. Allele separation and detection were performed with reference to the ORG 500 internal size standard (Goldeneye™) and the Goldeneye™ 20Y Allelic Ladder using an ABI 3500 genetic analyzer (Applied Biosystems), in accordance with the Goldeneye™ 20Y amplification kit (Goldeneye Technology Ltd.) recommendations. Allele calling was performed with GeneMapper 3.2 software.
| Confirmation of DYS448 deletions and sequencing
The PowerPlex® Y23 System Amplification Kit (Promega) and the Microreader™ 29Y Direct ID System were used to confirm the null alleles at DYS448 in 10 samples according to the manufacturers' protocols. These samples were later sequenced as described elsewhere.
| Statistical analysis
Haplotype and allelic frequencies were calculated by the direct counting method, and haplotype diversity (HD) was calculated according to

HD = [n / (n − 1)] × (1 − Σ pᵢ²),

where n is the male population size and pᵢ is the frequency of the ith haplotype. The discrimination capacity (DC) was calculated as the proportion of different haplotypes over the total number of samples. Rst and Fst pairwise genetic distances and associated probability values (p values, 10,000 permutations) were calculated using the analysis of molecular variance (AMOVA) on the YHRD website. Reduced-dimensionality spatial representation of the populations based on Rst values was performed using multidimensional scaling (MDS) with IBM SPSS Statistics for Windows, Version 23.0 (IBM Corp.). Fst values were utilized to construct a neighbor-joining tree using Mega 7.0 (Kumar, Stecher, & Tamura, 2016). Linear discriminant analysis (LDA) was performed using the R program (R Core Team, 2015). We also predicted Y-SNP haplogroups of the samples from Y-STR haplotypes using the Y-DNA Haplogroup Predictor NEVGEN (http://www.nevgen.org).
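As an illustrative sketch (not part of the original analysis pipeline), the following Python snippet computes HD and DC from a list of haplotype strings using the formula above; the helper name and the toy haplotypes are hypothetical.

from collections import Counter

def haplotype_stats(haplotypes):
    """Compute haplotype diversity (HD) and discrimination capacity (DC)
    from a list of haplotype strings (one entry per sampled male)."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    # Frequencies p_i of each distinct haplotype
    freqs = [c / n for c in counts.values()]
    # HD = n/(n-1) * (1 - sum(p_i^2))
    hd = (n / (n - 1)) * (1 - sum(p * p for p in freqs))
    # DC = number of distinct haplotypes / sample size
    dc = len(counts) / n
    return hd, dc

# Toy example with hypothetical 9-locus haplotypes
sample = ["13-14-29-24-11-13-13-11-14",
          "13-14-29-24-11-13-13-11-14",
          "14-13-30-23-10-14-13-11-15",
          "15-13-28-25-11-13-14-12-14"]
hd, dc = haplotype_stats(sample)
print(f"HD = {hd:.4f}, DC = {dc:.2%}")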
| RESULTS
The 20 Y-STRs were successfully genotyped using the Goldeneye® 20Y System kit in 879 unrelated Han males from Dalian city in Northeastern China. The haplotypes generated in the 879 Han individuals are listed in Table S1. We used these haplotypes to predict haplogroups using the online haplogroup predictor NevGen and listed the results in Table S1. The data for the 17 Y-filer loci can also be accessed via YHRD under accession number YA004552.
| Gene diversity and allele frequency of 20 Y-STRs in Han individuals
Allelic frequencies of the 20 Y-STRs, along with gene diversity (GD) values, in the 879 Han male individuals from Dalian city are listed in Table S2. A total of 214 alleles were observed. DYS385a/b was the most polymorphic locus, with 78 different combinations of alleles. The allelic frequency ranged from 0.0011 to 0.7008. GD ranged from 0.37971 (DYS391) to 0.81769 (DYS447) for the single-copy STRs, while the multi-copy STR (DYS385a/b) showed the highest diversity (0.96836). The overall GD value for all 20 STRs was 0.6692. A low diversity value for DYS391 was also observed in previously studied populations (Ou et al., 2015; Wu et al., 2011).
| Haplotype diversity
To assess the haplotype diversity of the 20 Y-STRs in the 879 Han individuals from Dalian, we evaluated our data at the minimal nine loci, the extended 11 loci, the PowerPlex Y 12 loci, the Y-filer 17 loci, and the Goldeneye 20Y loci, as shown in Table 1. At the minimal haplotype of 9 STRs, a total of 731 haplotypes were observed, with a DC of 83.16% and a GD value of 0.9994. At the SWGDAM 11 STRs, a total of 800 haplotypes were observed, and the DC value increased 1.03-fold (to 91.01%). At the PowerPlex Y 12 STRs, the number of unique haplotypes increased to 754, compared with 740 for the SWGDAM 11 STRs, while the other parameters remained static. With the addition of 5 STRs to PowerPlex Y 12, a total of 852 haplotypes were observed at the Y-filer 17 STRs, among which 94.31% were unique, with a DC of 96.92% and a GD value of 0.9999. Finally, when adding another three STRs to the Y-filer set, a total of 855 haplotypes were observed, but most of the parameters remained static compared to those of the Y-filer STRs.
| Genetic differences along the landscape of China among Han and other minorities from China
To analyze the relationship between other regional Han ethnic groups and the currently studied Han individuals from Dalian in Northeastern China, we collected data from 10 Han groups and 34 minorities from the east to the west and from the north to the south across China. We calculated Fst and Rst pairwise genetic distances, which are commonly used for estimating population differences and computing the genetic relationships among different populations. The Fst and Rst pairwise genetic distances are given in Table S3 and Table S4, respectively.
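As an illustrative aside, a two-dimensional MDS representation of the kind described in the Statistical analysis section can be reproduced from such a pairwise distance matrix; in the sketch below the matrix values and population labels are hypothetical placeholders, and scikit-learn's MDS is used instead of SPSS.

import numpy as np
from sklearn.manifold import MDS

# Hypothetical 4x4 symmetric pairwise Rst matrix (diagonal = 0)
labels = ["Dalian Han", "Manchu", "Uyghur", "Tibetan"]
rst = np.array([[0.000, 0.004, 0.080, 0.060],
                [0.004, 0.000, 0.075, 0.055],
                [0.080, 0.075, 0.000, 0.040],
                [0.060, 0.055, 0.040, 0.000]])

# Two-dimensional metric MDS on the precomputed dissimilarities
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(rst)

for name, (x, y) in zip(labels, coords):
    print(f"{name:12s}  x={x:+.3f}  y={y:+.3f}")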
We then inferred the evolutionary relationships between the Han group and the other reference populations from the neighbor-joining (NJ) phylogenetic tree constructed on the basis of the Fst values (Figures 2 and 3). Genetically closely related groups clustered tightly in a clade, while genetically distant groups separated far away. All 35 ethnic groups were divided into six clades according to the NJ tree. She, Han, Danmin, Lingao, Kejia, Yi, Yao, Korean, Bai, and Lisu have the shortest genetic distances and clustered together in a clade, while the Uyghur and Kazakh groups lay in a separate clade (Figure 4; Table S5). The MDS plot between Han and 34 other minorities of China showed that the Han population formed a close cluster with the Manchu, Liqian, She, and Xibe ethnic groups (Figure 5; Table S6). We also performed LDA in this study; the results indicated that the Han population is probably an admixture of other Chinese populations, with the exception of the Uyghur and Tibetan populations (Figure 6). Y-chromosomal Haplogroup O is mostly found in the Eastern Asian parts of the world (Figure 7). In the current study, broadly, we observed 11 haplogroups, among which Haplogroup O is the most frequent (65%). Haplogroup O1 accounts for 48% of the currently studied population, while O2 contributes 17%, followed by Haplogroup C2 at 14%. The frequency of Haplogroup C2 in Dalian Han is much higher than that found in other Han groups. This high level of the C2 haplogroup is justified because Manchuria was the homeland for Mongol-like horsemen-turned-merchants. The results suggest that Han Chinese have genetic admixture with local indigenous populations.

Table 1. Forensic statistical parameters in the Dalian Han population at five different levels.

Figure 3. Neighbor-joining phylogenetic tree for 35 Chinese ethnic groups based on a distance matrix of Fst.
| Characterization of DYS448 deletions
The position of DYS448 is adjacent to the azoospermia factor c (AZFc) region, which plays an important role in spermatogenesis and forms an "ampliconic" repeat that acts as a substrate for non-allelic homologous recombination (NAHR). The core repeat motif of the DYS448 locus is the hexanucleotide AGAGAT (Redd et al., 2002). DYS448 has two polymorphic domains separated by an invariant 42-bp region. After successful sequencing of the PCR products, we submitted the data to GenBank for accession numbers. We aligned our sequences against a reference sample showing allele 20 at DYS448, from the current study and also from GenBank (accession #MH200582). Out of the ten samples, eight showed primer-binding-site problems both upstream and downstream, while two showed upstream mutations only. The null-allele phenomenon at DYS448 in East Asia might be due to the kits themselves (Goldeneye, Microreader, and PowerPlex Y23). We note that the DYS448 primers designed by the above-mentioned companies are not publicly available, and the companies should validate them properly to yield good results in the future. The frequency of the null allele at DYS448 is higher in Asia (particularly East Asia) than in other regions. All ten observed individuals with DYS448 null alleles (Table 2) belong to Haplogroup C2, which is more frequent in regions once associated with the Mongolian empire than elsewhere.

Figure 6. LDA analysis between 10 major Chinese ethnic groups.

Figure 7. Map showing the distribution of predicted haplogroups in the Han population from Dalian, China.
| CONCLUSION
In conclusion, for the first time, we have genotyped 879 Dalian Han individuals at 20 Y-chromosomal STR loci using the Goldeneye® 20Y System kit. The genetic variation in the Dalian Han population and its comparison to other relevant groups were analyzed using different statistical tests. Studies based on uniparental markers have identified structural differences among the Han Chinese from Northern, Northwestern, Southern, and Eastern China (Bi et al., 2015), and the results of our study are in accordance with them. These Y-STRs, which are part of the Goldeneye® 20Y System kit, showed a strong power of discrimination in the Dalian Han population and could potentially be useful for regional or national reference database construction for forensic paternity testing, missing person investigations, and disaster victim identification. Our results also show null alleles at the DYS448 locus in 10 individuals, which accounts for 1.13% of all the alleles at this locus. This null-allele phenomenon has also been reported in other populations, but the frequency is higher in Asia than in other regions. Interestingly, all null-allele individuals observed in the current study belong to haplogroup C2 (previously known as haplogroup C3). We suggest that commercial companies should pay special attention when designing primers for DYS448. The inclusion of these data in the YHRD allows their general use for forensic and other purposes.
Reduced graphene oxide/carbon double-coated 3-D porous ZnO aggregates as high-performance Li-ion anode materials
The reduced graphene oxide (RGO)/carbon double-coated 3-D porous ZnO aggregates (RGO/C/ZnO) have been successfully synthesized as anode materials for Li-ion batteries with excellent cyclability and rate capability. The mesoporous ZnO aggregates prepared by a simple solvothermal method are sequentially modified through a distinct carbon-based double coating. These novel architectures take unique advantage of the mesopores acting as space to accommodate volume expansion during cycling, while the conformal carbon layer on each nanoparticle buffers volume changes and the conductive RGO sheets connect the aggregates to each other. Consequently, the RGO/C/ZnO exhibits superior electrochemical performance, including remarkably prolonged cycle life and excellent rate capability. Such improved performance of RGO/C/ZnO may be attributed to synergistic effects of both the 3-D porous nanostructures and the RGO/C double coating. Electronic supplementary material The online version of this article (doi:10.1186/s11671-015-0902-7) contains supplementary material, which is available to authorized users.
In this study, we have focused on improving the reversible capacity and cyclability of ZnO by 3-D porous nanostructures and sequential surface modification through distinct carbon-based coating steps. The 3-D porous structures can benefit from the mesopores acting as free spaces to accommodate volume expansion during cycling. In addition, the double coating of reduced graphene oxide (RGO) and disordered carbon on both the micrometric and nanometric dimensions of ZnO aggregates, respectively, establishes a conductive network connecting the aggregates and rigid buffer layers for volume changes of ZnO nanoparticles. As a consequence, the RGO/C/ZnO nanocomposites can exhibit not only high reversible capacity with long cycle life but also enhanced rate capability.
The crystal structure and grain size of the ZnO aggregates were characterized by X-ray diffraction (XRD, D8 Advance: Bruker). The morphology was analyzed using a field-emission scanning electron microscopy (FE-SEM, SU70: Hitachi), and the carbon content was measured using a carbon, hydrogen, nitrogen, sulfur (CHNS) analyzer (Flash EA 1112: Thermo Electron Corp.). The nitrogen adsorption and desorption isotherms were obtained at 77 K (Micromeritics ASAP 2010), and the specific surface area and the pore size distribution were calculated by the Brunauer-Emmett-Teller (BET) and the Barrett-Joyner-Halenda (BJH) methods, respectively.
For the electrochemical characterization, the active materials were tested using coin-type half cells (2016 type) with a Li counter electrode. The composition of the electrode was the same for all of the samples and consisted of the active material, super P carbon black, and a polyvinylidene fluoride binder in a weight ratio of 3:1:1; the geometric area of the electrode was 0.71 cm². The specific capacity of the cell was calculated based on the carbon content from the CHNS analysis, assuming that the carbonaceous coating has the same theoretical capacity as graphite (372 mAh/g). The minor contribution from the conductive additive (super P carbon black) was excluded. The electrolyte contained 1 M LiPF6 in ethylene carbonate and diethylene carbonate (1/1 vol.%) (Panax Etec). Electrochemical impedance spectra (EIS) were measured using a potentiostat (CHI 608C: CH Instrumental Inc.) after 2 cycles, and the applied voltage was 0.5 V with an AC amplitude of 5 mV in the frequency range from 1 mHz to 100 kHz.
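The following minimal sketch illustrates this capacity accounting under the stated assumption (372 mAh/g for the carbonaceous fraction); the function name and the numerical inputs are placeholders for illustration, not values reported in this work.

def zno_specific_capacity(q_composite, carbon_fraction, q_carbon=372.0):
    """Estimate the capacity contributed by ZnO alone (mAh/g of ZnO),
    given the measured composite capacity (mAh/g of composite) and the
    carbon mass fraction from CHNS analysis."""
    zno_fraction = 1.0 - carbon_fraction
    return (q_composite - carbon_fraction * q_carbon) / zno_fraction

# Hypothetical example: 600 mAh/g composite capacity, 15 wt% carbon
print(f"{zno_specific_capacity(600.0, 0.15):.1f} mAh/g (ZnO basis)")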
Results and discussion
The synthetic processes for the RGO/C double-coated ZnO aggregates are illustrated in Figure 1a. The solvothermal method initially produced approximately 25-nm-sized nanoparticles, which afterwards aggregated into the 3-D porous ZnO. After conformal carbon coating on the surface of each ZnO nanoparticle, the carbon-coated ZnO (C/ZnO) was wrapped by graphene oxide (GO) sheets. The positively charged C/ZnO, prepared through surface modification by APTES, attracts the negatively charged GO, thereby resulting in the GO/C/ZnO nanocomposites [36]. The final annealing process gives rise to the reduction of graphene oxide (RGO), establishing a three-dimensional network that renders well-connected electron percolation among the C/ZnO aggregates. The bare ZnO (Figures 2a and 3c) clearly shows porous microspheres that consist of approximately 25-nm-sized nanoparticles. The morphology of the C/ZnO (Figure 2b) resembles that of the bare sample. Both the RGO-wrapped ZnO (RGO/ZnO) and the RGO/C/ZnO are covered by, and connected to each other through, the soft RGO sheets, providing facile electron conduction (Figure 2c,d and Additional file 1: Figure S2a,b).
All of the diffraction peaks are indexed to ZnO with the hexagonal wurtzite structure (JCPDS #36-1451) (Figure 3a), and the diffraction peak widths Δk (full width at half maximum) were fitted using double-peak Lorentzian functions for Kα1 and Kα2. The grain sizes of the samples were estimated by the Scherrer equation [53,54] and are listed in Table 1. It can be recognized that the conformal carbon layer prevents the growth of the ZnO nanoparticles during the annealing steps. The RGO sheets on the ZnO aggregates, however, were not as effective as the carbon layer, as expected, in terms of suppressing the grain growth of each nanoparticle (Table 1) [55]. The I(D) and I(G) intensities from the Raman spectra reflect the defective and sp² bonding characters of the carbon, respectively. A lower I(D)/I(G) ratio was observed in the RGO-coated sample than in C/ZnO, which indicates that RGO is richer in sp² bonding than the disordered carbon; this results in higher conductivity than in the disordered-carbon-coated samples. The Raman spectra of RGO/C/ZnO lie between those of C/ZnO and RGO/ZnO, proving that the RGO/C/ZnO is successfully modified by both the reduced graphene oxide and the sucrose-derived carbon (Figure 3b) [56][57][58][59][60].
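For reference, the Scherrer estimate takes the form D = Kλ/(β cos θ). A minimal sketch follows; the shape factor K = 0.9, the Cu Kα wavelength, and the peak values are assumptions chosen for illustration rather than the fitted parameters used in this work.

import math

def scherrer_grain_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Scherrer equation: D = K * lambda / (beta * cos(theta)),
    with beta (FWHM) in radians and theta = two_theta / 2."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical ZnO (101) reflection near 2theta = 36.3 deg, FWHM = 0.35 deg
print(f"D = {scherrer_grain_size(0.35, 36.3):.1f} nm")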
The porous nanostructures of the bare ZnO aggregates were also confirmed by BET and BJH (Figure 3d), showing a typical type-IV mesoporous structure [61]. The BET surface area of the ZnO aggregates amounts to 144.6 m 2 /g, and a pore distribution of approximately 3.5 nm was determined by the desorption curve (the inset of Figure 3d). The SEM image which shows a broken ZnO aggregate also indicates the porosity inside of the ZnO aggregates (Additional file 1: Figure S1), and the pores between primary particles are reflected in the BET analysis. The surface area and average pore size of the C/ZnO, RGO/ZnO, and RGO/C/ZnO are given in Additional file 1: Table S1 and Figure S3, and all the coated aggregates have mesoporous characteristics. These porous nanostructures can be beneficial both for the facile Li diffusion and free-space buffering during volume variation [22][23][24]58].
To identify the effects of the carbon-based modifications on the electrochemical performance, the bare ZnO, C/ZnO, RGO/ZnO, and RGO/C/ZnO were galvanostatically cycled in the range of 0.02 to 3.00 V (vs. Li+/Li) at a current density of 97.8 mA/g (= 0.1 C) (Figure 4a-e). For the first cycle, all the samples show a very high discharge capacity. It is well known that side reactions with the electrolyte, such as the formation of the SEI layer, occur extensively on the surface of the active material below 1 V during the first discharge, which results in a low coulombic efficiency, particularly for nanosized materials [62]. Interestingly, more vigorous side reactions could be observed in the case of the graphene modification [15,40]. Cyclic voltammogram (CV) curves in Additional file 1: Figure S4 confirm that only Li insertion below 0.5 V occurs, together with vigorous side reactions with the electrolyte. The bare ZnO suffers from a significant capacity loss after only 5 cycles. In terms of the composites, the capacity fading was more significant for the RGO/ZnO compared to the C/ZnO or RGO/C/ZnO, yielding a discharge capacity of approximately 218 mAh/g at the 50th cycle. The C/ZnO and RGO/C/ZnO, on the other hand, show more stable cycle-life performances, which indicates that the carbon layers effectively inhibit the massive aggregation of Zn/ZnO nanograins during cycling. The higher reversible capacity of the RGO/C/ZnO sample (approximately 600 mAh/g after 50 cycles) compared with the C/ZnO likely comes from the additional conductive RGO network connecting the aggregates.
Regarding the rate capability, the RGO/ZnO shows a dramatic capacity fade with increased current density, and hardly any capacity is observed at a current density of 1,956 mA/g (= 2 C) (Figure 4f). Meanwhile, the RGO/C double-coated ZnO and the C/ZnO exhibit reversible capacities of approximately 300 mAh/g and approximately 230 mAh/g, respectively, even at a rate as high as 3 C (2,934 mA/g). The kinetics involved in ZnO modified by RGO and/or C were evaluated by electrochemical impedance spectroscopy (EIS) with an equivalent circuit (Figure 5). The diameter of the semicircle can be approximately assigned to the charge-transfer resistance (Rct): the RGO/C/ZnO electrode exhibits a smaller Rct than C/ZnO or RGO/ZnO, indicating better electrochemical activity [57,60].
The RGO/C double-coated porous ZnO aggregates exhibit good cyclability, high specific capacity, and excellent rate capability, which are attributed to both the 3-D porous nanostructures and RGO/C-double coating of aggregates. First, the approximately 3.5-nm pores can provide a space to alleviate the volume expansion during cycling. Second, the carbon coating layer on each ZnO nanoparticle can buffer the volume expansion during lithiation. Therefore, the overall morphology during cycling can be preserved without much fracture of approximately 1-μm porous aggregates, as confirmed in Figure 6. Also, the 3-D network of graphene, wrapping around the C/ZnO porous powders, enhances the electronic conduction through the aggregates.
Conclusions
In this work, we have proposed the RGO/C doublecoated ZnO nanocomposites as an anode material with excellent electrochemical properties. The 3-D porous ZnO aggregates are facilely modified through distinct carbon-based coating steps via conformal carbon coating, GO wrapping, and thermal reduction. The approximately 32-nm-sized RGO/C/ZnO nanocomposites with approximately 1-μm porous powders exhibited superior electrochemical performance, including remarkable cycle life, high reversible capacity, and excellent rate capability. The enhanced electrochemical performance arose from the combination of unique properties of the mesopores acting as free space to accommodate volume expansion during cycling, conformal carbon layer on each nanoparticle surface buffering volume changes, and conductive RGO sheets connecting the aggregates to each other. The work introduced in doubly coated ZnO can be extended to the synthesis of other novel electrodes where the cycle life and rate capability are significantly associated with their mechanical failure and appropriate electronic conduction.
Additional file
Additional file 1: Supporting Information. Table S1. BET surface area and average pore size of the C/ZnO, RGO/ZnO, and RGO/C/ZnO. Figure S1. SEM image of the bare ZnO aggregates. Figure S2. SEM images of the (a) RGO/ZnO aggregates and (b) RGO/C/ZnO aggregates. Figure S3. N 2 adsorption/desorption isotherms of the C/ZnO, RGO/ZnO, and RGO/C/ZnO. The inset shows the pore-size distribution of these samples. Figure S4. Cyclic-voltammetry of (a) C/ZnO and (b) RGO/C/ZnO (0.001 to 3.0 V with the scan rate of 0.1 mV/s).
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
SW carried out the overall scientific experiments. HW synthesized and analyzed the structures of the ZnO aggregates. SL and CK obtained the SEM micrographs and prepared the schematic images. JK and SN participated in writing the manuscript and helped with the cell fabrication. JK worked on analyzing the electrochemical properties of the electrodes. SA and CK helped to improve the logical flow of the manuscript. BP gave valuable advice about the concepts, supervised the scientific logic in detail, and finalized the manuscript. All authors read and approved the final manuscript.
Forensic Log-Based Detection for Keystroke Injection "BadUSB" Attacks
This document describes an experiment whose main purpose is to detect BadUSB attacks that utilize external Human Interface Device (HID) hardware gadgets to inject keystrokes and acquire remote code execution. One of the main goals is to detect such activity based on behavioral factors and to allow anyone with a basic set of cognitive capabilities, regardless of whether the "user" is a human or a computer, to identify anomalous speed-related indicators, but also to correlate such speed changes with other elements, such as commonly abused processes like PowerShell being launched in close temporal proximity, and PnP device events occurring in correlation with loaded driver images.
Introduction
For our detection purposes, we need to consume events coming directly from low-level components of the Windows OS. We took into consideration older publications, such as the abuse of the USB2 and USB3 ETW providers as a "keylogger" (Microsoft-Windows-USB-USBPORT and Microsoft-Windows-USB-UCX). However, due to technical reproduction issues, the fact that logging to a file would need to be synchronous and direct, and the amount of customization that would be required on the capture side, we decided to utilize an upper filter driver for one of the main parts of the proof-of-concept implementation. The keyboard captures should be accompanied by a timestamp, to give us an idea of how far apart they are and whether the keystroke rate represents a human user. The second part can easily be accomplished via ETW, despite its asynchronicity and buffer-based limitations, such as logs being flushed together and events being mixed; this will not cause any issue for us as long as the timestamps and data are not corrupt. A final note to keep in mind with such captures is that the privacy of the data needs to be taken seriously: in our case we only need to know that a key was pressed and whether it was a "Lock" key; if it was not, it should not be transmitted through file writes or ETW from the driver to user mode. A past example to avoid is the HP Synaptics keylogger distribution. Finally, cognitive capabilities will be needed: given that the product of the POC is log-related, by looking at it as a timeline one will understand the anomalies between a human typing at normal speed and a device typing abnormally fast and launching potentially malicious processes and components.
Malicious BadUSB Attacks Today
The most well-known BadUSB attack vector is probably the commercial "Rubber Ducky", which initially started as a sysadmin gadget to automate mundane tasks. This platform evolved into the most notorious attacker gadget, with a series of community-backed payloads whose main capabilities include dual usage as a USB stick, data exfiltration, HID interaction (keystroke injection), and even its own scripting language, DuckyScript. This is backed by a USB 2.0 hardware interface and support for USB-C. Some of the most advanced features include copying the payload to itself, "OUT endpoint" usage via "Lock key" spamming, Keystroke Reflection, and even features like VendorID and ProductID spoofing. For the purposes of this experiment, our go-to tool is the Rubber Ducky's latest version as of this writing, and various payloads will be employed across all our experiments, but mainly one launching PowerShell to dump credential files. In most real-life cases, PowerShell or other LOLBins will be used to run code, and files will be dropped and executed. We should keep in mind the evasion features, as they may bypass "hardcoded" detections, but also, in the case of sleeps, abnormally alter the keystroke timeline and introduce a cognitive anomaly.
Event Tracing for Windows and its Drawbacks
As discussed previously, one of our main sources of information will be ETW. Initially introduced as a debugging feature, ETW gained a lot of attention because it was easy to use, had a large number of default and third-party providers, Microsoft introduced PatchGuard-compliant kernel API hooks with it, and, in general, it could easily provide vast amounts of telemetry from user-mode and kernel-mode providers. Although it may sound tempting, this mechanism is simply not a silver bullet for all kinds of detections and telemetry ingestion. Below you can find a summary of the drawbacks of the mechanism. To further elaborate, we should consider a hypothetical example of "bad" usage: suppose we would like to monitor local memory modifications, such as allocations and re-protections, and even scan memory for possibly malicious patterns and PIC (position-independent code). Below are a few empirical consequences faced when dealing with real-life production environments that can reduce effectiveness and make one's life more difficult. We should keep in mind that for such an experiment, ETW TI (Threat Intelligence) is used.
Cons
• Reading from a process that has already exited and whose PID (Process Identifier) was re-issued, because the event arrived "late".
• Attempting to find a module in memory that is not yet loaded or was removed during a short timeframe, excluding NTDLL, which is a special case both because it is always present and because of Known DLLs behavior in terms of position. This situation may cause access violations, given the unstable and non-synchronous way loaded modules are tracked.
• Performance impact and an overwhelming load of produced data to be analyzed, given that the monitored APIs may be used by the Windows loader, causing massive overhead and difficulties.
• Applications that reside inside the main application, such as additional plugins and anti-exploit agents, may increase the overhead even more.
Summing up the purposes of the mechanism, one should think twice before selecting ETW for a task, by considering whether timely processing of the events is of critical importance. Regarding the practical implementation, in order to avoid needless development overhead and also avoid APIs like "Tdh*", we will resort to using "KrabsETW", which provides a slick OOP wrapper around the ETW libraries and allows for easy consumption of events from various providers, including the trace sessions of the "Kernel Logger", as well as parsing them effectively along with any special properties that may need to be requested, such as stack traces.
Upper Filters for Keyboards
Keyboard keyloggers are a classical example of introductory projects in kernel development. There are various paths one could follow to achieve the desired result. Quite interestingly, we started from one specific approach and transitioned to another, more intricate one to achieve higher levels of functionality, stability, and effectiveness. The initial approach was setting an upper filter device and a completion routine that intercepts and logs keyboard "MakeCodes" from PKEYBOARD_INPUT_DATA structure pointers. The code also hooks some other IRP dispatch routines to ensure overall proper functionality. We should note that this code is non-PnP so far and targets kbdclass, the class driver under which all kinds of port/miniport driver pairs exist for keyboards, regardless of their type (PS/2, USB, etc.). Also, a bit of waiting happens during the unload routine to ensure all IRPs have been processed. However, for both driver cases, the unloads are very basic and do not support PnP, which may (and will) result in inconsistencies. Setting the interception completion routine happens via the IoSetCompletionRoutine call inside the dispatch routine for IRP_MJ_READ. In general, such a non-PnP-aware filter driver may result in various issues; therefore, an alternative was chosen. This alternative utilizes a KMDF driver that "listens" to the PnP manager for new devices. It is worth noting that one should register this filter as an "UpperFilter" above kbdclass inside the registry, under the appropriate GUID. The code below sets the main hooking routine, after the appropriate callbacks and data have been set up, upon calling WDF_DRIVER_CONFIG_INIT to listen for devices, after initializing WDF_OBJECT_ATTRIBUTES, and even after having set up the WDF driver device. In the code below, we create a device with our extension data. To put it simply, when a new device is added, we intercept it and attach ourselves as a filter; then, using a modified WDF_IO_QUEUE_CONFIG structure with our own PFN_WDF_IO_QUEUE_IO_INTERNAL_DEVICE_CONTROL callback and by creating an I/O queue through WdfIoQueueCreate, we essentially get a foothold so we can use WdfRequestRetrieveInputBuffer and the appropriate driver contexts, and finally hook CONNECT_DATA's ClassService routine with "ServiceCallbackDummy". A function driver calls the class service callback in its ISR dispatch completion routine; the class service callback transfers input data from the input data buffer of a device to the class data queue. These conditions make it a prime hooking target. You can see below the code responsible for placing the hooking routines themselves; more specifically, we change the device and the service callback to our own. Below you can find the relevant callback we spoofed previously, so we can log keystrokes via the "MakeCode" intercepted through the buffer passed down the stack. The concept behind the code is forwarding the request to the next driver after we have enqueued a "SafeLog" routine that takes all the precautions needed to safely log the keyboard code to our file in a multi-threaded environment. To sum up, this stealthier, less common, and arguably more "hacky" hooking approach was employed to increase the chances of the keyboard-monitoring driver being more universal and stable across all kinds of Windows OS installations, whether in a virtual machine or on a physical machine, unlike its predecessor.
Microsoft-Windows-Kernel-Process and Its Use
We decided to utilize the aforementioned ETW provider to collect information about image loads (including drivers) and process-creation events, and we limited ourselves to only these event categories. Below you can find example code for adjusting and enabling providers on a certain trace session, which acts as the glue between the consumer and the provider. We are interested in writing easily parseable logs that include timestamps, image names, and process IDs where applicable. We utilize KrabsETW's parsing capabilities and created a callback handler to assist us with the process of creating logs. The fragment below serves as a simplistic example of how we chose to handle the event data. The "FileLog" class creates and synchronously locks the file where we output our logs, and it also handles the text processing. We use KrabsETW's default parser code to extract the event data we need and assign them to variables with proper initial values; we finally work with a std::wstring and write our log from a C-string format.

std::wstring procIdWstr = std::to_wstring(procId);
std::wstring logStr = L"ProcessLog:" + timeStamp + L":" + procIdWstr + L":" + imageName + L":EndProcessLog\n";
FileLog.LogToFile(logStr.c_str());

Based on how we architected this initial provider, we decided to extend the architecture while maintaining the same backbone for additional providers or features we may want to add. An example graph of the architecture is provided below. Essentially, our "ETW Logger Manager" class sets the appropriate callbacks and event limitations that handle the data and log it appropriately using the "File Logger" class.
Finally, we shall provide some raw output from the tool to give the reader an idea of what to expect. The "ImageLog" entries represent the output received when a Razer mouse was attached, and the "ProcessLog" entries represent random process executions as an example.

ImageLog:2968525321:\Device\HarddiskVolume3\Windows\System32\drivers\hidusb.sys:EndImageLog
ImageLog:1597723157:\Device\HarddiskVolume3\Windows\System32\drivers\RzDev_0084.sys:EndImageLog
ImageLog:1659522856:\Device\HarddiskVolume3\Windows\System32\drivers\RzCommon.sys:EndImageLog

Microsoft-Windows-Kernel-PnP and Its Use
The aforementioned provider contains various events related to the PnP system and will allow us to gain some extra information, or visibility if you will. The output by itself may not be self-explanatory and may even be chaotic at scale, but its final goal is to be used in conjunction with other metrics in a log correlation process, where its value will increase. Below you can see part of a generic log produced by the attachment of a Razer mouse; this will provide the reader with an idea of what to expect as output.
Although evading detections based on "hard-coded" data is out of scope for this study, it is worth noting that the "Rubber Ducky" provides spoofing capabilities for Vendor and Product IDs. Given the behavioral nature of the detection, and since we do not base our indicators on a single kind of data, we can confidently say that even if such data are spoofed, we would still be able to identify forensic footprints.
Find below the relevant raw ETW-originating logs in chronological order, with the timestamps from ETW converted to human-readable form. Based on our timestamps, we can now search through the keystroke log and identify an average count to see whether the "powershell" launch was followed by fast typing.
Based on a quick calculation, after converting the timestamps to Windows time, we identified a large number of keys pressed within a span of approximately 10 seconds, starting just a moment prior to the spawning of "powershell", exceeding average human typing capabilities. We could also count the keystrokes pressed even before the "PowerShell" command prompt was launched, but even with this approximation the point has been proven, and malicious BadUSB activity is highly probable to have happened based on this behavior alone. If we consider the extra information from the device, we can be confident in our conclusion and claim to have successfully identified a BadUSB attack "post-mortem".
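A minimal post-mortem correlation sketch in Python follows, assuming the log formats shown earlier (ProcessLog entries and a keystroke log reduced to one timestamp per key press, already converted to seconds); the rate threshold, window length, and function names are illustrative assumptions, not part of the proof-of-concept tooling itself.

from bisect import bisect_left, bisect_right

HUMAN_MAX_KEYS_PER_SEC = 8      # generous upper bound for sustained human typing
WINDOW_SECONDS = 10             # window examined around each process launch

def parse_process_log(lines):
    """Yield (timestamp, pid, image) tuples from lines of the form
    'ProcessLog:<ts>:<pid>:<image>:EndProcessLog' (timestamps assumed in seconds)."""
    for line in lines:
        line = line.strip()
        if line.startswith("ProcessLog:") and line.endswith(":EndProcessLog"):
            body = line[len("ProcessLog:"):-len(":EndProcessLog")]
            ts, pid, image = body.split(":", 2)
            yield float(ts), int(pid), image

def flag_badusb(keystroke_times, process_lines):
    """Return powershell launches whose surrounding keystroke rate exceeds
    what a human could plausibly type."""
    keystroke_times = sorted(keystroke_times)
    alerts = []
    for ts, pid, image in parse_process_log(process_lines):
        if "powershell" not in image.lower():
            continue
        lo = bisect_left(keystroke_times, ts - WINDOW_SECONDS)
        hi = bisect_right(keystroke_times, ts + WINDOW_SECONDS)
        rate = (hi - lo) / (2.0 * WINDOW_SECONDS)
        if rate > HUMAN_MAX_KEYS_PER_SEC:
            alerts.append({"time": ts, "pid": pid, "image": image, "keys_per_sec": rate})
    return alerts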
Graphical Representation of Keystroke Peak
In order to help us get an idea of the actual spike, and to embrace a more friendly approach towards human cognitive capabilities, we utilize a graph to represent the keystrokes across time on a "mass scale". Notice, before the peak, the line of normal typing activity compared to when the "Rubber Ducky" was plugged in. The blue graph can be safely compared to the ETW logs and represents the first forensic situation we are investigating. Please bear in mind that the keystroke number is an estimation and not a definitive count.
Conclusion
This approach is far from fool-proof and can increase in difficulty as the metrics increase, including the size of the logs and the attacker's attempts to obscure the timeline via sleeps and other strategies. This approach could have
A medium‐weight deep convolutional neural network‐based approach for onset epileptic seizures classification in EEG signals
Abstract Introduction Epileptic conditions can be detected in EEG data seconds before they occur, according to evidence. To overcome the related long-term mortality and morbidity from epileptic seizures, it is critical to make an initial diagnosis, uncover underlying causes, and avoid applicable risk factors. Progress in diagnosing onset epileptic seizures can ensure that seizures and the resulting damage are detectable at the time of manifestation. Previous seizure detection models had problems with the presence of multiple features, the lack of an appropriate signal descriptor, and time-consuming analysis, all of which led to uncertainty and differing interpretations. Deep learning has recently made tremendous progress in categorizing and detecting epilepsy. Method This work proposes an effective classification strategy in response to these issues. The discrete wavelet transform (DWT) is used to decompose the EEG signal, and a deep convolutional neural network (DCNN) is used to diagnose epileptic seizures in the first phase. Using a medium-weight DCNN (mw-DCNN) architecture, we use a preprocessing phase to improve the decision-making method. The proposed approach was tested on EEG signals collected from the CHB-MIT Scalp EEG Database. Result The results of the studies reveal that the mw-DCNN algorithm produces proper classification results under various conditions. To address the uncertainty challenge, K-fold cross-validation was used to assess the algorithm's repeatability at the test level, and the accuracies were evaluated in the range of 99%-100%. Conclusion The suggested structure can assist medical specialists in analyzing the EEG signals of epileptic seizures more precisely.
INTRODUCTION
Epilepsy is a distressing disease that affects the nervous system, and recurrent seizures appear as the condition continues (Capovilla et al., 2016). In some definitions, seizures are described as sudden and transient abnormalities, leading to hallucinations, loss of consciousness, and whole-body convulsions (de Lange et al., 2016). Mutations in molecular mechanisms are one of the causes of onset epileptic seizures, together with brain damage, malignant brain tumors, stroke, and infection. Clinical statistics indicate that epilepsy affects 50 million people worldwide, and this neurological disorder is a leading cause of mortality after Alzheimer's disease and stroke (Hill et al., 2015).
Human societies and the families of patients with epilepsy pay exorbitant costs for care each year (Yu et al., 2019). These challenges justify the requirement for a novel approach to handling seizures, one that serves both the patient and the family members responsible for dealing with the impacts and consequences of seizures. A proper and accurate diagnosis of this disease will help the patient and, at the same time, significantly reduce the staggering costs of treatment and care for epileptic patients. The main reason why many automated methods are used in the early diagnosis of epilepsy is to find a way to predict the disease smoothly. The automatic identification of epileptic EEG signals is a helpful method for epileptic seizure diagnosis. Recent deep learning models have not fully addressed diagnosis and disorder classification, and they may discard the nonlinear and nonstationary characteristics of epileptic EEG. A therapeutic model is also needed, and it should be capable of recognizing seizures at their onset stage. Such models are grouped by the treatment used to slow the progression of seizures: local electrode stimulation (Li & Cook, 2018), thermal stimulation (Fernandes et al., 2018), or neurochemical stimulation (Wang et al., 2018).
Although intelligent approaches to evaluating epileptic seizures have been widely proposed, deep learning techniques have been adopted to make better use of the input signals and to improve classification efficiency. The advantages of deep architectures are numerous: they do not require hand-crafted feature extraction from the signal or image, making it possible to retain the original image and signal data. Additionally, deep network architectures continue to be developed in response to the points indicated above. Recently, deep neural networks (DNNs) have been trained using appropriate feature extraction and transformation strategies to attain the necessary performance in classifying epilepsy occurrences (Rezaee et al., 2022). Deep learning techniques are engaged in various fields, such as disease classification based on physiological signals, speech recognition, brain-computer interface (BCI) systems, and other related items (Kiral-Kornek et al., 2018; Nejedly et al., 2019). Accordingly, the use of deep learning to study and analyze physiological signals is seen in many studies (Cho & Hyun-Jong, 2020; Antoniades et al., 2016; Chowdhury et al., 2021).
Epilepsy is regarded as the most chronic, common, and severe neurologic disease, and therefore, some studies have utilized deep learning to recognize and process EEG signals. In Turner et al. (2014), the Deep Belief Networks (DBNs) were employed to distinguish the seizure events using recorded EEG signals from the multichannel analysis. Wulsin et al. (2011) also demonstrated that DBN structure could be used in a semisupervised classification procedure for modeling patterns and analyzing the EEG signals. Several researchers have proposed CNN designs to diagnose seizures using EEG signals (Johansen et al., 2016;Antoniades et al., 2016;Li et al., 2016;Amin & Kamboh, 2016). A robust deep learning technique based on stacked auto-encoders (SAE) and the maximum entropy correlation function was presented for seizure detection (Qi et al., 2014). In Li et al. (2015), it was demonstrated that a method based on distribution entropy (DistEn) outperformed standard entropy approaches for detecting epileptic seizures via electroencephalogram (EEG) signals, particularly for short data lengths.
The EEG signals from normal and epileptic episodes were evaluated using an empirical mode decomposition (EMD) method (Pachori et al., 2015). The EMD generates intrinsic mode functions, which are composed of a succession of modulated components.
According to , epileptic incidents can be diagnosed using multivariate oscillatory EEG data on adaptive frequency scales. The empirical wavelet transform (EWT) was applied to assess the amplitudes and frequencies of multivariate signals. The remainder of the article is organized in the following manner.
The deep learning and decomposition process is discussed in Section 2. The overview and the proposed approach for detecting epileptic seizure are detailed in Sections 3 and 4. Finally, in Sections 5 and 6, the experimental results and conclusion are discussed.
DEEP LEARNING, DECOMPOSITION, AND SIGNAL ANALYSIS
Similar to neural network (NN) designs, the final output of a DNN such as a CNN is determined by the biases and weights of the preceding layers in the network architecture, and these biases and weights are updated during training. In the convolution process, the feature map of the current layer is obtained by convolving the feature map of the previous layer with the kernel. Nonlinear downsampling techniques such as max pooling can be used to reduce the size of the feature maps generated by the convolutional layers. When a feature map enters the max-pooling layer, the max operation is applied to each pooling region, as in Equation (1), which keeps only the largest element of the region:

$p_j = \max_{i \in \mathrm{Region}_j} a_i$ (1)
in which Region_j denotes the pooling region j contained within feature map a, i denotes the index of each element in that region, and p denotes the pooled feature map. Multiclass classification problems can be solved using softmax regression, which maps the network output to class probabilities as in (2):

$P(y = j \mid x) = e^{\theta_j^{\top} x} \big/ \sum_{k=1}^{K} e^{\theta_k^{\top} x}$ (2)

The cost function is minimized by training the model parameters θ. Nonlinear processing layers are implemented in deep learning structures for feature extraction and transformation. Enabling hardware (Elhosary et al., 2019), the architecture of nonlinear designs (Birjandtalab et al., 2017), and model fine-tuning procedures (Ullah et al., 2018) are among the growing research directions around deep structures. The seizure patterns extracted from EEG signals may differ from one patient to another.
The effects of extracted patterns from EEG signals in patients' seizure may be similar to the influence in nonseizure disease in other patients (Rezaee et al., 2016;Hassan et al., 2020;Dash et al., 2020).
Because seizure and nonseizure patterns can look similar in the raw signal, decomposing the EEG data into different frequency subbands yields significantly more useful information and helps to obtain proper features. Accordingly, as shown in Figure 1, we utilized the DWT to decompose the sample signals.
MODEL OVERVIEW
The proposed model's overall structure is depicted in Figure 2. The DWT technique is employed to decompose sample signals to obtain various subbands to implement the proposed method.
F I G U R E 1
We have used the discrete wavelet transform (DWT) to decompose EEG signals into multiple subbands. The down arrow denotes downsampling by 2
F I G U R E 2 The schematic of introduced approach for identifying the onset epileptic seizures
The training procedure is determined by the input signals and their subband decomposition, as shown in Figure 2. Signal windowing and correlation mapping are then performed. In the test phase, the trained deep network is applied to obtain the appropriate response; the technique is evaluated on the test data independently of the feature maps constructed during the training phase.
Moreover, the DWT has a substantial advantage over other transforms, such as the Fourier transform, in that it extracts both time and frequency information from a signal simultaneously (Hadadnia & Rezaee, 2013;Subasi et al., 2019). The initial step in wavelet decomposition is to pass a time series signal through a range of high-and low-pass filters. It is desirable to employ DWT because of its speed of processing and ease of implementation.
The deep learning structure is employed to classify patterns after the signal has been analyzed, improving the classification accuracy for the epilepsy disorder. We present our algorithm in separate training and test steps. The introduced model consists of wavelet decomposition and a robust framework for epileptic seizure classification. The deep learning structure is configured as a medium-weight deep convolutional neural network (mw-DCNN).
METHODOLOGY
As presented in Figure 3, we convert the EEG signals into various subbands, which are preprocessed by the DWT strategy. Each decomposition stage uses two filters followed by downsampling by a factor of two: g[.] is the high-pass (wavelet) filter, and h[.] is its mirror low-pass filter, employed as the second filter in the decomposition procedure.
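As a concrete illustration of this filter-bank decomposition, the following Python sketch uses the PyWavelets library to split one EEG channel into subbands; the choice of the db4 mother wavelet, five decomposition levels, and the synthetic input are illustrative assumptions, since the text does not specify them.

```python
import numpy as np
import pywt  # PyWavelets

FS = 256  # sampling rate of the recordings (Hz)

def decompose_eeg(channel, wavelet="db4", level=5):
    """Filter-bank DWT of a single EEG channel.

    Each level applies the low-pass h[.] and high-pass g[.] filters
    followed by downsampling by 2, as described above. Returns the
    final approximation plus the detail coefficients, coarse to fine.
    """
    coeffs = pywt.wavedec(channel, wavelet, level=level)
    names = ["A5", "D5", "D4", "D3", "D2", "D1"]
    return dict(zip(names, coeffs))

# Toy stand-in for 30 s of one EEG channel.
x = np.random.randn(30 * FS)
for name, band in decompose_eeg(x).items():
    print(name, band.shape)
```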
To segment and analyze the nonstationary EEG signals, an overlapping window technique is used, which slides over the data with a predefined size and a preset increment w. In the proposed design, we applied a window size of 300 ms with an increment of 20 ms. According to Canolty et al. (2006), the EEG signal is a correlated time series.
As a result, the proposed approach initially feeds raw EEG data to the CNN. Additionally, we represent each sample as a two-dimensional array, with time steps along the width and EEG electrodes along the height. Some studies (Wei et al., 2018; Prathaban & Balasubramanian, 2021) have employed the same technique to reshape the input.
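A minimal sketch of this windowing and reshaping step is given below; the 23-channel layout and the exact sample counts are assumptions chosen to match typical CHB-MIT recordings at 256 Hz.

```python
import numpy as np

FS = 256                     # Hz
WIN = int(0.300 * FS)        # 300 ms window  -> 76 samples
STEP = int(0.020 * FS)       # 20 ms increment -> 5 samples

def window_record(record):
    """Slice a (channels, samples) EEG record into overlapping
    2-D windows of shape (channels, WIN), one per start position."""
    n_ch, n_samp = record.shape
    starts = range(0, n_samp - WIN + 1, STEP)
    return np.stack([record[:, s:s + WIN] for s in starts])

record = np.random.randn(23, 30 * FS)   # 23 electrodes, 30 s of data
windows = window_record(record)
print(windows.shape)                     # (n_windows, 23, 76)
```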
The proposed learning
The suggested DCNN model is a hierarchical design that includes three aggregation stages. The convolutional layer in the first stage is used to extract robust low-level features from the sample EEG signal.
The two additional stages develop higher-level characteristics, and each feature map is formed by convolving several input maps. The output can be defined as (3):

$x_j^{\ell} = f\Big(\sum_{i \in M_j} x_i^{\ell-1} * k_{ij}^{\ell} + b_j^{\ell}\Big)$ (3)
F I G U R E 3 Utilization of h[n] and g[n] filters to decompose EEG signals (x[n]) into subbands
where ℓ denotes the layer, k_ij is the convolution kernel, b_j is the bias, and M_j represents the collection of input maps.
Also, the value of the jth feature map at position (x, y) in layer i, v_ij^xy, is obtained through a sigmoidal activation as in (4):

$v_{ij}^{xy} = \mathrm{sig}\Big(b_{ij} + \sum_{p=0}^{P_i-1} \sum_{q=0}^{Q_j-1} w_{ij}^{pq}\, v_{(i-1)j}^{(x+p)(y+q)}\Big)$ (4)

where b_ij is the bias of the feature map, sig(.) is the sigmoidal function, P_i and Q_j are the height and width of the kernel, and w_ij^pq is the kernel weight. A stochastic pooling layer was also employed; it reduces variance while selecting a strong feature value within a region of the EEG samples, and it helps to prevent the over-fitting problem.
Following the combination and convolution layers, a large number of intermediate feature maps are produced. The network is trained to analyze the signal status by importing all of the training data and defining the epilepsy and nonepilepsy labels. Eventually, by joining these layers to the fully connected Softmax layer, decision-making is possible: the feature maps serve as its input, and the label is assigned at the training step. During training, the system tries to determine the best unknown parameters, namely the filter weights and layer coefficients, so that the least error is reached in the classification step. Gradient descent, which consists of two steps, forward feeding (FF) and error backpropagation, is employed for training the network (Rezaee et al., 2020). First, consider a classification problem with c classes and N training signals. The squared error function (SEF) is then given by (5):

$E_N = \frac{1}{2} \sum_{n=1}^{N} \sum_{k=1}^{c} (T_{kn} - Y_{kn})^2$ (5)

where T_kn and Y_kn are the kth dimensions of the label of the nth sample and of the corresponding prediction returned by the CNN model. We utilize a low number of layers; notably, two DWT decomposition levels are applied to the EEG signals fed to the DCNN model. The proposed structure of the introduced network is shown in Figure 4. We implement variants with 4-12 layers: the first structure has 3-5 layers (lw-DCNN), the second structure has 5-8 layers (mw-DCNN), and the third structure has 8-12 layers (hw-DCNN). The structure of the CNN layers, the filter sizes, and the number of filters for the convolution and pooling operations are presented as layers 1-7. Convolutional layers (i.e., 1, 2, 4, and 6) are Conv1, Conv2, Conv3, and Conv4 with 10 × 1 (20 filters), 20 × 23 (20 filters), 10 × 20 (40 filters), and 10 × 40 (80 filters), respectively. Stochastic pooling layers (i.e., 3 and 5) are 2 × 1 (stride 2) and 2 × 1 (stride 2), respectively. The decision layer is a Softmax or dense layer with 2 or 3 classes.
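The sketch below shows one way such a medium-weight network could be assembled in Keras; the input shape, the use of 1-D convolutions, and max pooling standing in for the stochastic pooling layers are simplifying assumptions, and the layer sizes only loosely follow the filter counts listed above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mw_dcnn(input_shape=(76, 23), n_classes=3):
    """Medium-weight DCNN sketch: four convolutional stages with
    pooling and a softmax decision layer."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv1D(20, kernel_size=10, padding="same", activation="relu"),
        layers.Conv1D(20, kernel_size=10, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2, strides=2),  # stand-in for stochastic pooling
        layers.Conv1D(40, kernel_size=10, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2, strides=2),
        layers.Conv1D(80, kernel_size=10, padding="same", activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_mw_dcnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```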
Correlation map
The
Data set
The CHB-MIT scalp EEG database, collected at Children's Hospital Boston (CHB) and distributed via PhysioNet, was used in this study (Goldberger et al., 2000).
Assessments
We used accuracy, sensitivity, and specificity criteria to evaluate the epilepsy seizure detection model according to Equations (6)-(8):

Accuracy = (TP + TN) / (TP + TN + FP + FN) (6)
Sensitivity = TP / (TP + FN) (7)
Specificity = TN / (TN + FP) (8)

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
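These criteria follow directly from the confusion-matrix counts; a small helper illustrating the computation is sketched below (the counts in the example call are made up).

```python
def seizure_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall), and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts only.
print(seizure_metrics(tp=480, tn=9500, fp=60, fn=25))
```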
Tables 1-4 depict the outcomes of the classification scheme over five trials of the design, in two categories, for the theta, gamma, beta, and alpha subbands, respectively. The results in these tables show the effect of the decomposition strategy on the input signal. The mean accuracies and approximate standard deviations were also evaluated for low, medium, and high numbers of layers, according to the weights used by each structure. In general, the initial signal decomposition increases the classification performance in all cases.
In other words, the DCNN design with initial decomposition achieves the best accuracy.
If the length of each window on the signal is assumed to be 300 ms, 165 windows will be obtained for a 30-s signal with 40% overlap between windows. Since the sampling frequency is 256 Hz, each window will have 75 step times for feature extraction. This means that we have 165 × 200 windows for each subject, and since there were 24 subjects in the test, the 3 classes consisted of about 790,000 windows.
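The window count quoted above can be reproduced, up to rounding, from the stated parameters; the short calculation below is only a sanity check under the assumption that a 40% overlap corresponds to a 180 ms hop.

```python
fs = 256                 # Hz
win_ms = 300             # window length
hop_ms = win_ms * 0.6    # 40% overlap -> 180 ms hop
record_ms = 30_000       # 30-s signal

n_windows = int((record_ms - win_ms) / hop_ms) + 1
samples_per_window = int(win_ms / 1000 * fs)
print(n_windows, samples_per_window)   # ~166 windows, 76 samples each
```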
Higher frequencies are more common in abnormal epileptic conditions, in which EEG signal energy shifts from lower to higher frequency subbands before and during a seizure. Following wavelet decomposition of the EEG signal spectrum, features were extracted from each subband independently. Hence, onset epileptic seizures are easier to distinguish in the nonstationary signals, mainly due to their higher amplitudes. The selection of a proper wavelet and of the number of decomposition stages is also extremely important in any wavelet-based analysis of EEG signals. We computed the wavelet coefficients for all five EEG subbands. The confusion matrices (CM) accumulated over all 10 folds (CV = 10) are tabulated in Tables 1-4; they show that 98% of the three classes of EEG signals are correctly classified with respect to onset epileptic seizure.
The proposed classification structure classified the normal, onset epileptic seizure, and definite seizure EEG data sets with accuracies of 97%, 98%, and 99%, respectively. Overall, the EEG signals were classified with an accuracy of 99%, which is the final classification accuracy of the mw-DCNN across the different subbands. The resulting classification accuracy of the proposed mw-DCNN is quite high and therefore has potential for real clinical application. The receiver operating characteristic (ROC) curve is used to assess the accuracy of a continuous measurement for predicting a binary outcome. The basic aim of plotting the ROC curve is to demonstrate the trade-off between the false-positive fraction (FPF) and the true-positive fraction (TPF) as the cutoff c varies. We investigate two classes (i.e., nonseizure and onset epileptic seizure) in our study when reporting ROC curves for test and unseen EEG signals. There are several summary measures of accuracy and sensitivity associated with the ROC curve, such as the partial area under the curve.

TABLE 4 Evaluations of efficiency under various data-division conditions, with and without the DWT strategy, and with variation in the number of deep layers, for the theta subband
DISCUSSION
Compared to similar methods, the proposed algorithm is effective for seizure analysis of EEG signals at a lower computational cost. The accuracy obtained by the proposed mw-DCNN design is higher than that of current approaches for EEG onset epileptic seizure classification, and the specificity and sensitivity values also compare favorably with similar approaches.
Some studies (Kaleem et al., 2018; Acharya et al., 2018; Supratak et al., 2014; Stober, 2017) have proposed ways to interpret the features and weights learned by CNN models; in other words, they strove to discover which EEG signals contribute most to the convolution maps. Our method likewise aids in visualizing the specific orientation of band-power features following decomposition of the EEG signals. Furthermore, we can apply correlation maps as the input of the deep learning technique to classify onset epileptic seizures and similar EEG signals. It should be noted that windowing and decomposition of the EEG signals facilitated the classification of these nonstationary signals for onset seizure detection. Our method also achieved satisfactory specificity and sensitivity, which indicates that the algorithm generalizes well. Moreover, combining various machine learning approaches and optimization algorithms appears to improve classification performance (Tavasoli et al., 2021), and combining deep learning algorithms can significantly improve the classification of various epilepsy signals (Abdelhameed & Bayoumi, 2021).
The fundamental disadvantage of DWT is that it analyzes signals using a predetermined function, which limits its adaptability. Another concern is that laboratory-based real-time EEG recordings comprise both brain activity and noise signals. Additionally, EEG seizure patterns vary significantly between patients and even within the same patient over time.
One of the major benefits of this study is that it might be utilized in hospitals or clinics to automatically detect epileptic EEG patterns.
This capacity aids in the selection of antiepileptic medications as well as the determination of prognosis. The proposed technique, on the other hand, lowers human error and computing complexity while maintaining excellent classification accuracy.
CONCLUSION
We introduced a generic structure for the analysis and classification of onset epileptic seizures in EEG signals using a medium-weight deep CNN.
The introduced procedure is a system based on a DCNN technique with a medium-weight model and initial decomposition of input signals by the DWT method. The labels predicted by the proposed method are significantly correlated to the opinions of the neurologist, and thus, by applying unseen data, we overcame challenges such as uncertainty.
The accuracy of identifying epileptic seizures in the current study for a two-class problem, namely the presence or absence of disease, was estimated to be greater than 99%. In addition, the accuracy over repeated runs was estimated to be more than 98% on average. This method, which uses deep learning with a moderate number of layers, requires fewer signals for training and can therefore serve as a robust system in clinical research and the early detection of epileptic seizures.
The authors intend to continue developing the system in the future, focusing on real-time design and noise resistance. The authors will refine and incorporate the existing technique for announcing the initial seizure notice on a medical diagnosis platform into future research. In the future, the method may aid neurologists in detecting and treating the underlying neurological problem shown in the disease's EEG signal more successfully.
DECLARATION
None.
ACKNOWLEDGMENTS
Our sincere thanks go out to Tabriz University for providing the signal processing laboratory. To conduct this research, no funding was received from any organization.
Bifunctional Janus Silica Spheres for Pickering Interfacial Tandem Catalysis
Abstract Nature provides much inspiration for the design of multistep conversion processes, with numerous reactions running simultaneously and without interference in cells, for example. A key challenge in mimicking nature's strategies is to compartmentalize incompatible reagents and catalysts, for example, for tandem catalysis. Here, we present a new strategy for antagonistic catalyst compartmentalization. The synthesis of bifunctional Janus catalyst particles carrying acid and base groups on the particle's opposite patches is reported as is their application as acid‐base catalysts in oil/water emulsions. The synthesis strategy involved the use of monodisperse, hydrophobic and amine‐functionalized silica particles (SiO2−NH2−OSi(CH3)3) to prepare an oil‐in‐water Pickering emulsion (PE) with molten paraffin wax. After solidification, the exposed patch of the silica particles was selectively etched and refunctionalized with acid groups to yield acid‐base Janus particles (Janus A–B). These materials were successfully applied in biphasic Pickering interfacial catalysis for the tandem dehydration‐Knoevenagel condensation of fructose to 5‐(hydroxymethyl)furfural‐2‐diethylmalonate (5‐HMF‐DEM) in a water/4‐propylguaiacol PE. The results demonstrate the advantage of rapid extraction of 5‐hydroxymethylfurfural (5‐HMF), a prominent platform molecule prone to side product formation in acidic media. A simple strategy to tune the acid/base balance using PE with both Janus A–B and monofunctional SiO2−NH2−OSi(CH3)3 base catalysts proved effective for antagonistic tandem catalysis.
Introduction
Tandem catalysis, that is operating multiple, mechanistically distinct consecutive reactions in a 'one-pot' manner, is a highly attractive strategy to make catalytic processes more efficient. [1] This approach can eliminate costly separation steps and allows for overall process simplification, thereby increasing time, energy and resource efficiency. [2] When (multifunctional) solid catalysts are used for such tandem reactions, catalytically active structures that have no obvious incompatibility issues can simply be immobilized on a (functional) support material, for example by common deposition or chemical grafting methods. Typical examples include immobilization of noble-metal nanoparticles on zeolites [3] or graphitic carbon nitride. [4,5] However, for catalysts that are chemically incompatible, such as acid and base catalysts, spatial separation of the catalysts is necessary. Previous examples of spatially separating catalysts for heterogeneous tandem catalysis include stacked-shell materials such as core-shell, yolk-shell or multishelled hollow structures. [6][7][8][9][10][11][12][13][14][15] These materials have mainly been used for tandem reactions in onephase reaction media. Biphasic tandem catalysis with antagonistic (e. g., acid-base) bifunctional catalysts still remains largely unexplored, however.
Biphasic oil/water systems have been extensively investigated and have demonstrated their great use in numerous organic transformations, in separation and purification. [16] Despite their extensive application, a general drawback of these systems is the low reaction efficiency due to the limited liquid/ liquid interfacial area. A very efficient way to increase interfacial area, and hence reaction and/or extraction efficiency, is to place a solid at the liquid/liquid interface and create an emulsion. Recently, such solid particle-stabilized emulsions, so-called Pickering emulsions (PEs) have been receiving increased attention for their application in biphasic catalysis. [17][18][19] The solid particles at the boundary layer in PEs serve to compartmentalize and to protect the dispersed phase from the continuous phase, for example, allowing multiple reactions to be mediated consecutively. [20][21][22] The catalysts can then be dissolved in both phases, or the solid emulsifier itself can be endowed with catalytic properties. For the latter, the solids should then be bifunctional when acid-base catalysis is targeted. Janus particles consisting of two patches with distinct properties are particularly interesting materials in this respect. Although bifunctional Janus particles are increasingly being investigated as stabilizers for PEs, the opportunities they offer yet remain largely unexploited for PE tandem catalysis. [23,24] Recent studies strongly suggest that the geometry as well as the surface properties of Janus particles have a significant influence on their surface activity. [25,26] To date, Janus particles shaped as spheres, [27,28] dumbbells, [29] polygons [30] or nanosheets [31,32] have been reported for the successful stabilization of PEs. Janus spheres are ideal for designing interfaceactive solid catalysts, as the equilibrium orientation of the Janus boundary tends to get pinned at the interface when the nonpolar and polar hemispheres are exposed to the oil and water phases, respectively. [33][34][35] By placing catalytically active groups on the polar and/or the nonpolar hemispheres, the interfacial configuration of the Janus spheres can be precisely controlled, and thereby, the catalytic activity in a biphasic reaction system. [36][37][38] The available examples of Janus sphere use in PE catalysis are mostly limited, however, to monocatalytic materials bearing metal nanoparticle functionalities. The Janus nature of the particles is exploited to improve stability and single patch decoration allows the amount of metal nanoparticles required to be reduced. [29,37,39] Synthesizing Janus particles with antagonistic catalyst patches and opposite wettability on different patches of the Janus spheres is still a major challenge and no examples of this are, to the best of our knowledge, currently available.
Biomass valorization efforts could benefit from antagonistic tandem catalysis strategies to convert renewable biomass to high-value products. 5-Hydroxymethylfurfural (5-HMF), a key sugar-derived renewable platform molecule, is attracting much attention for the production of various biobased chemicals. [40] Significant effort has been devoted to the development of efficient methods for 5-HMF production and its conversion to value-added fine chemicals. [41][42][43] For example, the catalytic hydrogenation and oxidation of 5-HMF have been widely investigated [40] and 5-HMF derived furans have been used in fine chemistry. [44][45][46][47] Integrated tandem catalytic conversion of renewable sugars to value-added products via 5-HMF as intermediate nevertheless remains a significant challenge. [48][49][50] Here, we present a strategy for the synthesis of bifunctional Janus PE emulsifiers to specifically locate acid and base functionalities into the two distinct liquid phases of the emulsion. The approach involved immobilizing hydrophobized amine-functionalized silica particles in a wax-based PE allowed the exposed patch of the silica particles to be selectively etched and further modified to also bear complementary acid functionalities. The acid-base functionalized Janus silica spheres were able to stabilize w/o PEs and used as solid catalytic emulsifiers for the tandem catalytic dehydration-Knoevenagel condensation of fructose (Scheme 1). The transport of 5-HMF across the interface minimizes side reactions, such as organic acid and humins formation. [51] Results and Discussion
Synthesis of Janus silica with sulfonic acid and amine functional groups (Janus A-B)
Scheme 1. Schematic representation of the use of bifunctional Janus A-B spheres for tandem PE catalysis. Left: Janus A-B particles get pinned at the oil/water interface, spatially localizing the two catalyst functionalities in the polar (acid) and nonpolar (base) phases. Right: The dehydration-Knoevenagel condensation reaction of fructose. Blue: water droplet; yellow: continuous organic phase.

A schematic representation of the general synthesis strategy is given in Scheme 2. Bifunctional Janus silica spheres with asymmetric wettability were synthesized through a solidified wax Pickering emulsion method. The wax can serve as dispersed oil phase when the temperature is above its melting point, immobilizing the silica particles at the surface of the wax droplets. [52,53] Solidification at lower temperature then fixes the configuration of the silica particles, allowing the unprotected hemisphere exposed to the aqueous phase to be modified by chemical etching or grafting, for example, by silane chemistry. Figure 1a shows that the bare silica spheres synthesized using the Stöber method were highly uniform with an average diameter of 450 nm. At this size the particles are small enough to efficiently stabilize PEs while also still being large enough to easily visualize any morphology changes with TEM. The SiO2 particles were first grafted with (3-aminopropyl)triethoxysilane (APS; Scheme 2a) to obtain base-functionalized particles (SiO2−NH2) with an NH2 loading of 2.50 mmol g−1. The FT-IR spectrum of the particles (see the Supporting Information, Figure S1) shows both the OH-stretch vibration of the silanol groups (3688 cm−1) and the N−H stretch vibration of the amine groups (3411 cm−1).
As SiO2−NH2 is intrinsically hydrophilic (water contact angle of 80°; SiO2 precursor: 48.6°, Figure S4), it was necessary to partially hydrophobize the surface to favor adsorption at the oil-water interface. Reversible hydrophobization has been reported using molecular surfactants such as cetrimonium bromide (CTAB) [54] or didodecyldimethylammonium bromide (DDAB) [53,55] to tune the contact angle of the hydrophilic silica spheres, but the introduction of surfactants may have unpredictable effects on catalysis. [56] Therefore, here we used hexamethyldisilazane (HMDS) [57] instead to irreversibly hydrophobize the SiO2−NH2 silica spheres (Scheme 2a). The hydrophobization was confirmed in the IR spectrum by the intense C−H stretch and C−H bending vibrations of the grafted methyl groups around 2800 cm−1 and 1400 cm−1 (Figure S1b). The water contact angle for SiO2−NH2−OSi(CH3)3 was found to be 135°, confirming its hydrophobicity (Figure S4).
Hydrophobized amine-functionalized SiO 2 À NH 2 À OSi(CH 3 ) 3 was then used as starting material for the synthesis of the acidbase Janus particles (Scheme 2b). The SiO 2 À NH 2 À OSi(CH 3 ) 3 particles were mixed with water and molten paraffin wax at 80°C to form a wax-in-water PE stabilized by the SiO 2 À NH 2 À OSi (CH 3 ) 3 particles. Upon cooling down to room temperature, the wax solidified and colloidosomes covered with silica particles were obtained. The exposed part of the partially embedded particles was first etched with 1 wt.% HF to remove both surface functional groups (À NH 2 and À OSi(CH 3 ) 3 ; Figure 1b, c). As shown by TEM, the Janus silica particles are nonspherical after etching, as indicated in Figure 1b in which the original size of the sphere before (indicated by black circle) and after etching (red) is highlighted. The grey cap at the top indicated the part protected from etching inside the solidified wax; the thickness of the layer etched away was about 20 nm thick.
The etched silica was then refunctionalized by covalent grafting with various concentrations of 2-(4-chlorosulfonylphenyl)ethyltrimethoxysilane (CSPTMS). Finally, hydrolysis of the chlorosulfonyl groups with HCl gave the desired sulfonic acids [58] to generate the Janus A-B particles, with a tunable SO3H loading that ranged from 1.38 to 2.73 mmol g−1.
As previously reported by Freire and co-workers, the incorporation of phenyl sulfonic acids using this CSPTMS precursor led to a higher acid concentration and required fewer synthesis steps than when phenyltriethoxysilane or phenyltrimethoxysilane were used as precursor, which require sulfonation or chlorosulfonation steps in order to obtain acid functionality. [59] The FT-IR spectra before and after etching, grafting and hydrolysis, and drying the samples at 400°C under vacuum, are shown in Figure S1. The drying procedure was less effective for the acid-functionalized particles than for the SiO2 particles, leaving a broad band from 3600 to 3000 cm−1 originating from water interacting with the silanol groups. The C−H stretch vibrations from the ethyl tail of the CSPTMS were observed at 3000-2700 cm−1 while the ring vibrations of the phenyl group were observed at ~1500 cm−1. [59] Sulfonation was confirmed by the weak band corresponding to the S=O stretch vibration at 1400-1350 cm−1. The water contact angle for the Janus A-B material was found to be 110° (Figure S4).
As direct visualization of grafted APS and CSPTMS is not possible with electron microscopy techniques, various other experiments were performed to show that the two parts of the Janus particle are in fact differently functionalized. For example, taking advantage of the amine groups on the surface of the silica spheres, amine functionalized silica spheres were dispersed in silver nitrate solution and in situ coated with silver nanoparticles (Ag np) by sodium borohydride reduction. As anticipated, the aminated silica were homogeneously covered with silver nanoparticles (Figure 1d). Conversely, coating of the etching particles lead to silver nanoparticles deposition onto the amine-functionalized patch of the Janus spheres only. No Ag np's were detected on the etched patches, confirming the Janus nature (Figure 1e). Moreover, to show the solidified wax PE method is a general approach towards bifunctional Janus sphere synthesis, arylfunctionalized Janus spheres were prepared in a similar approach. The silica spheres were coated with a vinyl layer using 3-(trimethoxysilyl)propyl methacrylate (TPM) as silane coupling agent, on which polystyrene (PS) was grafted via seeded polymerization (Figure 2). Without etching, PS was fully grown around the vinyl-coated SiO 2 , as shown in Figure 2 (left), with the dark core corresponding to the silica sphere and the grey area corresponding to PS. When the particles were partially etched, no polystyrene was detected on the etched side of the particles (Figure 2, right). The morphology of the grafted PS varies from a spherical cap to irregular cluster-like structures on the surface of the silica core, as the polymerization procedure is very sensitive to the concentrations of vinyl groups on the surface and the amount of monomers and initiators used. [60] As we were primarily interested in demonstrating the efficiency of bifunctional Janus particle synthesis, precise control over the shape of the grafted polystyrene was not necessary and therefore not further investigated. Indeed, the growth of PS again showed that the silica particles can be selectively etched and grafted using the solidified paraffin wax method.
To further substantiate the influence of each modification step on the properties of the material, we simply observed the stability of the dispersion in water and toluene, indeed noting the changes in behavior anticipated for the specific modification. It was found that unmodified silica (SiO 2 ) and aminated silica particles (SiO 2 À NH 2 ), bearing hydrophilic surface silanols and amines, could only be dispersed in the water phase (Figure 3a-d).
Particles hydrophobized with HMDS (SiO 2 À NH 2 À OSi(CH 3 ) 3 ) were found to disperse well only into the toluene phase (Figure 3e, f). The Janus silica particles did not disperse well into either water or toluene phase (Figure 3g, h) and rapidly transferred to the interface of water and toluene (Figure 3i), again corroborating that the desired Janus geometry was indeed achieved by the selective etching and coating strategy. The synthesis of the amphiphilic bifunctional Janus particles with acid functionalization on the hydrophilic and base functionalization on the hydrophobic side of the particles, to the best of our knowledge, has not been reported before.
Antagonistic tandem catalysis
With the bifunctional Janus A-B emulsifiers in hand, we set out to use them as catalysts for a tandem dehydration-Knoevenagel condensation reaction in PEs formulated with 4-propylguaiacol (PG) as organic phase and an aqueous phase saturated with sodium chloride. The alkylphenol PG was selected as the organic solvent over a more standard alkane or cyclohexane oil for its high 5-HMF extraction efficiency, [61,62] while NaCl was added to take advantage of the salting out effect to improve the partitioning of HMF into the extracting phase. [63,64] We first investigated the ability of Janus spheres to stabilize w/o PEs of PG. At 3.5 wt % silica, full PEs were obtained (Figure 4a), whereas lower emulsifier concentrations (1-3 %) did not give full emulsification ( Figure S2). Optical microscopy images and confocal fluorescence microscopy (CFM) images showed a broad droplet size distribution with droplet diameters varying between 50 and 300 μm (Figure 4c, d). As the organic phase was stained with Nile red, the CFM images confirmed that the PE is of the w/o type (Figure 4d). Due to the amphiphilic nature of the particles, the particles furthermore behave differently at the water/oil interface than the monofunctionalized ones. The hydrophobic amine-modified SiO 2 À NH 2 À OSi(CH 3 ) 3 material was also found to stabilize a PG/water PE at 3.5 wt.% ( Figure S5).
As expected, the Janus A-B-stabilized Pickering emulsion was fully resistant against destabilization phenomena such as coalescence, creaming, and sedimentation, also under the more severe reaction conditions (Figure 4b). The morphology and droplet size were well retained even after a 24 h tandem reaction at 100°C. Furthermore, cryo-SEM images were recorded to further characterize the location of the Janus particles at the oil-water interface and the solid layer structure of the PE. As can be seen from Figure 4e and f, sample preparation for cryo-SEM caused the water droplets of the PE to turn into sharp ice crystals, surrounded by a solid monolayer consisting of closely packed Janus A-B particles (Figure 4f). This dense packing of the Janus A-B particle layer is thought to contribute significantly to the observed excellent PE stability. Unfortunately, the uncontrolled fracture propagation and irregular interfacial surface topography prevented the exact configuration of Janus particles at the oil-water interface to be determined in more detail. The cross-section image in Figure S3 clearly shows that silica spheres concentrated at the interphase as a monolayer, whereas very few silica spheres are observed inside the droplets.
The compartmentalization by patch-specific functionalization should prevent acid-base quenching with these particles and we decided to perform the tandem conversion at the optimal temperature for fructose dehydration, which is 100°C. Three different Janus A-B particles with a fixed amine concentration of 2.14 mmol, and varying acid concentration (see above) were used for catalysis ( Figure 5). The Janus particles with the lowest acid concentration (1.38 mmol g À 1 , R base/acid = 1.55) showed 54.3 % conversion of fructose and 5-HMF and 5-HMFDEM yields of 12.9 and 17.3 %, respectively, after 24 h. Increasing the acid concentration (2.37 mmol g À 1 , R base/acid = 0.9) resulted, as expected, in higher fructose conversion, 62.6 %, but not in an increase in 5-HMF or 5-HMFDEM yield. This is indicative of an increase in humins side product formation instead. In line with this, using the Janus A-B materials with the highest acid loading (2.73 mmol g À 1 , R base/acid = 0.78) led to dark coloration already after 7 h of reaction. The excessive humins formation suggests that the follow up base-catalyzed reaction was not sufficiently rapid to avoid side reactions. Indeed, even though fructose conversion was as high as 70.4 % already after 7 h reaction, the yield of 5-HMFDEM was only 11.9 %. While these first results showed that the Janus A-B can indeed be used as Pickering stabilizers and as heterogeneous catalysts for the tandem catalytic dehydration-Knoevenagel condensation reaction, they also emphasized that a fine balance needs to be struck to efficiently couple the individual steps. The second step in the tandem therefore needed to be enhanced, so a higher base/acid ratio was required.
Given the materials obtained over the different steps of the synthesis protocol, a very convenient way to tune the base/acid ratio is to use a physical mixture of the Janus A-B particles and their amine-only SiO2−NH2−OSi(CH3)3 precursor. Note that the latter was shown to catalyze the Knoevenagel condensation of 5-HMF and diethyl malonate (Figure S6); a combination of the amine-only material and HCl did not allow for tandem catalysis, indicating rapid quenching and highlighting the need for the use of immobilized catalysts.
While keeping the total amount of particles fixed, mixing 1 part of the Janus A-B and 3 parts of the SiO2−NH2−OSi(CH3)3 (NH2 loading 2.5 wt.%; total R base/acid = 3.53) gave a >20% increase of 5-HMF-DEM yield (32.1%) and a 37% increase in selectivity (54%; Figure 6, middle) compared to the pure Janus A-B system (Figure 6, left). Further increasing the amount of base catalyst by mixing 1 part of Janus A-B and 7 parts of SiO2−NH2−OSi(CH3)3 led to a decrease in both conversion of fructose (51.1%) and 5-HMF-DEM yield (19.6%; Figure 6, right), with the acid now likely limiting conversion in the PE system. First recycling studies showed that the material can be recovered and reused, but further optimization is required as a gradual drop in efficiency was also noted (Figure S7).
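As a rough check on how the quoted ratio arises, assuming equal masses per "part", the highest-loading Janus A-B batch (2.14 mmol g−1 amine, 2.73 mmol g−1 acid), and the amine-only material at 2.50 mmol g−1, the 1:3 mixture gives

R base/acid = (0.25 × 2.14 + 0.75 × 2.50) / (0.25 × 2.73) ≈ 2.41 / 0.68 ≈ 3.5,

in line with the value of 3.53 quoted above.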
Conclusion
In summary, this study outlines a general route to fabricate bifunctional silica spheres featuring tailored functionality and asymmetric wettability through a solidified wax PE method. By using functional silica seed particles, a series of bifunctional Janus silica spheres were prepared. The versatility of the method is illustrated by the synthesis of polymer-grafted as well as nanoparticle decorated materials. Notably, compared to the reported Janus-type particles functionalized with a singlecatalytic functionality, [29,[37][38][39] the method outlined here allows bifunctional Janus silica spheres to be synthesized with tunable catalyst loading, such as the sulfonic acid and amine decorated ones used here. The inherent combination offered by Janus A-B to control spatial catalyst distribution and compartmentalization as well as to allow good emulsion stability suggests that the method presented here may be further adopted to produce various useful task-specific materials.
As first proof of concept, we demonstrated that the Janus A-B could be used as Pickering stabilizers and heterogeneous catalysts for the tandem catalytic dehydration-Knoevenagel condensation reaction of fructose. As is typical for such reactions, a fine balance needed to be struck to efficiently couple the individual steps. The specific choice for using PE as reaction medium offered a simple method to tune the acidbase catalyst ratio by stabilizing the emulsion with a physical mixture of monofunctional and Janus A-B particles. In this case, the use of such a physical mixture resulted in a considerable increase in yield and selectivity. We envision that the developed approach will serve as a versatile starting point for the synthesis of tailored multifunctional solid catalysts, which can serve as a platform for the immobilization of a variety of catalytic functionalities. The capability to orthogonally tune the properties of well-defined regions on a single colloidal particle offers the prospect of new applications in the fields of tandem catalysis, directed colloidal assembly and multiresponsive selfpropelling particles.
Synthesis of monodisperse silica spheres (SiO 2 ).
Monodisperse surface functionalized silica spheres were synthesized using a modification of the method reported by Zhang et al. [65] In a 100 mL round bottom flask, 11.6 mL of ammonium hydroxide and 48.4 mL of ethanol were stirred at 400 rpm for 10 min. This is followed by the addition of a mixture of 1.5 mL TEOS and 6 mL of ethanol. To grow the silica particles into the desired size, 4 mL of TEOS in 20 mL of ethanol was added dropwise after 2 h of reaction. The mixture was then mechanically stirred overnight for another 24 h at room temperature. Finally, the mixture was centrifuged and washed three times with 30 mL ethanol for the complete removal of reactants.
Synthesis of amine or vinyl-decorated silica spheres (SiO 2 À NH 2 ). Typically, 10 g of APS was added to 60 mL of toluene dispersion containing 3 g of silica spheres, and the suspension was subsequentially transferred to the oil bath to react for 24 h at 95°C. Then the solid was collected by centrifugation and redispersion in 50 mL ethanol for three times and dried using a rotary evaporator. Similarly, the modification of silica spheres whose external surfaces covered with vinyl groups were prepared using the same method, but now using TPM as silane coupling agent.
Hydrophobization of aminated silica spheres
HMDS (2 mL) was added to 50 mL SiO 2 À NH 2 ethanol dispersion containing 3 g aminated silica spheres and the suspension was stirred for 24 h. After modification, the resulting particles were washed several times with 50 mL ethanol to remove unreacted chemicals and dried in the oven at 110°C.
Preparation of wax-water colloidosomes stabilized with silica spheres
The wax-water colloidosomes stabilized with silica spheres were prepared using a modification of a method first developed by Hong et al. [52] 0.5 g of the hydrophobized aminated silica particles (SiO 2 À NH 2 À OSi(CH 3 ) 3 ) were dispersed in 10 g of paraffin wax at 80°C, followed by addition of 50 mL preheated water (80°C) under magnetic stirring at 1600 rpm. After stirring for 1 h, the wax-water PE was allowed to cool down to room temperature without stirring. The resulting solid colloidosomes stabilized by hydrophobized aminated silica particles creamed up forming a white layer of small spheres during cooling procedure. The colloidosomes were washed with water to remove particles in the aqueous solution as well as weakly attached particles, then washed with 50 mL ethanol another 3 times and finally dried at room temperature.
Preparation of Janus silica by selective etching
10 g of the dried colloidosome spheres were immersed into 30 mL of 1 wt.% aqueous HF for 12 h in a plastic test tube (caution: HF solution is very hazardous and corrosive and should be handled with great care according to the MSDS guidelines), and the resulting etched Janus silica-covered colloidosomes were collected with filter paper and rinsed carefully and thoroughly with saturated boric acid aqueous solution and demi water multiple times. The colloidosomes were then dried inside a fume hood before further modification.
Synthesis of Janus silica with acidic and basic groups (Janus A-B)
The etched colloidosomes were dispersed into 30 mL ethanol solution containing 1 mL of CSPTMS and 0.5 mL ammonium hydroxide (25 wt.%) to modify the etched hemisphere. The mixture was stirred for 24 h after which the colloidosomes were washed with water three times, followed by dispersion in 30 mL 2 M HCl solution to activate the sulfonyl groups. Finally, to release these modified silica particles from the paraffin wax, the colloidosome were dispersed in toluene at 80°C.
Synthesis of Janus SiO 2 /Ag composite colloids
10 mg of the Janus colloid were firstly dispersed in ethanol (1 mL) under ultrasonication for 20 minutes (Janus A-B could not be dispersed in water directly), then 10 mL of water was added to the dispersion. After gentle shaking, a homogeneous Janus silica dispersion was obtained upon which 0.5 mL of AgNO 3 aqueous solution (20 mM) was added. The mixture was placed in the ultrasonic batch for another 30 min before 0.2 mL of NaBH 4 solution (10 mg mL À 1 ) was added dropwise. The color of the dispersion turned green/brown immediately. The product was separated by centrifugation and washed three times each with ethanol.
Synthesis of Janus SiO 2 /PS composite colloids
10 mg of the Janus colloid, 4 μL of 20 wt.% SDS solution and 0.1 mL styrene were added to 2 mL water. The mixture was purged with nitrogen to remove oxygen and then emulsified with ultrasonication for 30 s. The polymerization was initiated by 200 μL KPS aqueous solution (1 wt.%) in the oil bath at 70°C for 10 h. The product was separated by centrifugation and washed three times each with water and ethanol.
Solids characterization
Transmission electron microscopy (TEM) pictures were taken with a Philips Tecnai10 electron microscope typically operating at 100 kV. The samples were prepared by drying a drop of diluted aqueous dispersion on top of polymer-coated copper grids. Scanning electron microscopy (SEM) images were taken with a Philips SEM XL PEG 30 typically operating at 5-10 kV. The silica particles were analyzed by Fourier-Transformed Infrared (FT-IR) for which selfsupported wafers of~20 mg was mounted in an FT-IR cell connected to an oven. The wafer was dried by heating the sample to 400°C with a heating rate of 5°C min À 1 under vacuum. A Perkin-Elmer System 2000 instrument was used to record the FT-IR spectra in transmission mode in the spectral range of 4000 to 1000 cm À 1 . For each spectrum 32 scans were collected with a spectral resolution of 4 cm À 1 . Water contact angle (CA) measurements were performed by using an FTA-1000 drop shape instrument (First Ten Angstroms Inc.). Samples were pressed to form a pellet which was then transferred to a glass slide. A 10 μL water droplet was placed on the sample pellet and the CA values were estimated by the measurement software according to the fitting method using the Young-Laplace equation.
Cryo-SEM images were taken using an Aquilos dual beam scanning electron microscope (FIB-SEM) from Thermo Scientific. Approximately 100 μL water-in-oil PE was brought into the tip of a thin plastic pipette tip and submerged into liquid nitrogen. A fresh fracture was created at the end of the pipette using pliers under liquid nitrogen and directly loaded onto the pre-cooled stage within the FIB-SEM chamber. To achieve better conductivity for imaging, Pt was sputter coated onto the fracture inside the instrument at 10 Pa with 30 mA for 10 sec to give a layer approximately 10 nm thick. Throughout the coating and imaging the temperature of the sample was kept below À 170°C. Cross sections were obtained using the Ga ion source at 30 kV in three steps: i) 15 μm × 10 μm × 5 μm (xyz) at 1 nA; ii) 15 μm × 2 μm × 5 μm (xyz) at 0.3 nA; and iii) 12 μm × 2 μm × 5 μm (xyz) at 0.1 nA using the Si cleaning cross section preset. For imaging, the electron beam was operated at 5 kV and 0.1 nA, with a working distance of 6.7 mm and the stage was oriented at an angle of 52°with respect to the incoming beam. Secondary electrons were captured using an Everhart-Thornley detector and backscattered electrons using a through-the-lens detector.
PE preparation
All PEs were prepared by first dispersing a known mass of particles into PG using a VCX 130 Vibra-Cell Ultrasonic Processor equipped with a 3 mm diameter tip (Sonics, 20 kHz, 10 W, 2 min). During sonication, it was necessary to cool the vessel in an ice-bath. After the addition of 2 mL aqueous phase with the appropriate NaCl concentration and after setting the required pH to the dispersion, the resulting mixture was emulsified using a UltraTurrax T25 homogenizer with a S25 N-10G dispersing tool (IKA, 15200 rpm, 2 min).
Tandem catalytic reactions
15 mL Ace pressure tubes were charged with 2 mL PG with hydrophobized aminated silica or Janus A-B. Fructose or 5-HMF dissolved in the aqueous phase were added to the pressure tubes and the PEs were prepared via emulsification for 1 min at 10 krpm using an IKA UltraTurrax with an S25N 10G dispersing tool. The PEs were left for reaction at 100°C without stirring for the applied reaction time. After the reaction time was complete, the mixture was cooled to room temperature in air and 1 mL of citric acid solution was added (15 mg mL À 1 ) and the PE was destabilized by centrifugation using a Rotina 38-R Hettich centrifuge (11000 rpm, 4°C, 10 min). The aqueous phase was analyzed by HPLC analysis performed on a Shimadzu HPLC system equipped with a Bio-Rad Aminex HPX-87H column, and a differential refractometer using citric acid as internal standard. The organic phase was analyzed on a Varian GC equipped with a VF-5 ms capillary column and an FID detector.
Exploration of the learning model as a strategy in enhancing the quality of academic programme
The intent of the study was to explore the learning model as a process for improving quality at higher education institutions. The study was conducted through semi-structured interviews and documentary analysis, and attracted responses from 45 academics who, as heads of schools, work closely with academic quality matters. The majority of the respondents considered a learning model to be an approach towards quality assurance improvement, and the policies applied in higher education were also seen as the main drivers. Most of the respondents felt that, even though quality is a cumbersome matter, they can manage the process.
INTRODUCTION
In South Africa, policy developments were monitored by the National Commission on Higher Education (NCHE) (1996) in response to the challenges facing Higher Education Institutions (HEIs). In this regard, quality assurance (QA) is seen as the process of assuring accountability through the measurement and evaluation of the effectiveness and efficiency of the transformed HEIs. Henard (2007) argues that it is important for HEIs to conduct their own self-evaluation up to the point of producing a report, and that all stakeholders should contribute to the self-evaluation reports. In this whole process, internal self-evaluation forms the basis of QA. In particular, the White Paper 3 on HE (1997) clearly states that the primary responsibility for QA rests with the higher education institution itself.
Quality is increasingly considered a key factor in promoting competition. As a consequence, many quality management systems seem to be outward orientated, placing more emphasis on the external presentation of the institution than on its internal development processes. An institution's reputation and its dependence on the external environment (e.g., funding/budget) can be extremely influential factors for internal QA (Kasozi, 2006).
HEIs are beginning to realise the need to build up self-evaluation and, more generally, to foster an internal quality culture. HE has always been driven by the need for quality, but the explosion of external national QA systems worldwide is making greater demands on institutions to be more transparent in this area. If external accountability has become more systematic, then it is important that internal procedures become more developed and visible to the public. This paper intends to explore the role of quality in developing a learning environment.
REVIEW OF RELATED LITERATURE
HE has always been driven by the need for quality but the burgeoning of external national QA systems in most countries such as Finland, Denmark and Austria is making greater demands on institutions to be more transparent in this area.By and large, external measures have been useful in promoting quality, although there have been documented cases, here and there, of intrusive procedures (Jensen 2004).Nevertheless, if external accountability has become more systematic, then it is important that internal procedures become more developed and transparent to the public.
Internal QA seems, at present, to be receiving a great deal of attention at HEIs.These institutions are seeking guidance in determining the most appropriate model on which to base their internal QA policies and procedures.Thus, this study is an attempt to provide information on some of the available models.It is, however, by no means exhaustive.HEIs should feel free to experiment, but should take care to avoid unnecessary duplication of effort.Therefore, the best practices in internal self evaluation are difficult to present.
The requirements of external QA bodies that may be legislated should be met at all times (Jacobs 2000).Many businesses such as industrial and manufacturing models for internal QA are available for adaptation, but HE institutions should decide for themselves which are the most appropriate for their purposes.Such purposes may vary from merely satisfying the external QA agency"s requirements, to introducing serious mechanisms at HEIs with the purpose of improving internal quality (Woodhouse, 2006).
An overview of quality assurance in Higher Education Institutions
The concept of quality is not new; it has always been part of the academic tradition.It is the outside world that now emphasises the need for attention to quality, with the relationship between HEIs and society having changed.This encapsulates the profound changes in the context of HE; including growth, diversity, changes in size and in the nature of HE.This has been accompanied by a growing state interest in quality, demands for accountability and the establishment of national quality agencies (Newton, 2007).The notion of quality covers those elements of an HEI culture that have the strongest impact on quality teaching.
It must be emphasised that in studies of quality culture, with respect to HEIs, this concepts is perceived mainly in terms of the total quality management (TQM) philosophy, which reveals the role of leadership in creating a culture based on the constant need for improvement, team work and the participation of all in the process making (Kowalkiewicz, 2007).Majority of HEIs have focused on working out the procedures of quality evaluation and assurance, which may appear insufficient if not accompanied by the evolution of the university organisational culture towards a quality culture, since what is crucial for the success of any action aimed at quality enhancement, is a quality-orientated system of values.
Academic quality results from the leadership that develops the best-in-class policy and strategy, customer and market focus and people management with the academic and efficient use of resources.
Learning region model
The learning region model is premised on the fact that" institutions which regulate economic activity are Selesho 11559 increasingly being regionalized and economic success is becoming increasingly dependent upon trust, norms, values and tacit and personal knowledge" (Favish, 2005), which is easier to achieve within regions.Therefore, writers such as Kanter have asserted that the "challenge is to find ways to which the global economy can work locally by unlocking those resources which distinguish one place from another" (Favish, 2005).Building on the approach to understanding the transformation of the economy, writers such as Lundvall Johnson have operationalised the role of HEIs in the context of the learning economy and the learning region.She further defines the learning economy as an economic where the success of individuals, firms and regions, reflect the capability to learn (and forget old practices); where change is rapid and old skills become obsolete and new skills are in demand; where learning includes the building of competencies, not just increased access to information; where learning is going on in parts of society, not just in high-tech sectors; and net job creation is in knowledge intensive sectors (Favish, 2005).Favish (2005), in analysing the implications of the notion of the learning economy, has articulated the notion of a learning region, which would reflect the importance of lifelong learning to cope with changing patterns of skills demands, new ways of delivering education and training made possible by Information and Communication Technology (ICTs), and the changing nature of knowledge production.She summarises the challenge for universities as: Blending and combining competition in the new enterprise environment with collaboration; fostering and supporting boundary spanners who can work across the borders of the university in effective discourse with other organizations and their different cultures; fostering cultural change to enable universities to speak and work with partners from many traditions and persuasions as more learning organizations emerge and together enrich their various overlapping learning zones or regions (Favish, 2005:110).
One of the limitations of this model is that the notions of the learning economy and the learning region are potentially too closely intertwined. Thus, there is a danger of assigning primary importance to upskilling people to cope with the rapid changes in technology in order to enhance economic competitiveness, and marginalising issues related to social justice and equity, which may be manifest at local or community levels. In the absence of equal prominence being accorded to the social manifestations of globalisation, there is a danger that institutions articulating stronger local or community orientations could be regarded as inferior in status to those institutions oriented towards supporting the world of work, technological development and economic competitiveness.
RESEARCH METHODOLOGY
In this section, the methodology used in the study is highlighted and unpacked for the smooth organisation of the process. The researcher made use of the descriptive survey, as it fits this kind of study well.
Questionnaires and structured interviews were utilised. The purpose of using the qualitative research method is to describe internal quality and to explore how it can be used effectively in South Africa's UoTs. It is also used to explain quality notions and concepts and to examine how they relate to institutional evaluation.
Qualitative data were gathered in as many ways as the researcher's creativity permitted. Although the most widely used sources were observation and interviewing, analyses of records and documents are also common, and these were used as well.
The research strategy is idiographic, in which a single case and its structural coherence with a larger context are examined. Cohen et al. (2007) indicate that one should favour views on social reality that stress the importance of the subjective experience of individuals in the creation of the social world. The search for understanding should address different issues and be approached in a different way. The principal concern is with an understanding of the way in which the individual creates, modifies and interprets the world in which he/she finds himself/herself.
Population and sample
The population of this study consists of the academic heads of department (HODs) and quality assurance managers (QAMs) of all six Universities of Technology (UoTs). From the six UoTs, the researcher selected four institutions and forty-five participants who were able to take part in the distribution of questionnaires and in the interviews.
The characteristics that distinguish them as urban centres are that the same external quality provider has accredited these institutions; in addition, they follow the same curriculum and make use of convenorship.
Data collection
The data for this study consist of three kinds, namely primary, secondary and tertiary data. Secondary data include academic journals and form the basis for the theoretical study and quality analysis. Secondary data were supplemented by tertiary data obtained from the literature and references in academic journals, as well as from available unpublished research. The primary data were collected from the heads of academic departments by means of questionnaires and structured interviews.
FINDINGS AND DISCUSSION
This section reports on the research findings of the qualitative analysis in this study. The method of reporting has been included in order to assess the situation as perceived in the field. The data were collected through structured one-to-one interviews with the QAMs. It was also important to supplement the QAM interviews with questionnaires completed by the HODs, as the immediate recipients of the quality process. Both data collection procedures for this study, the interviews and the questionnaires, aided in eliciting diverse views in answer to the research questions. Furthermore, this study focused on the methodological aspects of the QA system, as this has been identified as the key indicator for the study.
The discussion concentrates on data obtained following the administration of feedback questionnaires and personal interviews with heads of academic departments (HODs) and institutional quality managers (QAMs). At this stage, the researcher intends to present the responses and views of the respondents as they are, without arguing or offering his own opinion or analysis.
An overview of response analysis
QA is seen as a process of continuous improvement in teaching and learning which, to a certain extent, will be achieved via the various pathways of employing mechanisms internal and external to the HEIs. QA is also a process of maintaining standards in products or services through inspection or the testing of samples. Lastly, it is imperative that society should be concerned about HEIs, as they are a national hope for the development of the nation; therefore, accountability aspects should be employed in the process from time to time when this issue is discussed.
It is in this scenario that the QAMs and HODs were chosen to participate in this study, as they are mandated with the responsibility for QA by their respective institutions. Both groups of stakeholders are challenged to demonstrate quality in their management and leadership of students and of the various stakeholders within their responsibility.
The particular need to which the legislation referred was the need for quality education provided in the HEIs that government supports financially. Fuelling the initial fears was the explicit embedding of QA in the legislative framework. The point raised here is the role that government plays as an umbrella body introducing various national imperatives with which HEIs have to comply; it is then the responsibility of the various institutions to develop policies that will assist them to respond to national guidelines. The point highlighted by these policy issues is the tension within the policy-making process between the state and the quasi-state, with no clear boundary between their respective spheres of authority, accompanied by the realisation that both may have different interpretations of what constitutes desirable policy outcomes. The researcher explained to the respondents that they should bear in mind that the evolution of the policy-making process by their respective institutions is not simply a record of expanding institutional powers. This tension between institutional policy and government policies was bound to arise, as experienced by the respondents. It can be argued that the governance of QA raises important issues with regard to leadership.
It was important for the researcher to establish whether the HEIs have compliance processes in place when dealing with QA at their respective institutions. Respondents were asked about the existence of policies to guide and ensure compliance with their internal quality mechanisms. A total of 3 (7%) of respondents held views contrasting with those of the HEQC, which believes that all HEIs should be geared towards revisiting their QA policies rather than remaining at the developmental stage. Furthermore, such perceived inequalities in policy compliance should have been eradicated by the intervention of the HEQC's capacity development structures. These 3 (7%) of respondents clearly indicated the difficulty of identifying quality compliance in their respective institutions.
However, few would agree that, after the first round of re-accreditation and of institutional audits, universities should be at the advanced stage of compliance. The respondents indicated that they are still at the developmental stage and that they are really working towards compliance that will guide the quality culture in their HEIs. It is indeed difficult to strategise if there are no policies in place to act as a guide.
The analysis of the submissions indicates that 42 (93%) of respondents reported that their institutions had some sort of policy on QA (Table 1). However, in most cases these policies have not yet been translated into plans and strategies. There was not much available documentation, such as manuals or regulations, reflecting QA arrangements.
Irrespective of how policy-makers within the institution, as well as institutional leaders, may decide to shift and place the focus on policy implementation and its discourse and practice, critical questions pertinent to the relevance and academic worth of the institution and its learning programmes will always invite quality scrutiny and enquiry into issues pertaining to the public good (Table 2).
In order to execute its mandate for quality promotion, institutional audits and programme accreditation, the HEQC needs to draw on the expertise, experience and understanding of those who work in the HE sector. While it is true that most HEIs are still grappling with addressing QA in a co-ordinated and aligned way, no institution can use this as an excuse for providing inferior standards. It is in this regard that the researcher included the measurement question in the study.
The researcher felt that it was very important to establish the respondents' viewpoints regarding quality measurement. As mentioned at the outset, the study comprised 45 participants; 42 (93%) believed that quality cannot be measured, while 3 (7%) differed from the rest and believed that quality is measurable and can be determined by, among other things, a student satisfaction survey.
The institutionalisation of quality enhancement
The most critical challenge to the idea of quality enhancement is posed by its institutionalisation. Conventions for quality enhancement need to be defined and systematic structures constructed to develop its practices. Inevitably, the unfolding of this process forces a return to the questions of who has the power to determine the meaning of the key concepts, how they are to be put into effect even in institutions that apparently endorse them, and what the policy outcomes should be.
Regardless of what model is constructed, the key area to explore is what is happening on the ground, that is, in the universities: how the two bodies, the HEQC and the QAMs, interact with each other and relate to the universities, and how the representative institutions will respond to the emerging outcomes.
However, there are those who remain sceptical. In the words of one senior quality officer: "Quality enhancement has become more talked about, more promoted and the HEQC audit methodology says that it encourages quality enhancement but I am not convinced. I will wait until we have gone through one; the rhetoric is there but I think they are going to come in and see what our procedures are like, as they have always done. There is no sign that they are going to focus on things that might encourage us to really take the enhancement side more seriously. The development of the quality agenda suggests that ideas and power struggles are intimately interrelated but their visibility varies as policy unfolds: ideas become more prominent as policy is formulated. Politics dominates the formal construction of policy (legislative process) while both politics and the ideological struggle interact at the implementation stage."
CONCLUSION AND RECOMMENDATIONS
A summary of the findings is presented as follows.
The new learning environment
An institution may have a range of motivations for adopting new approaches to quality mechanisms or a quality framework that can expand thinking, and therefore problem-solving capacity, when considering a performance quality model or framework. For UoTs to transform themselves and to contribute to transforming the world, they should include QA in their research, teaching and community engagement. Building legitimate QA transformation is done partly by exhibiting alternative quality mechanisms or models that can guide UoTs towards a more consistent and uniform approach.
The learning approach may be seen as the first step towards building a quality system that will be able to facilitate a better coordinated effort. The learning approach can be constructed from the perspective of students' performance, and most, if not all, respondents acknowledged that they have to monitor throughput and student progression carefully; the main question is whether they do so in line with compliance requirements or in an individualistic manner. It is in this regard that the study revealed that maintaining the learning approach merely by providing flexible solutions to student needs will not do the university any good in the long run.
From an institutional perspective on the current problem, there was an imperative to get a return on investment (ROI). Ultimately, though, an institution will be judged by the quality of the teaching and learning it offers its students. To improve the learning experiences offered, it is essential that there is an emphasis on improvement from the level of the individual subject lecturer up through the organisation to the activities of senior managers. The following comment is pertinent: this does not imply that the vision of the programme and the planning of its implementation need be a top-down process. On the contrary, there needs to be ownership, vision and enthusiasm at all levels of the organisation.
Institutional quality improvement processes
The study was conducted with four UoTs which, importantly, represent a total of 90% of student enrolment in UoTs. There has been a concerted effort by UoTs to set up quality processes leading to the development of a student evaluation system, which individual academics and programme facilitators make available as a means of improving quality. In some UoTs, the use of moderators is seen as the main indicator of quality. According to the UoTs' designers, the afore-mentioned process means that the improvement of the next offering of a subject results from the assessment of current practices. Such a system is predicated upon the evaluation of an educational activity leading to improvements in subsequent attempts; this is congruent with the action inquiry process. The learning from this process is too valuable to be left untapped within any one subject or minor project. Unless the information is shared, the institution as a whole does not necessarily benefit from these projects. Much can be gained by facilitating the sharing of new knowledge and experiences across the institution: the university must have a technically and pedagogically innovative environment for research and development projects, providing opportunities for trial and experiment and for collecting feedback on these via the QA process. … many such pilot experiments in HE have been conducted in isolation from the HE management process. Unless the evaluation occurs within the context of the institutional process as a whole, the valuable learning opportunities inherent in these projects will be lost to the institution.
The researcher considered QA evaluation reports and concluded that the context in which an innovation occurs has to be taken into account: … the benefits were short-lived and/or did not transfer. This finding offers a salutary caution to all educational innovators and underscores the need to view innovation within the institutional contexts in which it will thrive or die.
Thus, the institutional quality processes need to be such that the culture and procedures encourage the flow of information across subjects and courses and across departmental and faculty boundaries. It is the contention of this research that where quality cycles do not enable this flow of information, the lessons learned do not easily go beyond the subject concerned, students do not benefit and the ROI is reduced.
It is in this regard that more than half of the respondents (64%) were of the view that the flow of information is easy to follow. The remaining respondents expressed a different view of the flow of information and how it is disseminated. They contended that in some UoTs the quality agenda is not organised holistically, which creates the problem of coordinating quality matters from a central position. The non-existence of a quality model was one issue that respondents perceived as problematic, creating difficulties for the advancement of the quality process and the coordination of a more coherent approach. About 28% of the respondents expressed frustration with how things are done at their respective institutions.
A model for promoting institutional quality processes
A "conversational model" of learning where a "conversation" can be considered as a two-way flow of information is proposed.In essence, the researcher posits that learning occurs when the student acts for a particular purpose and then receives feedback on the action.The student then assimilates and reflects upon the feedback in order to re-conceptualise and articulate a new understanding to the lecturer.This is a classic action research cycle of goal-action-feedback-modified-action, integral to the quality improvement process, with the critical part of the process being the reflection.In supporting the importance of reflection as part of the learning cycle, asserts: "my own assumption is that helping academics to improve their teaching is best done using a theory that helps academics reflect on what they are doing".The researcher contends that a similar model of learning can be applied at institutional level.
Establishing an institutional learning conversation
Most of the inexperienced HODs (67%), particularly those with fewer than five years' experience in the service of HE, viewed the study as a revelation, as they indicated that they had never been exposed to QA teams at their different institutions. Those with more experience, that is, more than five years in HE (15%), were more confident about how QA matters should be organised, their only concern being the operational aspects. The latter group of HODs, who have been at HEIs for more than ten years (18%), were in most cases the ones leading the process at their respective UoTs.
Most HODs felt that, going forward, the process of deliberating on QA issues was of great value to them; they felt that they were given a platform to discuss quality issues and to voice their opinions regarding the quality direction of the institution. It is in this instance that some institutions feel that they remain focused on working towards a quality culture, as there is a general perception that quality is a long-term process and cannot be achieved after just two trials. It was interesting to note that the majority of the respondents have already begun entrenching quality in their daily routines, supported by clear directives. Some HODs raised the question of the contestation of territory between traditional universities and UoTs. There is a strong feeling that academics from large traditional universities marginalised some HODs who come from small HEIs, particularly UoTs. HODs have the perception that the HE sector has become a battlefield where contestation for power is all too real. UoTs, they maintain, are treated as insignificant, and this attitude restricts them in making certain decisions and in contributing to the wider HE community in a meaningful way. For example, UoTs have not been strong in research and that in itself is a disadvantage within the HE community, as most rated researchers come from traditional universities. There is also the problem of similar programmes being offered by a traditional university and a UoT at the same time. The perception still exists that UoTs are still "Technikons" and will never be universities in the true sense of the word. This contestation can be seen clearly when teams of evaluators comprising academics from traditional universities are invited to evaluate UoTs during the institutional re-accreditation process. The atmosphere is often hostile and their criticism devastating.
In conclusion, UoTs are still regarded as second-grade universities, according to some of the respondents.
As indicated earlier, it is not enough to have "learning conversations" or "quality improvement cycles" operating at distinct levels within an organisation. There should be overlap so that these conversations occur across boundaries. For institutional learning to take place, the project team (self-evaluation team) should be in dialogue with the institution. In the context of a project to improve the teaching and learning in a subject, the academic becomes not only a researcher of a discipline, but a researcher in how to teach the discipline. The project team's sharing of its learning with other staff will lead to improved learning outcomes for a wider range of students and staff. With restricted budgets and in the stringent economic environment in which today's institutions operate, it is too costly for projects to be funded without any institutional benefit coming out of them.
The rationale and logic of the findings indicate that a clear directive and purpose in performing a self-evaluation task play a crucial role in making the process more effective. However, in the current system, there is a need to enhance clarity through training and collaboration.
Although we can agree with or contest the idea of political interference, it is important to realise that this presents a paradigm shift in the understanding of what quality actually means to us; that is, the culture of accountability and compliance with national imperatives. It is in this regard that the study outlined the national HE DoE structure in order to assist UoTs with compliance issues and to emphasise that strong institutional policies should be built, together with a monitoring process to ensure compliance. Institutional self-evaluation principles are in actual fact very simple, indicating that QA is evidence-based and that logic is an active force in making it a success. It is recommended that clearly defined concepts, linked together to form a coherent system, should be employed to build a strong self-evaluation report. Such a system makes the results more valid, as prior planning is undertaken accordingly.
Table 1. Analysis of quality assurance policies.
Table 2. Respondents' views regarding the measurements of quality.
Mitochondrial Fatty Acid Oxidation Disorders: Laboratory Diagnosis, Pathogenesis, and the Complicated Route to Treatment
Mitochondrial fatty acid (FA) oxidation deficiencies represent a genetically heterogeneous group of diseases in humans caused by defects in mitochondrial FA beta-oxidation (mFAO). A general characteristic of all mFAO disorders is hypoketotic hypoglycemia resulting from the enhanced reliance on glucose oxidation and the inability to synthesize ketone bodies from FAs. Patients with a defect in the oxidation of long-chain FAs are at risk to develop cardiac and skeletal muscle abnormalities including cardiomyopathy and arrhythmias, which may progress into early death, as well as rhabdomyolysis and exercise intolerance. The diagnosis of mFAO-deficient patients has been greatly helped by revolutionary developments in the field of tandem mass spectrometry (MS) for the analysis of acylcarnitines in blood and/or urine of candidate patients. Indeed, acylcarnitines have turned out to be excellent biomarkers; not only do they provide information on whether a certain patient is affected by a mFAO deficiency, but the acylcarnitine profile itself usually immediately points to which enzyme is likely deficient. Another important aspect of acylcarnitine analysis by tandem MS is that this technique allows high-throughput analysis, which explains why screening for mFAO deficiencies has now been introduced in many newborn screening programs worldwide. In this review, we will describe the current state of knowledge about mFAO deficiencies, with particular emphasis on recent developments in the area of pathophysiology and treatment.
INTRODUCTION
Sugars, fatty acids (FAs), and amino acids are released from carbohydrates, fat, and protein after ingestion and are the 3 main substrates that an organism can use to maintain whole-body energy homeostasis. All 3 substrates are not only required for energy purposes, but also act as building blocks for the synthesis of other molecules, including lipids. Mitochondrial FA beta-oxidation (mFAO) is the major pathway for the degradation of FAs to acetyl units, 1,2 whereas the peroxisomal beta-oxidation pathway (pFAO) contributes little to the oxidation of dietary FAs in terms of energy production. However, peroxisomes do play an important role in the oxidation of a subgroup of FAs that cannot be beta-oxidized in mitochondria, which includes very-long-chain FAs. [3][4][5] The mFAO system generates energy in the postabsorptive state, as well as in fasted states when glucose supply is limited. FAs are not only a direct source of energy in tissues, but especially under conditions of more advanced fasting, the liver can convert the end product of mFAO (i.e., acetyl-CoA) into the ketone bodies acetoacetate and 3-hydroxybutyrate, which can then be used in virtually all other organs (except erythrocytes) as a source of energy. Importantly, even when glucose is readily available, mFAO is the main source of energy for the heart, skeletal muscle, and kidneys.
The importance of the mFAO system is exemplified by the existence of a large number of different genetic diseases in humans characterized by an impairment in the mitochondrial oxidation of FAs. 6 In fact, for most of the genes coding for enzymes and transporters involved in mFAO, recessively inherited disorders are now known (Table 1). A characteristic feature of all mFAO disorders is hypoglycemia, which is the direct result of the inability to oxidize FAs and concurrent increased consumption of glucose by tissues to match energy demands. Other abnormalities frequently observed in mFAO-deficient patients, especially in those with a defect in the oxidation of long-chain FAs, include cardiac features such as arrhythmias and cardiomyopathy, rhabdomyolysis, as well as retinopathy and neuropathy in particular mFAO disorders. [7][8][9][10] In this review, we will present the current state of knowledge about mFAO and the various mFAO deficiencies with a particular emphasis on the laboratory diagnosis of patients, as well as the underlying mechanisms of pathogenesis and, finally, therapeutic options.
Table 1 abbreviations: CACT, carnitine acylcarnitine translocase; CPT, carnitine palmitoyltransferase; FAD, flavin adenine dinucleotide; ETF, electron transfer flavoprotein; ETFDH, electron transfer flavoprotein dehydrogenase; LCHAD, long-chain 3-hydroxyacyl-CoA dehydrogenase; MADD, multiple acyl-CoA dehydrogenase deficiency; MCAD, medium-chain acyl-CoA dehydrogenase; MTP, mitochondrial trifunctional protein; SCAD, short-chain acyl-CoA dehydrogenase; SCHAD, short-chain 3-hydroxyacyl-CoA dehydrogenase; VLCAD, very-long-chain acyl-CoA dehydrogenase.
UPTAKE OF FAs AND CARNITINE
Under well-fed conditions when carbohydrates are abundant, some FAs are oxidized right away in organs such as the heart, skeletal muscle, and kidneys, while other FAs may simultaneously undergo esterification into triglycerides followed by intracellular storage, especially in adipose tissue. Hormone-induced hydrolysis of triglycerides under conditions of fasting releases FAs into the plasma compartment, followed by their transport through the body bound to albumin. Once alongside the plasma membrane of cells, FAs are released from albumin and subsequently carried across the plasma membrane via one of several FA transport proteins, of which CD36 is probably the most important. [11][12][13] The FAs released in the cell immediately undergo activation to the corresponding coenzyme A (CoA) esters, thereby trapping the FAs inside the cell. The acyl-CoA esters are substrates for multiple enzymatic reactions in each cell, thus allowing their incorporation into different lipid species including glycerophospholipids, sphingolipids, and cholesterol esters. Furthermore, the acyl-CoA esters are also the substrate for beta-oxidation in mitochondria.
Oxidation of FAs in mitochondria involves the obligatory participation of carnitine, a low-molecular-weight compound that is primarily derived from dietary sources, notably meat. 14 Humans can also synthesize carnitine endogenously from the amino acid lysine in the form of protein-derived trimethyllysine generated in lysosomes upon proteolytic breakdown of certain proteins. It is generally agreed that endogenous carnitine biosynthesis only accounts for some 25% of total carnitine requirements in humans consuming a regular diet, which implies that most carnitine has to come from exogenous sources. Since the de novo carnitine biosynthesis pathway is a constitutive pathway, not induced by factors such as low carnitine levels, humans on a low-carnitine diet, including vegetarians and vegans, are at risk of developing hypocarnitinemia.
De novo synthesis of carnitine occurs in 3 different organs: the liver, kidneys, and brain. The carnitine produced in these organs plays multiple key roles in metabolism, including: 1) functioning as a central player in the mitochondrial carnitine cycle, which allows the transfer of acyl-CoA molecules across the mitochondrial membrane in the form of acylcarnitine esters; 2) playing an indispensable role in pFAO by allowing the transfer of the end products of pFAO, including acetyl-CoA, propionyl-CoA, and a range of different medium-chain acyl-CoAs, to mitochondria as carnitine esters; and finally 3) playing a major role in CoA homeostasis and thereby ensuring that free, non-acylated CoA is available at all times. The latter is of key importance since free, non-acylated CoA is required in the pyruvate dehydrogenase reaction, which controls flux through the citric acid (Krebs) cycle by providing acetyl-CoA to the cycle. The central role of carnitine in CoA homeostasis is mediated by different carnitine acyltransferases, of which carnitine acetyltransferase, localized in mitochondria and peroxisomes, is the most important. Importantly, carnitine can be exported out of liver and kidney cells to the circulation and taken up by other cells that cannot synthesize carnitine themselves. The import of carnitine into cells is mediated by OCTN2, which is an integral plasma membrane protein catalyzing the one-to-one symport of carnitine and sodium (Fig. 1). The huge sodium gradient across the plasma membrane generated by Na/K-ATPase drives the uptake of carnitine into cells and explains the millimolar concentrations of carnitine found inside cells, whereas carnitine levels in plasma are much lower (20-40 μmol/L) (Fig. 2).
mFAO AND THE CARNITINE CYCLE
The mitochondrial FA oxidation system can be subdivided into 2 parts: the mitochondrial carnitine cycle required to transfer acyl-CoA esters from the cytosol into the mitochondrial interior and the actual beta-oxidation machinery itself (Fig. 1).
Mitochondrial carnitine cycle
This system consists of 2 acyltransferases, named carnitine palmitoyltransferase 1 and 2 (CPT1 and 2), located at opposing sides of the mitochondrial inner membrane (MIM), as well as carnitine acylcarnitine translocase (CACT), which is a member of the mitochondrial carrier family. 16,17 The cycle starts with CPT1, which is an integral mitochondrial outer membrane protein catalyzing the synthesis of acylcarnitines from the corresponding acyl-CoAs. Subsequently, the acylcarnitine is carried across the MIM by CACT in exchange for free, unesterified carnitine. Once inside the mitochondrial matrix space, the acylcarnitine is reconverted back by CPT2 into the acyl-CoA ester, which is now ready for oxidation. Importantly, the enzyme CPT1 is under strict allosteric control by malonyl-CoA, which is a powerful inhibitor of CPT1. 18 Malonyl-CoA is synthesized from acetyl-CoA by 1 of 2 different acetyl-CoA carboxylases, which each play a different role in FA metabolism. Malonyl-CoA homeostasis is further maintained by the enzyme malonyl-CoA decarboxylase. Under well-fed conditions, when glucose is abundant, acetyl-CoA levels are high and the same is true for malonyl-CoA, which explains why mFAO is switched off under these conditions, whereas the reverse is true under fasting conditions when malonyl-CoA levels are low. Importantly, CPT1 occurs in 2 different forms produced from 2 distinct genes, and each enzyme shows different kinetic parameters and tissue distributions, with one form predominantly expressed in the liver (CPT1A) and the other showing predominant expression in the heart (CPT1B). 16,17 So far only patients with a genetic deficiency of the liver form of CPT1 (CPT1A) have been identified, as discussed later.
mFAO
Inside mitochondria, acyl-CoAs are degraded via a process called beta-oxidation, which is a cyclic process consisting of 4 enzymatic steps. Each cycle of beta-oxidation shortens the acyl-CoA by 2 carbon atoms, which are released in the form of acetyl-CoA. The shortened acyl-CoA then undergoes a new cycle of beta-oxidation until the acyl-CoA is fully cleaved into acetyl-CoA units, which are then degraded to carbon dioxide and water in the citric acid (Krebs) cycle. The reducing equivalents produced in the citric acid (Krebs) cycle in the form of NADH and FADH2 are then fed into the respiratory chain to ultimately produce ATP from ADP and phosphate (Fig. 1).
The actual beta-oxidation of acyl-CoAs is initiated by one of several acyl-CoA dehydrogenases (ACADs), which introduce a trans-double bond into the acyl-CoA, resulting in a trans-2-enoyl-CoA.
This step is then followed by hydration of the double bond by enoyl-CoA hydratases to generate (S)-3-hydroxyacyl-CoA. In a second dehydrogenation step, the (S)-3-hydroxyacyl-CoA is converted into the 3-ketoacyl-CoA ester in a reaction performed by one of multiple (S)-3-hydroxyacyl-CoA dehydrogenases. Finally, various 3-ketothiolases cleave the 3-ketoacyl-CoA into an acetyl-CoA unit and an acyl-CoA that is now shortened by 2 carbon atoms and can then undergo a new cycle of beta-oxidation.
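The carbon arithmetic of this cyclic process is simple enough to capture in a few lines of code. The following Python sketch is purely illustrative (the function and variable names are our own and not part of any published tool); it counts, for a saturated even-chain acyl-CoA, how many beta-oxidation cycles are required and how many acetyl-CoA, FADH2, and NADH equivalents result, assuming one FAD and one NAD+ are reduced per cycle as described above.

```python
def beta_oxidation_yield(carbons: int) -> dict:
    """Cycle and product count for a saturated, even-chain acyl-CoA.

    Each cycle removes 2 carbons as acetyl-CoA; the last cycle cleaves the
    final 4-carbon intermediate into 2 acetyl-CoA units. One FAD (via an
    ACAD) and one NAD+ (via a 3-hydroxyacyl-CoA dehydrogenase) are reduced
    per cycle. Illustrative only; names are hypothetical.
    """
    if carbons < 4 or carbons % 2 != 0:
        raise ValueError("sketch handles only even chains of >= 4 carbons")
    cycles = carbons // 2 - 1
    return {
        "cycles": cycles,
        "acetyl_coa": carbons // 2,
        "fadh2": cycles,
        "nadh": cycles,
    }

# Palmitoyl-CoA (C16): 7 cycles, 8 acetyl-CoA, 7 FADH2, 7 NADH
print(beta_oxidation_yield(16))
```

For palmitoyl-CoA (C16), this reproduces the familiar yield of 7 cycles and 8 acetyl-CoA units, consistent with the 4-step cycle outlined above.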
ENZYMOLOGY OF THE mFAO SYSTEM
In principle, each of the 4 steps of beta-oxidation is catalyzed by one of a variety of different enzymes, which each have their own specific substrate specificity. 1,2 Rather than giving a full account of all enzymes able to catalyze these 4 enzymatic steps, we will limit discussion to those enzymes for which a genetic deficiency has been described. With respect to the first step, 3 different ACADs belonging to the ACAD family, which has at least 11 members in humans, have been identified, each with a well-established role in mFAO. These are: 1) very-long-chain acyl-CoA dehydrogenase (VLCAD); 2) medium-chain acyl-CoA dehydrogenase (MCAD); and 3) short-chain acyl-CoA dehydrogenase (SCAD). Importantly, VLCAD is a dimer bound to the MIM, whereas MCAD and SCAD are soluble, matrix-localized tetramers. Together, these 3 enzymes cover the dehydrogenation of the full range of acyl-CoAs. At least 2 other ACADs (LCAD and ACAD9) have been claimed to play a role in mFAO, although this remains incompletely understood at present. ACADs are flavoproteins that make use of enzyme-bound FAD as an electron acceptor. Continued enzyme catalysis requires reoxidation of enzyme-bound FADH2, which is mediated by the electron transfer flavoprotein (ETF) system, which is composed of a soluble matrix protein named ETF that picks up electrons from ACAD-bound FADH2 and then donates these electrons to ETF dehydrogenase (ETFDH), which is the other component of the ETF system. Both ETF and ETFDH are flavoproteins and it is ETFDH that ultimately feeds the electrons into the respiratory chain at the level of coenzyme Q, thereby completing the cycle of events (Fig. 3). The subsequent steps of beta-oxidation are again catalyzed by multiple enzymes, and involve at least 2 different enzymes for each step. Mitochondrial trifunctional protein (MTP) plays a dominant role in the oxidation of long-chain FAs since this enzyme harbors 3 different activities, including enoyl-CoA hydratase, (S)-3-hydroxyacyl-CoA dehydrogenase, and 3-ketothiolase activities, which are all specific for long-chain intermediate substrates.
The enzyme is a hetero-octamer of 4 alpha- and 4 beta-subunits and is strongly bound to the MIM, just like VLCAD, which has led to the suggestion that VLCAD and MTP actually function as a single motor unit that pulls a long-chain acyl-CoA into the VLCAD-MTP complex and spits out a medium-chain acyl-CoA after repeated cycles of beta-oxidation within the VLCAD-MTP complex (Fig. 3). The medium-chain acyl-CoA then moves to the soluble, matrix space of mitochondria for final oxidation to acetyl-CoA by matrix-localized enzymes including MCAD and SCAD, a short-chain enoyl-CoA hydratase also named crotonase, a short-chain 3-hydroxyacyl-CoA dehydrogenase, and a short/medium-chain-specific thiolase named MCKAT, encoded by ECHS1, HADH, and ACAA2, respectively (Table 1). Apart from the enzymes listed above, several additional enzymes play a role in FA catabolism, including the enzymes required for the removal of double bonds in FAs. Table 1 lists the mFAO deficiencies known to date as subdivided into 2 groups including: 1) primary mFAO deficiencies, caused by mutations in genes coding for the various enzymes and transporters involved in mFAO, including the mitochondrial carnitine cycle as well as the cellular uptake of carnitine as discussed above; and 2) secondary disorders of mFAO, in which mFAO is impaired because of other factors affecting mFAO.
Primary disorders of mitochondrial FA oxidation
A general feature of all mFAO disorders is hypoketotic hypoglycemia, simply because the block in mFAO eliminates FAs as an important substrate for energy provision and thereby causes an enhanced reliance on glucose oxidation instead. Hypoketotic hypoglycemia can be life-threatening, especially when glycogen reserves are scarce, and may give rise to early death if not recognized in time. This is true for all mFAO disorders including medium-chain acyl-CoA dehydrogenase deficiency (MCADD). Prevention of hypoglycemia and timely treatment leads to an improved prognosis. This is one of the reasons why mFAO disorders have been included in newborn screening programs around the world, as discussed later.
In line with the important role of mFAO in the heart and skeletal muscle, patients with a defect in long-chain mFAO may suffer from cardiac and skeletal muscle abnormalities, especially severely affected patients with low residual enzyme activity. Indeed, hypertrophic or dilated cardiomyopathy has been documented in patients with deficiencies at the level of VLCAD, MTP, long-chain 3-hydroxyacyl-CoA dehydrogenase (LCHAD), CACT, CPT2, and OCTN2, but not CPT1A, which follows logically from the fact that CPT1B (not CPT1A) is the predominant CPT expressed in heart tissue. Arrhythmias and conduction defects are also frequently observed in these patients. Indeed, in a study of 107 mFAO-deficient patients, cardiac involvement occurred in >50% of patients: 67% of these patients presented with cardiomyopathy (mostly hypertrophic), and 47% had heartbeat disorders with various conduction abnormalities and arrhythmias responsible for collapse, near-miss, and sudden unexpected deaths. 19 All enzymatic deficiencies, except CPT1 and MCAD deficiency, were found to be associated with cardiac signs. Muscular signs were observed in 51% of patients, of whom 64% had myalgias or paroxysmal myoglobinuria and 29% had progressive proximal myopathy. These findings are supported by other studies in large cohorts of patients. [7][8][9][10] Chronic neurological presentations are rare, except in patients with LCHAD/MTP deficiency, in whom peripheral neuropathy and retinitis pigmentosa are frequent findings. 20 Other studies have confirmed that rhabdomyolysis, myalgia, and muscle weakness are frequent features, especially in patients presenting at later stages. Hepatomegaly and hepatic abnormalities are also observed in mFAO-deficient patients, albeit less frequently.
Secondary disorders of mitochondrial FA oxidation
mFAO can also be deficient because of other factors affecting mFAO, which may be either genetic in origin or not, for instance because of mutations in genes coding for different enzymes or transporters than those directly involved in mFAO. The prototype of this group of secondary mFAO disorders is glutaric aciduria type II, better known as multiple acyl-CoA dehydrogenase deficiency (MADD) (MIM231680), first described in 1976 by Przyrembel and colleagues. 21 Below, we briefly describe the different secondary mFAO deficiencies.
MADD
Following its first description in 1976, hundreds of patients with MADD have been described in the literature. [22][23][24] Historically, patients with MADD have been classified into 3 groups including a neonatal-onset form with or without congenital anomalies (type I/II) and a later-onset, relatively mild form (type III). Patients with the neonatal-onset form suffer from life-threatening complications, which include metabolic derangements, cardiomyopathy, leukodystrophy, and hypotonia. The clinical course of type III patients is much milder, with symptoms including recurrent hypoglycemia, exercise intolerance, and chronic fatigue. 23 Patients have traditionally been identified on the basis of clinical signs and symptoms, but also via neonatal screening, at least in some countries.
In patients affected by MADD, mFAO is impaired because of a defect in the ETF system, which is made up of 2 different proteins (ETF and ETFDH). ETF is a heterodimer of 2 nonidentical subunits (ETF-alpha and ETF-beta, encoded by ETFA and ETFB, respectively), whereas ETFDH is a homodimer encoded by the gene ETFDH. ETF acts as a mobile electron carrier that picks up electrons from various ACADs and delivers these electrons to ETFDH, which is a membrane-bound enzyme, whereas ETF is localized in the soluble, matrix space of mitochondria. The electrons handed over by ETF to ETFDH are subsequently fed into the respiratory chain by ETFDH at the level of ubiquinone. The end result of this cycle of events is that the different ACADs, as well as ETF and ETFDH, are all turned into their oxidized form again so that they can engage in another round of substrate oxidation.
The genetic basis of MADD is heterogeneous, and mutations in ETFA, ETFB, and ETFDH have been described in the literature as the underlying cause of MADD. Since ETF and ETFDH are both flavoproteins with FAD as an obligatory cofactor, riboflavin should be tried in every patient, thereby enabling the dysfunctional ETF system to operate to the maximal possible extent. In the literature, many patients with a riboflavin-responsive form of MADD have been described. 23 Recently, van Rijt and colleagues 25 devised a MADD disease severity scoring system (MADD-DS3) based on an extensive literature search encompassing 413 MADD patients. The results obtained allowed these authors to define 6 disease domains (cardiac, central nervous system, peripheral nervous system, respiratory, liver, and muscle) and this information was used to compile the scoring system. The newly devised scoring system was applied to 18 patients belonging to the Dutch MADD cohort, and a good correlation was demonstrated between the MADD-DS3 score and flux through the mFAO system, as assessed in fibroblasts from these patients. 25
Riboflavin transporter deficiencies
Brown-Vialetto-Van Laere syndrome (BVVL) (MIM211530) is a rare neurological disorder first described by Brown in 1894 and later by Vialetto and Van Laere. Patients mostly present with sensorineural deafness, bulbar palsy, and respiratory complications, with the age of onset varying between infancy and adulthood. 26 The clinical signs and symptoms of BVVL overlap with those of other conditions including Fazio-Londe disease (MIM211500), which was originally thought to be distinct from BVVL, but is now known to be a phenotypic variant of BVVL. 27 In 2010, Green and colleagues 28 were the first to report the molecular basis of BVVL, when they identified mutations in the gene SLC52A3 in a cohort of BVVL patients. SLC52A3 codes for a plasma membrane-bound riboflavin transporter (RFVT3), which if deficient causes a severe deficiency of intracellular riboflavin and thereby of flavin mononucleotide (FMN) and flavin adenine dinucleotide (FAD). The identification of SLC52A3 as a causative gene involved in BVVL was soon followed by the identification of mutations in another gene (SLC52A2) in patients negative for mutations in SLC52A3. 29,30 This newly identified gene also codes for a plasma membrane riboflavin transporter (RFVT2) with the same characteristics as RFVT3, albeit differentially expressed among tissues. Despite these differences, the clinical signs and symptoms of RFVT2- and RFVT3-deficient patients are virtually indistinguishable, as concluded from a study of 37 RFVT2-deficient patients in comparison with 33 RFVT3-deficient patients. 31 In many patients with either deficiency, plasma acylcarnitines have been measured and found to be abnormal. Indeed, 17 of 28 RFVT2-deficient patients showed abnormal acylcarnitines, indicative of a defect in mFAO, whereas 4 of 8 RFVT3-deficient patients showed abnormal acylcarnitines. 31 The identification of defects in the transport of riboflavin in BVVL patients has inspired supplementation with riboflavin, with very rewarding results both clinically and biochemically, including normalization of plasma acylcarnitine levels. 31,32
FAD synthase (FADS) deficiency
FADS deficiency was first described in 2016 in a cohort of 9 individuals from 7 unrelated families, all affected by a MADD-like disorder with respiratory chain dysfunction and a biochemical profile suggestive of MADD. 33 FADS is the product of the FLAD1 gene and produces different transcripts, which generate several isoforms. Two of these isoforms have been characterized in detail 34 and include a mitochondrial (FADS1) and a cytosolic (FADS2) isoform, both with FADS activity. In recent years a few additional patients have been identified. 35 Riboflavin has been found to be highly beneficial in FADS-deficient patients, especially in those who are more mildly affected. 33 Although not studied in detail yet, riboflavin could exert its effect by at least 1 of 2 different mechanisms. First, high levels of intracellular riboflavin lead to increased FMN levels, followed by enhanced flux through FADS, at least when FADS is not fully deficient. Secondly, riboflavin supplementation could affect the folding of mutant FADS proteins, either by itself or via FMN and/or FAD. 35,36
Mitochondrial FAD transporter deficiency
In 2016, Schiff et al. 37 reported the first identification of mitochondrial FAD transporter deficiency in a 14-year-old girl with recurrent exercise intolerance. Laboratory investigations revealed a MADD-like profile that could not be traced back to any of the known causes of MADD.
Subsequently, bi-allelic mutations were identified in the SLC25A32 gene, which codes for a member of the mitochondrial carrier family that was earlier described as carrying FAD across the mitochondrial membrane. 37 A second patient presented with a severe neuromuscular phenotype with early-onset ataxia, myoclonus, dysarthria, muscle weakness, and exercise intolerance. 38 Exome sequencing revealed a novel homozygous variant in the mitochondrial FAD transporter gene, in which the methionine (AUG) translation initiation start codon is deleted, resulting in the absence of full-length SLC25A32. Metabolite analysis in urine and plasma revealed the typical abnormalities described for MADD. Clinically, the patient improved upon riboflavin supplementation. 38 Apart from the secondary defects in mFAO described above, there are many more pathophysiological conditions in which mFAO is impaired, including diabetes, which we will not discuss here.
LABORATORY DIAGNOSIS OF PRIMARY mFAO DEFICIENCIES
For many years, correctly diagnosing patients affected by a particular mFAO deficiency has been difficult due to the lack of appropriate biomarkers in blood and the lack of sensitive and specific enzyme assays. This explains why candidate patients have long been subjected to (controlled) fasting and/or loading tests such as the sunflower oil loading test. The introduction of tandem mass spectrometry at the end of the 1990s in laboratories specializing in the laboratory diagnosis of inborn errors of metabolism has revolutionized the diagnostic process in many ways, especially in patients with mFAO deficiencies, since this technology finally allowed the analysis of acylcarnitines in plasma, which had long been difficult to analyze. Since these early days, many advances have taken place, and it is now fully clear that analysis of acylcarnitines is the method of choice to identify patients affected by a particular mFAO deficiency. Importantly, each mFAO deficiency gives rise to its own characteristic acylcarnitine profile (Fig. 4), which implies that it is usually immediately clear whether a certain patient is affected by a mFAO disorder and also which enzyme is likely to be deficient. This is extremely important for the regular postnatal diagnosis of patients who present with clinical signs and symptoms, as well as for patients who are picked up by newborn screening. In Amsterdam, we have set up enzymatic assays for all mFAO enzymes in fibroblasts, tissues, and more importantly in lymphocytes. 39 This technology allows us to establish the exact enzyme defect in mFAO within 1-2 days in any patient presenting with an abnormal acylcarnitine profile, regardless of whether the patient is detected by newborn screening or identified on the basis of clinical signs and symptoms. Once the enzyme defect has been established, mutational analysis is done to identify the underlying molecular defect in the gene involved. Although acylcarnitine analysis followed by enzyme activity measurements and molecular analysis often leads to a clear-cut identification of patients, we do come across patients in whom diagnosis is less straightforward. In such cases it is advised to do more detailed studies, either in vivo (via the sunflower oil loading test, although this is rarely done any longer at our center) and/or whole cell FA flux analysis in fibroblasts using radiolabeled FAs such as [3H]-oleic acid (C18:1) 40,41 and palmitate loading tests. 42 In recent years we have witnessed a change in the laboratory diagnosis of mFAO-deficient patients, and in fact any patient suspected to suffer from an inborn error of metabolism. Indeed, revolutionary developments in the field of DNA sequencing technology now allow exome and/or whole genome sequence analysis at an amazing speed, high fidelity, and relatively low cost, thereby turning the original laboratory approach from metabolite to enzyme to DNA upside down. As a result, in many cases molecular analysis, either in the form of targeted sequencing of particular genes or gene panels, or in the form of whole exome/genome sequencing, comes first after acylcarnitine profiling has been done. It is important to emphasize, however, that functional analyses remain of the utmost importance to verify the functional consequences of new mutations that are found. Indeed, it regularly happens that new variants of unknown significance are found that require functional studies, for instance in fibroblasts from patients, to establish whether or not they are causal for the disease.
This is all the more important since the consequences of certain mutations, especially those causing amino acid substitutions, are very difficult to predict using prediction programs like PolyPhen and SIFT. An additional complication that needs to be mentioned here is that many of the enzymes involved in mFAO operate as multimeric proteins made up of identical subunits (VLCAD, MCAD, SCAD, etc.) or nonidentical subunits, as in MTP. Since the molecular basis of many mFAO deficiencies is often heterogeneous, with different mutations from each parent, the consequences for the folding and catalytic efficiency of the resulting enzyme proteins are even more difficult to predict. For all these reasons we prefer to do detailed functional studies in fibroblasts of all mFAO-deficient patients, especially since whole-cell FA oxidation measurement in fibroblasts using oleate as a substrate is a powerful prognostic marker that we use to predict outcomes and to define personalized treatment/dietary strategies, as described for VLCADD for instance. 40,41
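As noted above, each mFAO deficiency produces its own characteristic acylcarnitine signature. The sketch below is a deliberately simplified, purely illustrative lookup of hallmark acylcarnitine species commonly reported for a few of these disorders; it is not a diagnostic algorithm, contains no cut-off values, and the dictionary and function names are our own.

```python
# Illustrative hallmark plasma acylcarnitine species (Cn notation) commonly
# reported for selected mFAO disorders; not diagnostic criteria or cut-offs.
HALLMARK_ACYLCARNITINES = {
    "MCAD deficiency":      ["C6", "C8", "C10:1"],          # C8 most prominent
    "VLCAD deficiency":     ["C14:1", "C14", "C16"],
    "CPT2/CACT deficiency": ["C16", "C18:1", "C18:2"],
    "LCHAD/MTP deficiency": ["C16-OH", "C18-OH", "C18:1-OH"],
    "MADD":                 ["C4", "C5", "C8", "C12", "C14", "C16"],
    "CPT1A deficiency":     ["elevated C0 with low C16 and C18"],
    "OCTN2 deficiency":     ["low C0 and low acylcarnitines overall"],
}

def hallmark_species(disorder: str) -> list[str]:
    """Return the illustrative hallmark species listed for a disorder."""
    return HALLMARK_ACYLCARNITINES.get(disorder, [])

print(hallmark_species("MCAD deficiency"))  # ['C6', 'C8', 'C10:1']
```

In practice, interpretation relies on quantitative profiles, ratios, and confirmatory enzymatic and molecular testing as described above; the lookup merely illustrates why the profile usually points directly to the deficient enzyme.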
NEONATAL SCREENING OF mFAO DEFICIENCIES
Early diagnosis has long been known to reduce the risk of mortality among mFAO-deficient patients. This is even true for "mild" deficiencies like MCADD. Indeed, a study by Iafolla et al. 43 in 1994 in 120 MCADD patients revealed that 23 of the 120 patients had died before the diagnosis was established and many of the 97 surviving patients had developmental and behavioral disabilities, failure to thrive, chronic muscle weakness, and cerebral palsy in different combinations. This explains why MCADD and other mFAO deficiencies have been included in newborn screening programs around the world. This is also true for the US, where these disorders are part of the newborn Recommended Uniform Screening Panel. Newborn screening occurs by means of acylcarnitine analysis in dried blood spots, which needs to be followed up by confirmatory testing to define the ultimate diagnosis. The latter is especially important because of the occurrence of so-called false-positives. In our center, we perform enzyme testing along with repeated acylcarnitine analysis as second-tier tests in blood samples from newborn children, which usually resolves whether a patient is truly enzyme-deficient or not. In addition to genetic confirmation, subsequent oleate flux analysis in fibroblasts is then performed to estimate the extent of the FA oxidation deficiency, especially since this has therapeutic consequences. 41 The importance of follow-up confirmatory testing is also clear from other studies, including that conducted by Pena et al., 44 who performed studies in 52 VLCADD patients. The majority of diagnoses in these 52 patients was established using a combination of 2 different assays (37 of 52), with the most common combination being plasma acylcarnitine analysis and genotyping (26 of 52), whereas several individuals (7 of 52) had 4 or more different assays completed to confirm the diagnosis. It should be noted that enzyme testing and/or functional studies in fibroblasts were only performed in a minority of these patients (17 of 52).
PATHOPHYSIOLOGY AND TREATMENT
Regarding the pathophysiology of mFAO disorders, 2 non-mutually exclusive mechanisms have been proposed: 1) energy deprivation as a consequence of the block in mFAO; and 2) toxicity caused by the accumulation of mFAO intermediates. The general notion is that energy deprivation is involved in all mFAO disorders, whereas toxicity caused by intermediates of mFAO only plays a role in patients affected by a defect in the oxidation of long-chain FAs. Indeed, it has been argued that the intermediates that accumulate as a consequence of a deficiency of an enzyme involved in long-chain FA oxidation (lcFAO) are at the basis of the cardiac and skeletal muscle abnormalities observed in all lcFAO-deficient patients and are also involved in retinopathy and neuropathy in LCHAD/MTP-deficient patients. The intermediates implicated in this toxicity include long-chain acyl-CoAs and long-chain acylcarnitines, which accumulate in cells and are known to interfere with many physiological processes. This is especially true for long-chain acyl-CoAs, which are known to inhibit a large variety of enzymatic reactions and are also powerful inhibitors of many mitochondrial metabolite transporters. 45,46 This includes the mitochondrial ATP/ADP carrier, which plays a key role in cellular metabolism by virtue of the fact that this carrier provides the extramitochondrial space with ATP synthesized by the oxidative phosphorylation system. It should be noted that much of this work has been done with isolated enzymes and at best with isolated mitochondria, but not in intact cells; therefore, there has always been some skepticism regarding whether these in vitro findings would also hold true for intact cell systems, especially since cells are equipped with a variety of different acyl-CoA binding proteins with high affinity towards acyl-CoAs. 47 The concept that the pathophysiology of mFAO disorders involves 2 different mechanisms (energy deprivation and toxicity due to accumulating intermediates) has formed the basis for the current treatment of patients, which is still very much dietary in nature. The treatment is aimed at preventing catabolism by avoidance of fasting. In addition, in patients with a defect in lcFAO, a long-chain triglyceride (LCT)-restricted diet, supplemented with medium-chain triglycerides (MCT), can be advised to bypass lcFAO for energy production. Intake of LCT and MCT is dependent on the specific type of lcFAO defect, the residual capacity to oxidize long-chain FAs, and the age of the patient. Treatment guideline meetings have been instrumental in defining the best treatment strategies. [48][49][50] Institution of the best treatment strategy remains difficult and should be based on each patient's residual capacity to oxidize long-chain FAs. To this end, we recently developed a treatment strategy for VLCAD-deficient patients based on a functional assay in fibroblasts that quantifies the extent to which mFAO is defective in each individual patient using tritiated oleate as substrate. 41 Apart from the dietary interventions described above, there are other therapeutic options worth mentioning here.
Anaplerotic therapy of lcFAO-deficient patients
The initial concept behind anaplerotic therapy came from Roe and colleagues, 51 and was based on the notion that a defect in lcFAO not only leads to a block in mFAO, but also may inhibit oxidation of glucose. This was supposed to be due to a deficit in citric acid cycle intermediates including oxaloacetate, thereby hampering the oxidation of pyruvate as derived from glucose to carbon dioxide and water. To circumvent this potential block in citric acid cycle activity, Roe and colleagues 51 proposed anaplerotic therapy as a new dietary approach for lcFAOdeficient patients and devised a brilliantly ingenious compound named triheptanoin, which is a triglyceride consisting of 3 heptanoic (C7) acid molecules attached to a glycerol backbone. Heptanoic acid can readily be oxidized just like any other medium-chain FA and thereby generates 2 acetyl-CoA units and 1 propionyl-CoA unit. The advantage of propionyl-CoA is that it feeds into the citric acid cycle at the level of succinyl-CoA via the concerted action of propionyl-CoA carboxylase and methylmalonyl-CoA mutase. Propionyl-CoA thereby fills up the citric acid cycle with 4-carbon intermediates including succinyl-CoA, succinate, fumarate, malate, and oxaloacetate, so that the oxidation of acetyl-CoA as generated from pyruvate via pyruvate dehydrogenase is no longer limited by oxaloacetate in the citrate synthase reaction. Following the initial work of Roe et al. [51][52][53] showing improvement of cardiac and skeletal muscle symptoms in a group of lcFAO-deficient patients, the potential of triheptanoin therapy has been investigated, notably by Vockley and colleagues. [54][55][56][57] Gillingham and colleagues 56 reported the results of a double-blind randomized controlled trial in 32 lcFAO-deficient patients, which revealed some improvement in cardiac parameters, including a decrease in the left ventricular wall mass and a small increase in the ejection fraction. Unfortunately, triheptanoin did not have any effect on rhabdomyolysis in these patients, which is especially disappointing since rhabdomyolytic crises are so disabling in lcFAO-deficient patients. Recently, Vockley and colleagues 54 reported the results of a single-arm, open-label phase 2 study in which the safety and efficacy of triheptanoin was studied as administered for 78 weeks to 29 pediatric and adult patients affected by a severe lcFAO deficiency. The results revealed a reduction in the rate of major clinical events compared to the pretreatment period, with improvements in walking exercise tolerance and increased health-related quality of life, suggesting that triheptanoin "may offer an improvement over existing disease management". The results of an open-label, long-term extension study were published very recently. 57
Bezafibrate
Bezafibrate is a peroxisome proliferator-activated receptor (PPAR) agonist. Upon binding a ligand, PPAR forms a heterodimer with the retinoic acid receptor RXR, followed by binding to specific response elements (PPRE) in the promoter regions of a large variety of different genes. Many of these PPAR-responsive genes code for proteins involved in lipid metabolism, including the genes coding for both mFAO and pFAO enzymes. This explains why feeding rats or mice with PPAR ligands like bezafibrate, clofibrate, and other fibrates stimulates FA oxidation through the induced expression of many of the genes coding for these beta-oxidation proteins. Djouadi and colleagues adopted this notion and discovered that the residual activity of VLCAD could be stimulated by simply adding bezafibrate to the culture medium of fibroblasts from patients with VLCAD deficiency, at least when there was some residual activity to be induced. Indeed, in patients with the severe form of VLCADD and no residual activity, bezafibrate had no effect. 58,59 Similar results were obtained in fibroblasts from patients with CPT2 deficiency and LCHAD/MTP deficiency. 60 These promising results formed the basis for a clinical trial in CPT2-deficient patients, which revealed improved exercise tolerance and a reduction in rhabdomyolytic crises in patients. 61,62 Furthermore, palmitate oxidation was increased when analyzed in vitro in muscle biopsies from these patients. Later work has questioned the potential of bezafibrate as a therapeutic agent for mFAO-deficient patients. 63 Indeed, a randomized clinical trial in CPT2-deficient patients revealed no beneficial effect of bezafibrate in terms of exercise tolerance. Furthermore, whole-body palmitate oxidation was not stimulated by bezafibrate. This work has reduced the initial enthusiasm about bezafibrate as a therapeutic agent. 63 Nevertheless, the idea of pharmacological upregulation of the residual activity of mFAO enzymes, making use of the fact that many of the genes involved contain a PPRE in their promoter region, remains highly attractive and should be followed up in future studies aiming to identify more selective PPAR agonists than bezafibrate.
Ketone bodies
The ketone bodies 3-hydroxybutyrate and acetoacetate are normally produced in the liver from FAs, followed by their transport to virtually all extrahepatic tissues, including the brain, where they serve as a readily oxidizable substrate, especially under fasting conditions (Fig. 5). The heart muscle can also oxidize ketone bodies and ketone body supplementation has earlier been considered as a powerful means to provide enough energy equivalents to patients. In fact, 3-hydroxybutyrate has been tried successfully in patients suffering from MADD, 64 in whom mFAO is defective because enzyme-bound FADH 2 cannot be reoxidized due to a defect in the ETF-ETFDH system, which ultimately donates the electrons coming from ACAD-bound FADH 2 to the respiratory chain at the level of ubiquinone (Fig. 5). A drawback of 3-hydroxybutyrate as a therapeutic option is the fact that 3-hydroxybutyrate is an anion at neutral pH; therefore, it cannot be administered in its acid form, but is instead usually given to patients as a potassium or sodium salt, which may be contraindicated in some patients, especially those with cardiac disease. To circumvent this potential salt problem, Clarke and colleagues devised a cleverly conceived alternative in which 3-hydroxybutyrate is chemically coupled to a second molecule (1,3-butanediol) via an ester linkage, giving rise to the compound (R)-3-hydroxybutyl-(R)-3-hydroxybutyrate (in short, a keto-ester). 65 Nutritional ketosis through the administration of this keto-ester has revealed metabolic and performance benefit in athletes during exercise. 66 We have recently tested the efficacy of this keto-ester in 5 VLCAD-deficient patients and found an improved muscular energy balance during exercise after ingestion of a single dose of the keto-ester. 67 These encouraging results obviously require further testing in a larger group of patients.
FUTURE DIRECTIONS
It is clear that much has been learned in recent years about the mFAO disorders, and the inclusion of mFAO disorders in newborn screening programs around the world now allows the timely identification of patients before the occurrence of irreversible damage that in some cases even culminates in early death. Nevertheless, much remains to be learned, especially with respect to the therapeutic options for patients and the question of which patient qualifies for what therapy. Studies aimed at inducing the residual activity of the deficient mFAO enzyme should be pursued following different strategies, including the search for much more potent PPAR activators than bezafibrate. In addition, the work on keto-esters as a source of readily oxidizable substrate should be expanded, especially because this therapy, in contrast to PPAR agonists, would also benefit patients with a severe phenotype due to non-inducible mutations. One other aspect that definitely requires additional work has to do with the cardiomyopathy and rhythm disturbances, which are serious complications in lcFAO-deficient patients and may cause early death, especially in patients with a severe deficiency due to a low level of residual activity. So far neither early detection by newborn screening nor rigorous dietary treatment has been able to rescue these patients. In order to try and find a solution to this problem, we have recently performed electrophysiological studies in cardiomyocytes derived from induced pluripotent stem cells generated from skin fibroblasts from 2 different VLCADD patients. 68 The mitochondrial booster resveratrol was found to mitigate the biochemical, electrophysiological, and intracellular calcium changes in cardiomyocytes from the mildly affected patients, but not in those from the severe patient. Importantly, the electrophysiological abnormalities in cardiomyocytes from both the severely and mildly affected patient were markedly corrected by etomoxir, which is a powerful inhibitor of CPT1. This finding suggests that the accumulation of long-chain acylcarnitines, rather than that of long-chain acyl-CoAs, is at the basis of the observed electrophysiological abnormalities, at least in the cardiomyocytes we studied (compare Fig. 6A and B). This conclusion would be in line with the notion that patients affected by CACT deficiency also show severe cardiac abnormalities which include arrhythmias and cardiomyopathy. CACT deficiency leads to the accumulation of acylcarnitines in the cytosol, but not inside mitochondria, so that the conclusion must be that long-chain acylcarnitines in the cytosol are the true toxic agents causing heart disease in lcFAO-deficient patients. This is in line with results from the 1990s in rodent cardiomyocytes that were exposed to high concentrations of acylcarnitines and displayed marked rhythm disturbances. [69][70][71] Future work will have to resolve whether a therapy based on (partial) inhibition of mFAO in lcFAO-deficient patients, thereby reducing intracellular acylcarnitine levels, would be a realistic therapeutic option for patients using etomoxir or some other inhibitor of CPT1. Such studies are underway.
Fig. 5. Ketogenesis and ketone oxidation. Acetyl-CoA coming from mFAO in the liver can be used for ketogenesis. Under specific nutritional and physiological conditions such as prolonged fasting, these ketones can be used as energy substrates in other tissues, notably the brain but also heart, muscle and kidney. Synthetic ketone esters (KE) can be orally consumed and provide an additional and alternative energy substrate, for instance in patients with a defect in mFAO. FA, fatty acid; BDH1, 3-hydroxybutyrate dehydrogenase type 1; SCOT/OXCT1, succinyl-CoA:acetoacetate transferase; ACAT1, mitochondrial acetoacetyl-CoA thiolase; HMG-CoA, 3-hydroxy-3-methylglutaryl-CoA.
Fig. 6. Rationale for substrate reduction therapy with etomoxir in mFAO defects. (A) A defect in beta-oxidation involves the impaired oxidation of fatty acids which leads to limited energy from these substrates but also to accumulation of acylcarnitines and acyl-CoAs. (B) Treatment with the CPT1 inhibitor etomoxir does not repair the primary block in energy production from fatty acids, but it prevents the accumulation of acylcarnitines. CPT, carnitine palmitoyltransferase; FFA, free fatty acid.
Infrastructure Investments, Regional Trade Agreements and Agricultural Market Integration in Mozambique
Integration of agricultural markets has been a topic of great interest in Mozambique. Numerous studies have been conducted to assess both domestic and regional integration of maize markets in the country, though with some contradictory results. In this study, domestic and regional market integration in Mozambique is assessed, focusing on maize as the main crop in the country. In contrast to previous work, this study takes into account new investments in infrastructure as well as changes in regional trade policies, using vector autoregressive (VAR) and vector error correction (VEC) models. The main findings suggest that maize markets in Mozambique are not efficiently integrated. This is particularly true between the deficit markets in the South and the surplus markets in the Centre and North of the country. At the regional level, market integration is also inefficient in many cases. Nonetheless, investments in infrastructure, such as the Zambezi River Bridge linking the north to the rest of the country, as well as changes in trade policies over the years, are shown to have a significant impact on maize price changes, particularly in the north. The overall results suggest there is room for improvement in maize value chain performance; in particular, there is scope for farmers to engage more in trade and for food loss to be reduced. Action may include investments in training programs and incentives to shift farmers from the current subsistence farming to a more commercial farming system approach.
Introduction
Spatial integration of markets and efficient price transmission are important to reduce price volatility of commodities and to lead to gains from trade (Baulch, 1997). However, despite advances in information technology systems, agricultural markets in Africa are still seen as islands isolated from the world. Conforti (2004) and Minot (2011) found overall poor short-term and long-term price transmission from world agricultural markets to domestic markets in Africa. Together, these two authors have evaluated around 11 agricultural markets in Africa (including Mozambique), all from the Sub-Saharan region with the exception of Egypt.
Other studies have examined market integration for maize within the Eastern and Southern Africa region and the relationship between different markets across the three main regions (South, Centre and North) of Mozambique and markets in the cross-border countries. However, they did not test for integration across the markets in Mozambique.
In general, the results from past studies have been contradictory. Some studies point to poor integration of maize markets between the domestic surplus and deficit regions, while others suggest a highly efficient spatial arbitrage condition that leads to market integration (Alemu & Biacuana, 2006;Penzhorn & Arndt, 2002;Tostão & Brorsen, 2005;Van Campenhout, 2012). The reality, however, is that maize trade between the domestic surplus and deficit markets is limited. Most of the maize consumed in the South is sourced from imports, particularly from South Africa (AFF, 2012;Cirera & Arndt, 2008;Traub et al., 2010).
Findings from Traub et al. (2010) and Davids et al. (2016) point to a long-run price relationship between Maputo and South Africa with regard to maize and maize meal price transmission. Likewise, maize markets from the Central and Northern provinces are integrated with markets from Malawi and Zambia (Paulo, 2011).
With regard to Mozambique, price transmission and market integration have been topics of great interest over the last two decades. Apart from the study by Minot (2011), a considerable number of other studies have been undertaken in relation to Mozambique agricultural commodities (Alemu & Biacuana, 2006;Davids, Schroeder, Meyer, & Chisanga, 2016;Paulo, 2011;Penzhorn & Arndt, 2002;Tostão & Brorsen, 2005;Traub, Myers, Jayne, & Meyer, 2010;Van Campenhout, 2012). In many of those studies the focus has been on domestic and regional price transmission for maize. This is not surprising since maize is an important crop in the Southern Africa region, and it is the most grown and consumed crop in Mozambique. Penzhorn and Arndt (2002), Tostão and Brorsen (2005), Alemu and Biacuana (2006) and Van Campenhout (2012) focused on domestic spatial integration of maize markets across the surplus and deficit regions for maize in Mozambique. Others have concentrated on cross-country price transmission analyses. Traub et al. (2010) focused on market integration between Maputo (South of Mozambique) and South Africa, whilst Paulo (2011) analyzed the spatial integration between the markets in the surplus region (North and Centre) of Mozambique and markets in neighboring countries (Malawi and Zambia) where informal exports frequently occur.
The limited domestic trade of maize (and other commodities) between the surplus and deficit regions in Mozambique has been associated with overall poor marketing conditions that lead to excessive transfer costs between these regions (Alemu & Biacuana, 2006;Cirera & Arndt, 2008;Penzhorn & Arndt, 2002;Tostão & Brorsen, 2005;Van Campenhout, 2012). This is also likely the case with nearby markets.
Despite some investment in road rehabilitation in the post-civil war period (after 1992), Cirera and Arndt (2008) found a weak positive impact of such investments on integration of nearby markets (24 km to 243 km away).
Further investments in transport infrastructure and road rehabilitation across the country have continued over the years. The most significant investment was the construction of the Zambezi River Bridge, opened in August 2009. At 2.4 km long it is one of the largest in Africa and it ensures a complete road connection between the North and South of Mozambique (Reis, Pedro, & Dalili, 2012). Other investments in road infrastructure and rehabilitation have also been made, particularly through projects funded by the World Bank and other donors (AfDB, 2006;IFAD, 2016;World Bank, 2018a). The main expected outcome from such investments is improved domestic market integration, which is crucial to boost domestic agricultural production and minimize food loss, particularly at the farm level. The rationale is that, even for markets that are not directly linked through trade, integration between them reduces arbitrage opportunities, which is important for fairer prices, particularly at the farm level. However, to date, no study has been carried out to assess the impact of the Zambezi River Bridge on domestic or regional agricultural market integration.
The aim of this study is to assess the integration of domestic and regional markets for maize in Mozambique, and it contributes to the literature in three key ways. Firstly, it is an updated study on domestic market integration in Mozambique. Secondly, it assesses the impact of the Zambezi River Bridge and changes in trade policies on domestic and regional markets' integration. Thirdly, it is broader in scope than previous studies in that it tests for market integration considering simultaneously domestic markets across the three regions of Mozambique and markets from countries where regional trade is prevalent. Outcomes from this study provide key information on the status of market integration domestically and regionally, which can assist with policy decisions aimed at improving the maize value chain performance in the country.
Theory Of Market Integration And Price Transmission Analyses
Market integration and price transmission are interrelated, though different concepts. Different markets are said to be spatially integrated if tradability of a particular good (or set of goods) exists between them, and (or) price shocks from one market are transmitted to the other market (Barrett & Li, 2002). Price transmission, however, is limited to the reaction of price in one market (or level) as a response to changes in prices in another market (or level) (Bunte, 2006;Minot, 2011). Price transmission is then a form of market integration and, in many cases the two terms are used interchangeably.
Transmission of price shocks (or price co-movement) across markets, and hence market integration, can be driven by different factors such as trade network infrastructures, volatility of transport costs and common climatic conditions (Fackler & Goodwin, 2001;Ravallion, 1986). Other factors such as "domestic and border regulation policies, market power, product heterogeneity and perishability, exchange rate risks, imperfect flow of information and expectations" (Listorti & Esposti, 2012, p. 84) are also regarded as drivers of spatial arbitrage and price transmission.
Price shock transmission can also occur between markets not directly connected through trade. For instance, two markets (A and B) not directly connected through trade can be strongly integrated if they both are connected directly in trade to a common third market (C) (Fackler & Goodwin, 2001).
Common approaches to assess market integration are based on the "Law of One Price" (LOP) (Baulch, 1997;Rapsomanikis, Hallam, & Conforti, 2006). This law states that, under trade and arbitrage conditions, the price differential of a homogenous good between two different markets is equivalent to the transfer costs of that particular good from the lower price market to the higher price market (Listorti & Esposti, 2012;Rapsomanikis et al., 2006). Assumptions behind LOP are very strong and restrictive as noted by Listorti and Esposti (2012) and Rapsomanikis et al. (2006). An alternative to LOP is the spatial arbitrage condition, which states that a condition for market integration requires the price differential of a homogenous commodity between two markets to be at most (and not necessarily) equal to the transfer cost (Fackler & Goodwin, 2001;Rapsomanikis et al., 2006).
LOP: P_i + r_ij = P_j
Spatial arbitrage condition: P_j - P_i ≤ r_ij
where P is the price of the commodity in markets i and j, and r_ij is the commodity's transfer cost from market i to j.
The spatial arbitrage condition is regarded as a "weak LOP". Despite the inequality sign, the weak LOP is viewed as an equilibrium concept. Accordingly, even if actual prices diverge from the weak LOP, in well-functioning markets actions from arbitrageurs will tend to move the price spread towards transfer costs (Fackler & Goodwin, 2001).
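As a concrete illustration of the weak LOP, the short sketch below checks whether an observed price spread between two markets stays within an assumed transfer-cost bound; all price and cost figures are hypothetical placeholders, not data from this study.

```python
def violates_arbitrage_bound(p_i, p_j, r_ij):
    """Return True if the spread P_j - P_i exceeds the transfer cost r_ij."""
    return (p_j - p_i) > r_ij

# Hypothetical (illustrative) values in USD/kg.
chimoio_price = 0.30
maputo_price = 0.45
transfer_cost = 0.10  # assumed Chimoio -> Maputo transfer cost

if violates_arbitrage_bound(chimoio_price, maputo_price, transfer_cost):
    print("Spread exceeds transfer costs: spatial arbitrage condition violated")
else:
    print("Spread within transfer costs: consistent with spatial arbitrage")
```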
Over the years, different approaches to market integration and price transmission analyses have been developed based on the LOP and the spatial arbitrage condition definitions. These include correlation analysis, static and (several variations of) dynamic regression models and the parity bounds model (Abdulai, 2000;Baulch, 1997;Fackler & Goodwin, 2001;Ravallion, 1986).
Dynamic regression models are the most widely used approaches, with many being based on autoregressive distributed lag (ARDL) models. The vector autoregressive (VAR) model is a good example of ARDL models used for this purpose. As the name suggests, VAR is a matrix of multiple regression models with different related and simultaneously treated endogenous variables, with lagged variables included to capture the dynamic and long-term market integration relationships (Popat, Griffith, & Mounter, 2017). This model is also based on the correlation among a set of variables; however, it also allows for the analysis of causality and impulse response relationships between endogenous variables (Lütkepohl, 2005). VAR is appropriate to describe the relationship between stationary (or integrated) series. In cases where the series are nonstationary but cointegrated, a variant of this model, a restricted VAR also called the Vector Error Correction (VEC) model, is often suggested (Hill, Griffiths, & Lim, 2012).
Examples of past studies relying on ARDL models for price transmission analysis for agricultural crop commodities include Ravallion (1986), Abdulai (2000), Conforti (2004), Alemu and Biacuana (2006), Traub et al. (2010), Paulo (2011), Minot (2011) and Davids et al. (2016). Alemu and Biacuana (2006) and Van Campenhout (2012), for instance, have used the threshold autoregressive (TAR) model. This model, unlike some other variants of ARDL (e.g., VAR and VEC models), is extended to include information on transfer costs. Traub et al. (2010) used the switching error correction model, which allows the inclusion of other sets of variables. Others like Chang and Griffith (1998), Taha and Hahn (2014) and Popat et al. (2017) have relied on ARDL model approaches for non-crop agricultural commodities.
An alternative to ARDL models for market integration assessment is the parity bounds model (PBM) proposed by Baulch (1997). In addition to the price data required for ARDL models, PBM demands information on transfer costs to assess spatial arbitrage efficiency - the validity of the spatial arbitrage condition - between markets. Particular advantages of this model are its explicit consideration in the analysis of the possibility of trade flow discontinuity and simultaneity of price determination between markets, as well as the issues related to stationarity and cointegration of time-series (Baulch, 1997). Penzhorn and Arndt (2002), Tostão and Brorsen (2005) and Cirera and Arndt (2008) have used PBM to assess the spatial arbitrage efficiency between maize markets in Mozambique. Some limitations of PBM, however, include the model's inability to account for the time-series properties of the data, and its results being dependent on distributional assumptions, which are not based on economic theory (Cirera & Arndt, 2008;Van Campenhout, 2012).
The Maize Trade Environment In Mozambique
Domestic trade of maize and other agricultural commodities is limited, particularly between the surplus and deficit regions in Mozambique. Figure 1 below displays the maize value chain behavior in the two markets. The limited trade has been attributed to the excessive transfer costs derived from the long distances and existing poor road infrastructure conditions between the regions. In contrast, the proximity to regional markets in neighboring countries has favored imports (by the South) and exports (from the North and Centre).
Current regional trade policies also seem to be favoring maize imports. Since 1997 maize (grain) import tariffs have settled at a low 2.5 percent (ad valorem) for Most Favored Nations (MFN), and have been waived (in some years) for specific countries or regions (Table 1). For maize flour, import tariffs are much higher.
Although the current import duty rates on maize and maize flour aim at protecting and promoting the development of the domestic milling industry, the goal of achieving self-sufficiency in maize supply is adversely affected. Maize import volumes have responded positively to import tariff suspensions. The long-term free trade agreements (FTAs) established between SADC countries beginning in 2012 seem to be linked to the increasing trend in maize imports observed in the following years (Fig. 2). Over the last 3 years (2015 to 2017), however, the rapid depreciation of the national currency (Metical) as a result of the economic downturn (FAO, 2017;World Bank, 2018b), aligned with other factors, seems to have dampened the incentives for increasing imports.
Methods
In this study a VEC model is used to assess maize market integration in Mozambique. The main advantage of this model is the relatively low demand for data. Usually data on prices are sufficient to effectively assess the relationship across markets.
The majority of past studies on maize market integration in Mozambique have relied solely on maize prices (Davids et al., 2016;Paulo, 2011) and transport costs (Penzhorn & Arndt, 2002;Tostão & Brorsen, 2005;Van Campenhout, 2012). Others have extended their models to capture seasonal variations (Alemu & Biacuana, 2006) or the impact of road rehabilitation (Cirera & Arndt, 2008), while a few studies (e.g., Traub et al., 2010) have included other variables such as tariffs and traded volumes. In this study, the impacts of the Zambezi River Bridge, macroeconomic changes in trade policies and seasonal variations are taken into account. Some of the models used in past studies are nonlinear models, whose usefulness is highlighted by Sexton, Kling and Carman (1991).
Model Specification
The standard form of the proposed VEC model for this study is described in Eq. 1. This equation derives from Johansen's methodology for error-correction models as described by Johansen (1995) and Mukherjee and Naka (1995). Chang and Griffith (1998) used a similar VEC model with dummy control regressors. For Eq. 1, the proposed set of dummy control regressors is summarized in Table 2. In Eq. 1: p = the lag length; x_t = (n x 1) vector of endogenous variables; Δx_t-i = (n x 1) vector of x_t-i in first differences; I = (n x n) identity matrix; A = (n x n) matrices of the unknown parameters for the endogenous regressors; π = matrix rank; π_0 = (n x 1) vector of intercepts; B = (J x J) vector of the unknown parameters for the exogenous (dummy) regressors; Z_t = (J x J) matrices of the exogenous (dummy) variables; ε_t = (n x 1) vector of the white-noise disturbance term (assumed to be independently and identically distributed with zero mean and variance matrix Σε).
Table 2. Endogenous variables and dummy control regressors:
x_1 = Log of (real) maize (retail) price for Maputo (South of Mozambique)
x_2 = Log of (real) maize (retail) price for Chimoio (Centre of Mozambique)
x_3 = Log of (real) maize (retail) price for Nampula (North of Mozambique)
x_4 = Log of (real) maize (retail) price for Lilongwe (Malawi)
x_5 = Log of (real) maize (wholesale) price for Randfontein (South Africa)
Z_1 = Dummy control regressor for the investments in road infrastructure (the Zambezi River Bridge); Z_1 = 1 since August 2009, and Z_1 = 0 otherwise
Z_2 = Dummy control regressor for the changes in trade policy as displayed in Figure 2; Z_2 = 1 for periods of FTA (with South Africa), and Z_2 = 0 otherwise
D_1-11 = Seasonal dummies (January to November); D is a subset of the Z defined in Equation 1
VEC are appropriate models for data series that are not stationary but cointegrated (Enders, 2015). A rapid diagnosis on the data suggests that maize prices (in most markets) are random walk nonstationary processes moving closely in the same direction for the majority of the sampling period. Therefore, VEC seems appropriate compared to VAR. Formal unit root tests are also performed to confirm the stationarity condition.
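Eq. 1 itself is not reproduced in this text; assuming the standard Johansen form Δx_t = π_0 + π x_{t-1} + Σ A_i Δx_{t-i} + B Z_t + ε_t implied by the definitions above, the Python/pandas sketch below shows one way the endogenous log real price matrix and the Table 2 dummy regressors could be assembled. The file name, column names and the FTA window are illustrative assumptions, not the study's actual data handling.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly real prices (USD), one column per market, monthly index.
prices = pd.read_csv("real_maize_prices.csv", index_col=0, parse_dates=True)
x = np.log(prices[["maputo", "chimoio", "nampula", "lilongwe", "randfontein"]])

exog = pd.DataFrame(index=x.index)
# Z1: Zambezi River Bridge dummy, equal to 1 from August 2009 onwards.
exog["z1_bridge"] = (x.index >= "2009-08-01").astype(int)
# Z2: FTA dummy; the window below is an assumed placeholder for the FTA periods.
exog["z2_fta"] = (x.index >= "2012-01-01").astype(int)
# D1-D11: seasonal dummies for January to November (December as the base month).
month_dummies = pd.get_dummies(x.index.month, prefix="m").set_index(x.index)
exog = pd.concat([exog, month_dummies[["m_%d" % m for m in range(1, 12)]]], axis=1)
```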
In VEC models, the right-hand side in Equation 1 is usually of primary interest. It displays information of the matrix rank (π), which represents the long-run impact in the model (Chang & Griffith, 1998). The order of π (also called matrix rank order, r) is important to determine the number of cointegrating vectors in the system. If r = 0, the VEC model is equivalent to a VAR in first differences and variables are not cointegrated, i.e., they do not share the same stochastic trend and the long-run relationship cannot be established from this model (Enders, 2015). If r = n, all variables are stationary and VEC is again equivalent to VAR, though in levels. Intermediate cases where the rank order is between 1 and n imply the existence of a single (if r = 1) or multiple (if 1 < r < n) co-integrating vectors (Chang & Griffith, 1998;Enders, 2015).
The matrix rank order can be determined by the characteristic roots tests (Equations 2 or 3). Critical values from these tests derive from the Monte Carlo approach and are dependent on degrees of freedom equivalent to the "number of nonstationary components under the null hypothesis (i.e. n - r)" and the presence of a constant and (or) drift terms in the model (Enders, 2015).
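As one possible implementation, the Johansen rank tests can be run with statsmodels in Python, continuing from the price matrix x built above; the deterministic-term setting shown is indicative only and should mirror the model actually estimated.

```python
from statsmodels.tsa.vector_ar.vecm import select_coint_rank

# Cointegration rank for one market pair (e.g. Maputo-Chimoio),
# using the trace and maximum-eigenvalue statistics.
pair = x[["maputo", "chimoio"]]
rank_trace = select_coint_rank(pair, det_order=0, k_ar_diff=1,
                               method="trace", signif=0.05)
rank_maxeig = select_coint_rank(pair, det_order=0, k_ar_diff=1,
                                method="maxeig", signif=0.05)
print(rank_trace.summary())
print("Rank selected by the trace test:", rank_trace.rank)
print("Rank selected by the max-eigenvalue test:", rank_maxeig.rank)
```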
Another property of π is that it can also be used to determine the speed of adjustment parameters (α), which is a measure of the long-run dynamics (Chang & Griffith, 1998;Enders, 2015). As described in Johansen's methodology, the matrix rank order can be viewed as a product between the matrices α and cointegrating parameters (β), i.e., π = αβ′ (Enders, 2015). Chang and Griffith (1998, p. 372) summarize that: "A large (small) value of α means that the system will respond to a deviation from the long-run equilibrium with a rapid (slow) adjustment. On the other hand, if the αs are zero for some equations, it implies that the corresponding variables do not respond to the disequilibrium error and, hence, may be weakly exogenous." Equation 1 is estimated as a bivariate model for each pairwise market using Eviews 8. If market pairs are not cointegrated, Eq. 1 is reduced to the appropriate VAR form.
The VEC model is estimated using Johansen's methodology. Overall, four main steps are required to implement VEC models, and parameters can be consistently estimated by the maximum likelihood estimator (Enders, 2015). The steps include: (1) variables pretesting (for lag length and order of integration); (2) model estimation (as in Eq. 1), rank determination and white-noise properties tests; (3) if white-noise properties are satisfied, analyses of the normalized cointegrating vector(s) and the speed of adjustment coefficients; and (4) assessment of the model adequacy (causality tests).
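A hedged Python/statsmodels analogue of this Eviews workflow for a single market pair might look as follows, reusing the pair and exog objects from the sketches above; the lag order and deterministic terms are placeholders to be set from the pretests.

```python
from statsmodels.tsa.vector_ar.vecm import VECM

# Bivariate VEC model with the exogenous dummy regressors from Table 2.
model = VECM(pair, exog=exog.values, k_ar_diff=1, coint_rank=1,
             deterministic="ci")  # constant restricted to the cointegration relation
res = model.fit()

print(res.alpha)    # speed-of-adjustment coefficients (long-run dynamics)
print(res.beta)     # cointegrating (long-run) parameters
print(res.summary())
```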
Lag Length and Stationary Tests
The first step in estimating VEC (and VAR) models includes the pre-tests to determine the variables lag length and order of integration. Appropriate lag order selection, in particular, has the property of returning the minimum mean square errors (Lütkepohl, 2005) and, hence, (ceteris paribus) leading to more consistent parameter estimates and robust post-estimation tests. Different procedures for lag order selection can be found in the literature. According to Lütkepohl (2005), the Schwarz Bayesian information criterion (SBIC or SC) is more consistent compared to others. Hence, SBIC is chosen for this study.
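A minimal sketch of the SBIC-based lag choice in Python, continuing from the pair defined above (the deterministic terms should match the final specification):

```python
from statsmodels.tsa.vector_ar.vecm import select_order

# Information-criteria based choice of the number of lagged differences.
lag_order = select_order(pair, maxlags=6, deterministic="ci")
print(lag_order.summary())
print("Number of lagged differences chosen by BIC (SBIC):", lag_order.bic)
```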
Although VEC models can reveal the presence (or absence) of cointegration among series regardless of their integration order, Enders (2015) recommends testing for the series' order of integration. Augmented Dickey-Fuller (ADF) and Phillips-Perron tests are some of the tests used for this purpose (Enders, 2015;Hill et al., 2012). The former is constructed under the assumption of residuals independence with constant variance whilst the "Phillips-Perron test [, which is a modification of the Dickey-Fuller test,] allows the disturbances to be weakly dependent and heterogeneously distributed" (Enders, 1994, p. 239).
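As an illustration, the ADF test is available in statsmodels and a Phillips-Perron implementation can be taken from the third-party arch package; this is a tooling assumption, not a description of the software the authors used, and it again reuses the log price matrix x from above.

```python
from statsmodels.tsa.stattools import adfuller
from arch.unitroot import PhillipsPerron

# Unit root pretests on each log real price series.
for market in x.columns:
    adf_stat, adf_pvalue, *_ = adfuller(x[market], regression="c", autolag="BIC")
    pp = PhillipsPerron(x[market], trend="c")
    print(f"{market}: ADF p-value = {adf_pvalue:.3f}, PP p-value = {pp.pvalue:.3f}")
```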
Post-estimation Tests
Significance tests on the α's are important to identify weakly exogenous variables in the system (i.e., whether any α = 0), whilst tests on the β's are relevant to identify the system and establish the r linear combinations of variables necessary to construct meaningful α parameters through appropriate normalization of the β's (Johansen, 1995). The test statistic for the purpose of testing individual (or a linear combination of the same) parameter type is described as a likelihood ratio test (LR). This test involves comparing restricted to unrestricted models, and it follows a chi-square distribution with degrees of freedom equal to the number of restrictions. In the case of a single cointegrating vector (r = 1) the LR test is asymptotically equivalent to a t-test for a single coefficient (Enders, 2015).
Other tests such as the impulse response function are also recommended as complementary to VEC analysis. Lütkepohl (2005, p. 262) notes that "impulse responses may give a better picture of the relations between variables". Chang and Griffith (1998), Listorti and Esposti (2012) and Popat et al. (2017) are some examples of past studies that applied the impulse response function as complementary to VAR or VEC model analyses. Broadly speaking, impulse response functions are useful to assess the adjustment response of a variable to shocks to another variable within the system (Hill et al., 2012;Lütkepohl, 2005). For this study, however, the focus is limited to the degree of integration between markets and the significance of the estimated coefficients. Assessing the response of one market to shocks in the other market is beyond the scope of this study.
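For completeness, impulse responses can be obtained directly from the fitted VECM in the earlier sketch, even though they are not pursued further here:

```python
import matplotlib.pyplot as plt

# Responses over a 12-month horizon to a shock in each endogenous price series.
irf = res.irf(periods=12)
irf.plot()
plt.show()
```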
Data
This study is based on monthly price series from January 2007 to December 2015. Prices from Mozambique (Maputo, Chimoio and Nampula) were gathered from the Ministry of Agriculture and Food Security (MASA) and converted to USD using the monthly exchange rate obtained from the Central Bank. FAO GIEWS was the source of price data from Malawi (Lilongwe) and South Africa (Randfontein). The selected markets are similar to those used in previous studies. All maize prices are at the retail level, except for South Africa where wholesale prices are the only prices reported. All prices are converted to real terms using Mozambique's consumer price index provided by the Central Bank and the National Institute of Statistics (adjusted to the base period of December 2010), and used in log form. The trends of these prices over time for each market are displayed in Fig. 3. Descriptive statistics of real prices are presented in Table 3. Overall, price series variability ranges from 20 to 35 percent, with the series for Maputo being the most stable.
Data availability and access is a major limitation for this study. Only price data could be accessed. Some (10) cases of missing data were observed in the dataset, all for Lilongwe from January to October 2011. The missing data were replaced with information from the closest market for which data were available. Nsanje (in Malawi) was used as a proxy for Lilongwe, and information was gathered from February to October 2011. A linear interpolation between the adjacent months was then used to estimate a price for January.
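The gap-filling step could be scripted along the following lines; the proxy choice (Nsanje for Lilongwe) follows the text, while the file layout and column names are assumptions made for illustration.

```python
import pandas as pd

lilongwe = pd.read_csv("lilongwe_prices.csv", index_col=0, parse_dates=True)["price"]
nsanje = pd.read_csv("nsanje_prices.csv", index_col=0, parse_dates=True)["price"]

# Ensure a complete monthly index over the sample period (Jan 2007 - Dec 2015).
full_index = pd.date_range("2007-01-01", "2015-12-01", freq="MS")
lilongwe = lilongwe.reindex(full_index)

# February to October 2011 are missing for Lilongwe: use nearby Nsanje as a proxy.
gap = pd.date_range("2011-02-01", "2011-10-01", freq="MS")
lilongwe.loc[gap] = nsanje.loc[gap]

# January 2011 is then estimated by linear interpolation between adjacent months.
lilongwe = lilongwe.sort_index().interpolate(method="linear")
```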
Another important feature of the dataset is the timing of coverage. Prices gathered from MASA were up to December 2015. This captures the initial period of the Metical rapid depreciation as shown in Fig. 4. However, it is unlikely that this has much impact on the model since the dataset used does not extend to following years.
Estimated Bivariate Models
Based on the outcomes from the preliminary tests, VAR(2) and VEC(2) models are estimated in this study.
For the sake of simplicity, full results from the preliminary and post-estimation tests are presented in the Supplementary Materials. Overall, the estimated VAR and VEC models display consistent results. LM and White tests reveal no issues with autocorrelation and heteroskedasticity, respectively, for each of the 10 bivariate models at the lag order selected. The only issue is the violation of the normality assumption.
However, "in large samples the maximum likelihood estimator […] has a probability distribution that is approximately normal" (Hill et al., 2012, p. 723). Results from the bivariate models are reported in Table 4.
Domestic Market Integration
Most recent data suggest that Mozambique is still a net importer of maize despite the surpluses produced in the Northern and Central provinces. Annually, over USD 25 million is spent on maize imports (UN COMTRADE, 2018). In contrast, the maize surplus is often partly lost at the farm level and traded informally to neighboring countries such as Malawi, Zambia and Zimbabwe (Cugala et al., 2017;FEWSNET, 2018;Hugo, 2008).
Nevertheless, the results in Table 4 suggest that Maputo is integrated with markets from the Centre (Chimoio) and North (Nampula). The integration, however, is not symmetric. For instance, a 1 percent increase in maize prices in Chimoio has a significant impact in Maputo, leading to a 0.20 percent maize price increase in the latter market. Conversely, a 1 percent increase in maize prices in Maputo has no significant causality impact on maize price changes in Chimoio and Nampula. Whilst such an outcome is generally expected given that Maputo is a deficit market for maize and others are described as maize surplus markets, the magnitudes of the significant coefficients reported are curious. Given the excessive transportation costs between the North and South of Mozambique (Coughlin, 2006), one would expect Maputo to be more integrated with Chimoio than with Nampula. Nonetheless, the findings from this study suggest the opposite.
One explanation relates to the expansion of the milling industry from the South to the Central and Northern provinces of Mozambique, which happened during the same period. One of the biggest millers operating in the Central region was launched in 2005 (Agriterra Ltd, 2019; World Bank, 2012), and a number of others operating in the Northern provinces were launched about the same time. This expansion of the milling industry has potentially increased the North's demand for maize. This may have led to some degree of competition between the North and South for the maize produced in the Centre. This is likely a classic example of two apparently unrelated markets being integrated through sharing a common third market supplier.
The Central region of Mozambique (where Chimoio is located) is the major producer of maize, contributing around 60[1] percent of national production (MASA, 2016). With the Central region presumably playing an important role as a common domestic maize supplier for both the North and South, the positive and significant relationship between Maputo and Nampula seems acceptable. Since Maputo (and the South in general) is by far the major consumption market for maize domestically (Alemu & Biacuana, 2006;Penzhorn & Arndt, 2002), a larger response to price changes in Nampula would be expected as an incentive to attract sellers from the Centre. In contrast, price changes in Maputo are likely to have a minimal (or no significant) effect on price changes elsewhere since the North is also an important production region for maize.
Chambo (2013) also found a positive and significant impact of maize price changes in Nampula on prices in Maputo. That study used weekly data between January 2007 and May 2013 to assess maize price transmission at the wholesale level across Maputo, Nampula and South African markets using a similar methodology. To explain these findings Chambo (2013) points out that "a considerable part of the domestic maize consumed in Maputo is sourced from Nampula", which is stated to be "the major producer province of maize in Mozambique" (Chambo, 2013, p. 60). This view, however, does not match the data reported by MASA (2016) for the years 2012, 2014 and 2015, which suggest Tete, Zambezia and Manica (all in the Central region) were the top three producers of maize, respectively. Santos and Tschirley (1999) also suggest infrequent trade of maize between the two regions due mainly to excessive transaction costs. Even though data on maize trade across the domestic markets could not be accessed, Chambo's arguments are hardly convincing.
Van Campenhout (2012), using monthly data from January 2000 to February 2011, also studied the maize price relationship between Maputo and Nampula, using autoregressive, TAR and flexible TAR models. The author found a negative relationship with estimated coefficients ranging from about -0.10 to -0.06. This inverse relationship is not clearly discussed by the author.
Maize market integration between Maputo and Chimoio is the most studied relationship in recent years. Penzhorn and Arndt (2002), Tostão and Brorsen (2005), Alemu and Biacuana (2006) and Van Campenhout (2012) are some examples of past studies focusing on market integration between these two markets. Past results, however, are not congruent. The findings from this study point to Maputo being integrated with Chimoio (and not the opposite), in accordance with findings from Penzhorn and Arndt (2002) and Alemu and Biacuana (2006). Conclusions from other studies have been contradictory. For instance, using the PBM approach with weekly data, Penzhorn and Arndt (2002) point to market integration between Maputo and Chimoio more than 75 percent of the time during the period 1993-1998.
In contrast, Tostão and Brorsen (2005), in analyzing the relationship between the same markets using the same methodological approach, found market inefficiency (and hence non-integration) over 80 percent of the time using monthly data over the period 1994-2001. Results from other studies are also intriguing.
Whilst overall results from Alemu and Biacuana (2006) point to a strong integration and a positive price relationship between the two, Van Campenhout (2012) using a similar approach (TAR model) identified a negative relationship.
With regard to Chimoio and Nampula, a similar outcome to the Maputo and Chimoio integration would be expected since the Centre is likely a maize supplier to the North and South. Nonetheless, the results from Table 4 point to a symmetric significant price transmission between Chimoio and Nampula, with Chimoio's responsiveness being about twice the price changes in Nampula, and Nampula's responsiveness being about 0.50 percent to a 1 percent price change in Chimoio. This outcome suggests one of two things: either (i) buyers from Nampula have low bargaining power, or (ii) excessive transfer costs exist between the two markets. The latter seems more plausible. Overall access to production centers in the country is difficult due to poor infrastructure, and until mid-2009 the road connection between the North and the rest of the country was interrupted, and access was only possible by ferry (Reis et al., 2012).
Overall, the results suggest that domestic maize markets are poorly integrated. Whilst prices are transmitted from the North and Centre to the South, the opposite doesn't hold. Between the North and Centre, a symmetric significant price transmission is observed. However, market inefficiency is still an issue, with price transmission not being proportional. If the LOP holds, the price differential between markets should only be explained by changes in transfer costs, in which case symmetric and proportional price changes (in relative terms) would be expected unless access costs display some volatility. The main likely outcome from such poor market integration is the overall low domestic trade, which may have some implications for food losses, mainly at the surplus markets, and for the South's strong reliance on imports. Considering the current downturn in the domestic economy (World Bank, 2018b), the Government of Mozambique should seek to reduce the dependence on overall imports by better linking the surplus and deficit regions. Alternatively, to avoid (or minimize) food losses efficiently, surplus markets should at least be price integrated with regional markets with trade occurring between them.
Regional Market Integration: Mozambique and South Africa
In small-scale producing countries such as Mozambique, regional market integration is crucial to ensure that domestic markets are not islands isolated from the world or region, and that changes in domestic prices are fairer in an open economy context. With regard to the relationship between Mozambique and South Africa, a number of interesting and intriguing results are identified from Table 4. Overall, domestic markets don't respond to price changes in South Africa in the long run. Whilst these results would seem reasonable for the Central and Northern markets domestically, it may not be so for Maputo. Almost all maize processed in Maputo is imported, mainly from South Africa (World Bank, 2012). That being the case, it is likely that domestic maize grain traded in Maputo goes to a different market segment, which is smaller and non-responsive to price changes in South Africa. Traub et al. (2010) also found a nonsignificant long-run price relationship between Maputo and South Africa. Nonetheless, these authors found some evidence of a significant relationship between maize meal prices in Maputo and maize grain prices in South Africa.
Possibly the most interesting result on the maize price relationship between the two countries is the significant impact of price changes in Nampula on South Africa. There is no apparent trade connection between Nampula and South Africa, and they don't share a common trade partner that would explain such an outcome. Also, considering the relative size of the markets in terms of maize production, it seems intuitive that price changes in South Africa would cause price changes in Nampula and not the opposite.
Closer inspection of Fig. 3 shows that maize prices in Chimoio, Nampula and Randfontein (South Africa) move closely together for most of the sampling period. The significant price relationship between South Africa and Nampula can be an indication of other (market or non-market related) factors that lead to price co-movements across these markets and that are not effectively captured in this study.
Regional Market Integration: Mozambique and Malawi
The outcomes from the maize price relationships between Mozambique and Malawi in this study are consistent with theory. As shown in Table 4, markets in Central and Northern Mozambique are the only ones that display a significant and symmetric price relationship with Malawi. With Malawi being the main destination of Mozambique's informal exports of maize, this outcome is clearly consistent. However, whilst market integration between Chimoio and Malawi seems more efficient, with almost proportional price transmission between the two, the same does not seem true for Nampula and Malawi. Prices in Nampula react by almost 0.6 percent to a 1 percent price change in Malawi, whilst prices in Malawi react by around 1.6 times to price changes in Nampula. This is likely an indication of lower trade frequency between Malawi and Nampula compared to Malawi and Chimoio, and/or higher access costs for the Malawian imports from Nampula compared to Chimoio.
Results from Davids et al. (2016) also point to a significant and symmetric maize price transmission between Nampula and Lilongwe. In either direction (Nampula - Lilongwe or Lilongwe - Nampula), Davids et al. (2016) found that price changes in each market react similarly (between 0.54 and 0.58) to a 1 percent change in prices in the other market. This is close to the findings in Table 4 regarding the impact of price changes in Lilongwe on Nampula. These authors' model, however, did not account for any control regressors, which could have improved the magnitude of the estimates, particularly with regard to the impact of price changes in Nampula on Lilongwe.
Impact of Infrastructure Investments and FTA
The overall results in Table 4 suggest that the Zambezi River Bridge and FTAs have mostly significant impacts on maize price changes in Nampula. The negative coefficient for the Zambezi River Bridge suggests that such investments may have lowered the access costs to the Northern markets (Nampula), which in turn may have contributed to lower prices in those markets. In contrast, in terms of FTAs there are positive and significant impacts for every price equation for Nampula. On one hand this could seem counterintuitive; on the other, it may reveal some preference for maize produced in the North by regional importers such as Malawi. That being the case, FTAs are effective in promoting regional trade particularly from that region of the country, which may have a substantial role in minimizing food loss and improving the welfare of maize farmers and traders.
Conclusion
The results from this study indicate that maize markets in Mozambique are becoming more integrated domestically and regionally. However, integration is not symmetric in some instances, as is the case for the South. Whilst Maputo seems to be integrated with the Centre and North, domestically these two markets are only integrated with each other. Even in this case of the Centre and North, market integration is not yet efficient as the coefficients suggest some non-proportional price transmission. At the regional level, even where market integration is found to be significant, it is also inefficient. The only exception is for the Centre (Chimoio) and Malawi. The investments in road infrastructure such as the Zambezi River Bridge have had significant impacts on maize pricing in the North (Nampula), contributing to lower prices in these markets. In contrast, FTAs have had a positive and significant impact, increasing maize prices in these markets. This may be an indication of some regional preference for maize produced in the North. This is also highlighted by the positive and significant market integration identified between Nampula and Lilongwe.
With maize markets becoming more integrated domestically and regionally, it is likely that efficiency in the overall maize value chain in Mozambique will be improved, with farmers engaging more in trade and food loss being minimized. However, for this to happen investments in training programs and incentives should be provided to shift farmers from subsistence farming to a more commercial farming system approach. Incentives should target improvements in infrastructure to promote trade and market linkages as well as minimize postharvest losses.
Overall results from this research are based on a linear modelling approach. In future research, other models that account for the nonlinearities in the market integration status of many agricultural commodities should also be considered. Nonlinear models could reveal other features of the integration status not efficiently or effectively captured by linear models. These models could be integrated with other techniques to estimate other useful, though unavailable, information (e.g., transaction costs) that is required for implementing the nonlinear models. Outcomes from that new proposed research could be important to reassess the current results.
Declarations
Compliance with Ethical Standards
All secondary datasets used in this article were gathered online (from websites or documents) and from public institutions. Consent has been given for the use of datasets from public institutions which are not available online.
Conflict of Interest
Authors of this article declare that there are no conflicts of interest related to this publication.
Explaining the t-tbar asymmetry with a light axigluon
We propose an axigluon with mass between 400 and 450 GeV and flavor universal couplings to quarks to explain the Tevatron t-tbar forward-backward asymmetry. The model predicts a small negative asymmetry for t-tbar pairs with invariant mass below 450 GeV and a large positive asymmetry above 450 GeV. The asymmetry arises from interference between s-channel gluon and axigluon diagrams and requires a relatively weakly coupled axigluon ($g_{a} = g_{qcd}/3$). Axigluon-gluon interference does not contribute to the t-tbar cross section. New contributions to the cross section arise only at fourth order in the axigluon coupling and are very small for a sufficiently broad axigluon. Dijet measurements do not significantly constrain the axigluon couplings. We propose several possible UV completions of the phenomenological axigluon which explain the required small couplings and large width. Such UV completions necessarily contain new colored fermions or scalars below the axigluon mass and predict multi-jet events with large cross sections at the Tevatron and LHC.
I. INTRODUCTION
This paper proposes a light axigluon to explain the asymmetry observed in the production of tt pairs at the Tevatron. The asymmetry has been observed in events where both tops decay leptonically [1] as well as in semi-leptonic events [2][3][4][5], and it significantly exceeds the Standard Model (SM) prediction [6][7][8][9][10]. Particularly striking is the mass dependent asymmetry measured at CDF [4]. It shows that most of the asymmetry arises from tt events with high invariant masses, while events with low invariant masses may even have a negative asymmetry. A number of different models with new physics contributions to the asymmetry have been suggested. Here we explore the effects of a weakly coupled axigluon [61][62][63] with a mass slightly below 450 GeV. The mass is chosen to coincide with the scale √s = m_tt at which CDF observed a change-over from negative to positive asymmetry. In our model, the asymmetry arises from the axigluon-gluon interference term of the differential cross section (Figure 1). This term is proportional to the s-channel axigluon propagator which changes sign at the mass of the axigluon M_a. The signs are such that the asymmetry is negative for s < M_a^2 and positive for s > M_a^2, as suggested by the CDF data. It is also interesting to consider axigluons with masses below the tt threshold. Then the asymmetry is only very weakly s-dependent and positive. Motivated by the sign change of the asymmetry in the CDF data we continue to focus on values of M_a between 400 and 450 GeV in this paper. Note that our axigluon has flavor universal couplings to all quarks and therefore no constraints from flavor physics are expected.
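The origin of the sign change can be illustrated numerically: the interference contribution is proportional to the real part of the s-channel axigluon propagator, Re[1/(s - M_a^2 + i M_a Γ_a)] = (s - M_a^2)/[(s - M_a^2)^2 + M_a^2 Γ_a^2], which is negative below the resonance and positive above it. The short script below, with a representative mass and width, only illustrates this propagator behavior; it is not the full matrix-element calculation discussed below.

```python
import numpy as np

M_a = 420.0             # axigluon mass in GeV (representative value from the text)
Gamma_a = 0.15 * M_a    # representative width, 15% of the mass

m_tt = np.linspace(350.0, 800.0, 10)   # invariant mass grid in GeV
s = m_tt**2

# Real part of the s-channel propagator 1/(s - M_a^2 + i*M_a*Gamma_a):
# negative for s < M_a^2, positive for s > M_a^2, so the interference-induced
# asymmetry flips sign at m_tt = M_a.
re_prop = (s - M_a**2) / ((s - M_a**2) ** 2 + (M_a * Gamma_a) ** 2)

for m, r in zip(m_tt, re_prop):
    print(f"m_tt = {m:6.1f} GeV   Re[propagator] = {r:+.3e} GeV^-2")
```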
To demonstrate that we can fit all relevant data, we compute the $t\bar{t}$ differential cross section as a function of the axigluon mass, coupling to quarks $g_a$, and width $\Gamma_a$. The color- and spin-summed and averaged squared matrix element for the process $u(p_1)\bar{u}(p_2) \to t(k_1)\bar{t}(k_2)$ is given in [11,63]. [Figure caption: The three curves correspond to axigluons with mass 420 GeV which each produce a 30% asymmetry from new physics in the 450 GeV and above invariant mass bin. Note that the asymmetry is negative below the resonance of the axigluon. All three example points predict about -5% asymmetry when integrated from the $t\bar{t}$ threshold to 450 GeV. To obtain an estimate for the total new physics + QCD asymmetry, one can simply add the SM asymmetry (about 10% in the high invariant mass bin).]
Here we used the partonic Mandelstam variables $s \equiv (p_1 + p_2)^2$, $t_t \equiv (p_1 - k_1)^2 - m_t^2$ and $u_t \equiv (p_1 - k_2)^2 - m_t^2$. In terms of the top quark velocity $\beta \equiv \sqrt{1 - 4 m_t^2/s}$ and the scattering angle $\theta$ between the outgoing top and the incoming quark in the CM frame we have $t_t = -s(1 - \beta\cos\theta)/2$ and $u_t = -s(1 + \beta\cos\theta)/2$.
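As a quick numerical cross-check of these kinematic relations (a sketch only; the values of $m_t$, $s$ and $\theta$ below are arbitrary illustrations, not inputs used in the paper), one can build the four-momenta in the CM frame and verify the $\cos\theta$ expressions together with $t_t + u_t = -s$:

```python
import numpy as np

m_t = 172.5            # GeV, illustrative top mass
s = 450.0**2           # GeV^2, partonic invariant mass squared
theta = 0.7            # arbitrary scattering angle in the CM frame

E = np.sqrt(s) / 2.0                      # beam and top energies in the CM frame
beta = np.sqrt(1.0 - 4.0 * m_t**2 / s)
p = beta * E                              # top three-momentum magnitude

# Four-momenta (E, px, py, pz) of the incoming quark and the outgoing tops
p1 = np.array([E, 0.0, 0.0, E])
k1 = np.array([E, p * np.sin(theta), 0.0, p * np.cos(theta)])
k2 = np.array([E, -p * np.sin(theta), 0.0, -p * np.cos(theta)])

def minkowski_sq(q):
    return q[0]**2 - np.dot(q[1:], q[1:])

t_t = minkowski_sq(p1 - k1) - m_t**2
u_t = minkowski_sq(p1 - k2) - m_t**2

print(np.isclose(t_t, -s * (1 - beta * np.cos(theta)) / 2))  # True
print(np.isclose(u_t, -s * (1 + beta * np.cos(theta)) / 2))  # True
print(np.isclose(t_t + u_t, -s))                             # True
```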
The second term in (3) comes from axigluon-gluon interference and is odd under the reflection $\cos\theta \leftrightarrow -\cos\theta$ ($u_t \leftrightarrow t_t$), whereas the QCD and new-physics-squared contributions are even. Therefore the interference term contributes to the forward-backward asymmetry but not to the differential cross section $d\sigma/dm_{t\bar{t}}$, whereas the new-physics-squared term contributes to the cross section but not to the asymmetry.
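To make this sign structure concrete, the following is a minimal numerical sketch, not the paper's actual matrix element: the angular shape, the normalisation constant c and the 20% width are illustrative assumptions. A $\cos\theta$-odd term weighted by the real part of the s-channel axigluon propagator gives a negative forward-backward asymmetry below the resonance, zero at $s = M_a^2$ and a positive asymmetry above it, while cancelling in the total rate.

```python
import numpy as np

M_a, Gamma_a = 420.0, 0.2 * 420.0   # axigluon mass and width in GeV (illustrative 20% width)

def re_propagator(s):
    """Real part (up to a positive factor) of the s-channel axigluon propagator;
    it changes sign at s = M_a^2."""
    return (s - M_a**2) / ((s - M_a**2)**2 + (M_a * Gamma_a)**2)

def toy_afb(s, n=200_000):
    """Forward-backward asymmetry of a toy angular distribution
    dN/dcos(theta) ~ (1 + cos^2 theta) + c * re_propagator(s) * cos(theta)."""
    c = 3.0e4  # arbitrary constant, chosen only to make the effect visible
    cos_t = np.random.uniform(-1.0, 1.0, n)
    w = (1.0 + cos_t**2) + c * re_propagator(s) * cos_t
    forward, backward = w[cos_t > 0].sum(), w[cos_t < 0].sum()
    return (forward - backward) / (forward + backward)

for m_tt in (380.0, 420.0, 500.0):            # below, at and above the resonance
    print(m_tt, round(toy_afb(m_tt**2), 3))    # negative, ~0, positive
# The cos(theta)-odd piece cancels in the forward+backward sum over the full
# angular range, so it shifts the asymmetry without changing the total rate.
```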
The measured $p\bar{p} \to t\bar{t}$ total cross section, $\sigma_{t\bar{t}} = (7.5 \pm 0.48)$ pb [64], and the cross section shape $d\sigma_{t\bar{t}}/dm_{t\bar{t}}$ [65] are in reasonable agreement with the predictions from perturbative QCD [66][67][68][69], $\sigma_{t\bar{t}} = (6.5 \pm 0.5)$ pb, while a large new contribution to the asymmetry is required. This implies that the new physics squared term must be small for all values of $s$ while the interference term is required to be large. These two conditions are satisfied with a small coupling $g_a \sim g_s/3$ and a large width $\Gamma_a \gtrsim 0.1 M_a$. Much smaller values of the width would produce a noticeable "bump" in the $t\bar{t}$ invariant mass spectrum, while much smaller values of the coupling would fail to produce a significant asymmetry. The large values of the width which we need require additional decay channels for the axigluon beyond the decay to standard model quarks. We postpone a discussion of models which accomplish this until after showing the phenomenological fits.
In Figure 2 we show the new physics contribution to the asymmetry as a function of invariant mass $m_{t\bar{t}}$ for three different choices of axigluon parameters. Each corresponds to an axigluon mass of $M_a = 420$ GeV and a new physics contribution to the high invariant mass asymmetry $A_{NP}(m_{t\bar{t}} > 450\ \mathrm{GeV}) = 0.3$. Since the contributions from new physics to the differential cross section are small, it is a good approximation to simply add this to the SM value of the asymmetry, $A_{SM}(m_{t\bar{t}} > 450\ \mathrm{GeV}) = 0.11$ [6]. Given the large uncertainties on the shape of the measured asymmetry, all three are in good agreement with the asymmetry data. Figure 3 shows the corresponding $t\bar{t}$ cross sections as a function of invariant mass. One sees that for 15% or 20% width, the cross section shape shows very little distortion from the cross section of the SM alone. The integral of the new physics contribution under the bump in these two cases is 0.6 and 0.5 pb, respectively. This is well within the experimentally allowed cross section. For a width of $\lesssim 10\%$ there is a visible "bump" in the spectrum. However, even 10% may still be consistent with experiment after taking into account significant smearing due to detector effects and statistical fluctuations. The total new physics cross section in this case is 0.7 pb.
Another important constraint on many models comes from the absence of large deviations in the $t\bar{t}$ cross section at the LHC [70,71] and the dijet cross sections measured at the Tevatron [72][73][74] and LHC [75][76][77]. Since the axigluon in our model is relatively light and weakly coupled, the LHC top cross section does not give an interesting bound. Potentially more interesting are dijet constraints. However, our axigluon is sufficiently weakly coupled and broad that the bounds are evaded, provided that the new decay channels of the axigluon which are responsible for the large width do not correspond to dijets. We show a plot of the dijet invariant mass distribution including the axigluon contribution at the Tevatron in Figure 4, where we multiplied the new physics contribution by a factor of 3 to make its effect visible on the plot. The integrated new physics cross section under the peak is below the CDF bound [72] for narrow resonances of about 8 pb for axigluon mass $M_a = 420$ GeV. Finally, our axigluon can modify the coupling of fermions to the W and Z through loops. Such effects have recently been analyzed in [53] with the result that an axigluon as weakly coupled as ours is completely unconstrained by precision electroweak.
II. A GAUGE INVARIANT LAGRANGIAN
We now show how our phenomenological axigluon may be obtained from a gauge invariant Lagrangian, starting with an $SU(3)_L \times SU(3)_R$ gauge group which is broken to the diagonal $SU(3)_{\rm color}$ by the vacuum expectation value of a bifundamental scalar field $\phi$. The Lagrangian involves some dimension 5 and 6 couplings which we envision coming from integrating out vector-like heavy fermions with masses of several hundred GeV. The dimension 6 couplings modify the axigluon couplings to fermions after replacing the $\phi$-field by its VEV. The $SU(3)_L \times SU(3)_R$ gauge symmetry is anomalous, requiring new fermions not far above the TeV scale. We briefly discuss explicit anomaly-free UV completions in later Sections of this paper.
A good fit to the $t\bar{t}$ cross section data requires the couplings of the axigluon to be very close to axial. This is natural if the strong interaction sector of the theory respects parity. However, parity is broken by the weak interactions and the SM Yukawa couplings, and radiative corrections will generate some parity violation in the strong sector. The size of the parity violation is at least $\delta_p \sim (g_2^2/16\pi^2)\log(\Lambda_{UV}/M_a) \gtrsim 1\%$, which will give rise to vector couplings of the axigluon of order $\delta_p\, g_s$. $\Lambda_{UV}$ is model dependent and corresponds to the scale of parity breaking in the strong sector. We assume that parity violation in this sector is negligible and ignore possible small vector couplings of the axigluon.
The Lagrangian is given in Eq. (4), where $Q$ represents the left-handed quark doublets, $U$ and $D$ are the right-handed singlets, $L_{\rm yuk}$ gives rise to the SM Yukawa couplings, and $L(\phi)$ contains the kinetic term and potential for the bifundamental scalar $\phi$. $F_L$ and $F_R$ are the field strengths for the two $SU(3)$ gauge groups, and in the covariant derivatives $A$ is the $SU(3)_L$ gauge field when acting on $Q$ or $(\phi U)$ and the $SU(3)_R$ gauge field when acting on $U$ or $(\phi^\dagger Q)$, etc. The action of parity exchanges the left- and right-handed fields and the two gauge groups. We assume that the scalar potential forces a vacuum expectation value (VEV), $\langle\phi\rangle = f\,\mathbf{1}_3$, for the scalar. The vacuum expectation value breaks the two $SU(3)$ symmetries to the diagonal, and 8 Nambu-Goldstone bosons (NGBs) are "eaten" by the massive axigluon. The remaining NGB from the breaking of $U(1)_L \times U(1)_R \to U(1)_V$ remains massless at this level. We can give it a small mass by explicitly breaking the off-diagonal $U(1)$ symmetry with a $\det\phi$ term in the scalar potential. Replacing the scalar with its VEV, we can solve for the mass and couplings of the axigluon by diagonalizing the gauge boson mass matrix and rescaling the fermion fields. In the resulting Lagrangian the covariant derivative contains only the gluon field and the SM weak interactions. The axigluon $A_A = (A_L - A_R)/\sqrt{2}$ couples with opposite signs to $Q$ and to $U, D$. The axigluon mass, its coupling to quarks and the strong coupling constant all follow from this diagonalization. To obtain a small axigluon coupling $g_a$ we choose the fermion mixing parameter $\lambda f/\Lambda$ to be close to unity; thus the new fermions cannot be much heavier than the axigluon. In fact, we will be interested in the case when they are lighter.
To implement the SM fermion masses in this model we must introduce Yukawa couplings of the three generations of fermions to the Higgs field. Because $Q$ and $U, D$ are charged under different $SU(3)$ gauge groups, this requires insertion of the link field $\phi$; for example, the up-type Yukawa couplings could come from a higher-dimensional coupling involving $\phi$ and the Higgs.
A. The axigluon width
If the axigluon is lighter than all the other new particles in the model, it can only decay to standard model fermion pairs. Then it will have a very narrow width because of the small coupling $g_a$. This is ruled out because it would produce a significant bump in the $t\bar{t}$ invariant mass spectrum. Therefore there must be additional colored particles which are lighter than the axigluon and which have sufficiently large couplings to the axigluon. A very interesting possibility is that this role is played by the heavy vector-like fermions which we integrated out to obtain the higher dimensional operators in (4). As we will show in the next section, in a UV completion with such heavy fermions the axigluon does have large couplings to one SM fermion and one heavy fermion. Given this coupling, the width of the axigluon depends on $N_f$, the number of heavy fermion partners which are lighter than the axigluon, and on their mass $M_f$. We will allow only the partners of all first and second generation quarks as well as the right-handed bottom quark to be below the axigluon mass. Left-handed third generation and right-handed top partners must be heavier because their decay chains lead to copious production of leptons from W decays. Fortunately, it is consistent with minimal flavor violation and natural to expect precisely these particles to have significantly different masses because of the large top Yukawa coupling. Thus we will take $N_f = 9$.
Assuming that the 9 heavy fermions are much lighter than the axigluon so that there is no phase space suppression, one obtains a tree level axigluon width of 15% from these decays alone. For more realistic masses of $M_f = 200$ GeV one obtains a width of 10%. Of course, the fermions must then decay in a manner which is not already ruled out by existing Tevatron and LHC searches. Direct decays via off-shell axigluons into three light quarks appear to be ruled out by recent searches for R-parity violating gluino decays [84,85] up to fermion masses of 300 GeV. However, if the 9th pseudo-Nambu-Goldstone boson $\eta_9$ from the $U(1)_L \times U(1)_R \to U(1)_V$ breaking is light, then the heavy fermions can decay into one light fermion and $\eta_9$. The coupling responsible for this decay is of order one, so that this decay mode would dominate over the three body decay. Decay widths into SM fermions with mass $m_q$ and W bosons are suppressed by mixing angles $\sim m_q/M$ and are negligible except for the top quark. The pseudoscalar axion $\eta_9$ then decays into pairs of the heaviest standard model quarks for which there exists sufficient phase space. We find that $\eta_9$ masses in the range 10 GeV to 25 GeV are consistent with experiment, ensuring that it will predominantly decay to b's. The lower bound on the mass comes from upsilon decays and the upper bound ensures that the two b quarks from $\eta_9$ are reconstructed as a single boosted jet (consisting of two nearly collinear b-quarks). Thus a typical axigluon decay will result in two light jets and one "axion jet". We will have more to say about the phenomenology of these states in the final section.
B. Designer widths
Here we consider the possibility that the heavy fermion masses are above the axigluon mass. Then we must introduce additional states below the axigluon mass to produce the large width. Since the axigluon production cross section at the Tevatron is very large (between 50 and 100 pb), we must ensure that the final states from the decays of the new particles are not already ruled out. One option for the new particles is to add $k$ color adjoint scalars $\sigma^i_{L/R}$, $i = 1\cdots k$, to each of the two gauge groups. The scalars are parity mirrors $\sigma_L \leftrightarrow \sigma_R$ of each other. By choosing the multiplicity $k$ we can dial the resulting axigluon width. After the gauge symmetry breaking, we obtain the parity even/odd linear combinations $\sigma_\pm = (\sigma_L \pm \sigma_R)/\sqrt{2}$ with equal masses, which we choose of order 100 GeV. Re-expressing the $\sigma$ gauge couplings in terms of the axigluon field and $\sigma_\pm$ we find the coupling for each scalar-pseudoscalar pair. The axigluon width for decay into two scalars is given by (12); thus, to get a sufficiently large axigluon width we take $k \sim 5-10$. To decay the scalars $\sigma_\pm$ we introduce dimension five couplings which allow both $\sigma_+$ and $\sigma_-$ to decay to pairs of gluons. Such couplings are obtained from integrating out Dirac fermions with masses $M = \Lambda$ and Yukawa couplings $\eta = \eta_+ + i\eta_-$ to the scalars $\sigma_{L/R}$. In the absence of any other significant decay channels for the $\sigma_\pm$ the axigluon would predominantly decay to four jets. Such signatures would closely resemble those recently discussed in the context of colorons arising from strong dynamics [79,80]. We are not aware of any 4-jet searches at the Tevatron or LHC which rule out this signal. Searches for 4-jet final states with multiple b-tags [81,82] at the Tevatron do not apply here, and more recent searches for gluinos with R-parity violating decays resulting in six jet final states have such aggressive cuts that axigluon events would not pass selection cuts [83][84][85].
III. UV COMPLETIONS
In the previous Section we presented a gauge invariant Lagrangian for the axigluon. This Lagrangian is adequate as a low-energy description of the axigluon and its interactions. However, the scale suppressing the higher dimensional operators cannot be very high because $g_a = g_s/3$ requires $\lambda f/\Lambda = 1/\sqrt{2}$. We therefore explore a few example UV completions.
A. A minimal two site model
The gauge group of this model is $SU(3)_L \times SU(3)_R$, with the matter content summarized in Table 2. A simple graphical representation for such a model is shown in Figure 5, where flavor and parity symmetries ensure equality of the masses and couplings. To determine the axigluon couplings to the light fermions we may integrate out the massive fields perturbatively, expanding to second order in $\phi/M$ and treating terms with $\phi$'s as interactions. Alternatively, we may first substitute the VEV for $\phi$ and diagonalize the mass matrices for the fermions. Doing the former, we would obtain the Lagrangian of the previous section (4) with $\Lambda = M$. Doing the latter, we first diagonalize the fermion mass matrices by defining the light and heavy linear combinations of $Q$ and $Q'$, and similar linear combinations for $U$ and $D$. The coupling of the massless linear combination $Q_{SM}$ to the axigluon is obtained by solving for $Q$ and $Q'$ in terms of $Q_{\rm heavy}$ and $Q_{SM}$ and substituting them into the gauge kinetic terms for $Q$ and $Q'$. We find that the axigluon couples axially to standard model and heavy quarks. There is also a coupling of the axigluon to one standard model and one heavy quark, given in (16), which, as will be discussed in the phenomenology section, can have interesting phenomenological consequences.
Note that in this model the SM Yukawa couplings can be obtained from renormalizable couplings; for example, the up-type Yukawa couplings can be written down at the renormalizable level. As written, these Yukawa couplings break the parity symmetry and lead to small radiatively generated differences between the left and right parameters in (14). This leads to small vectorial couplings for the axigluon. It is possible to restore the approximate parity symmetry of the strong sector by also adding the Yukawa couplings $Y_u Q^\dagger H U$. The large width of the axigluon in this model derives from the decay into heavy-light fermion combinations. The coupling $g_a^{\rm mixed}$ for this decay can be read off from (16). In the limit where the axigluon coupling to the SM fermions $g_a$ becomes small, this coupling approaches $g_s$, and the width is given by (10).
If the heavy fermions are too heavy to provide a significant width for the axigluon, we must add new light particles. As in the model of the previous section, we can add $k$ copies of scalars $\sigma^i_{L/R}$ with masses of order 100 GeV. The axigluon width into these particles is given by (12). To generate the dimension 5 operators which allow the scalars $\sigma_\pm$ to decay, we introduce a vector-like colored fermion for each of the gauge groups and write its Yukawa couplings to the scalars; integrating out the fermions generates the desired dimension 5 terms at one loop.
B. The symmetric g-G-g model
This model can be described using the graphical representation of Figure 6. There are three distinct $SU(3)$ gauge groups. The two external ones have equal gauge couplings $g$, as required by parity, and the central one has gauge coupling $G > g$. The action of parity in this model exchanges the two external gauge groups and the corresponding fields. After the link fields develop a (parity preserving) VEV $\langle\phi_i\rangle \equiv f\,\mathbf{1}_3$ there is one massless gauge boson that corresponds to the gluon, and two massive gauge bosons. One is odd under the parity transformation and is identified with the axigluon, while the other is even under parity and corresponds to a "heavy gluon". In terms of the original parameters we find that the QCD coupling is $g_s = gG/\sqrt{g^2 + 2G^2}$, the axigluon mass is $M_a = g f$, and the heavy gluon mass is $M_G = \sqrt{g^2 + 2G^2}\, f$. We will assume that $G \gg g$, so that the heavy gluon is much heavier and more weakly coupled to the SM fermions than the axigluon and thus does not contribute to low energy phenomenology.
In this model the fermions that are charged under the "external" gauge groups have Yukawa couplings to the fermions charged under the "central" gauge group, with $\lambda_1 \sim \lambda_2$. Consequently, when the scalar fields get a VEV there is a combination of $Q$ and $Q'$ that gets a mass $M_H = \sqrt{\lambda_1^2 + \lambda_2^2}\, f$ with $\bar{Q}$ (analogously, the $U$ and $D$ fields get a mass with $\bar{U}$ and $\bar{D}$), and the other combination is identified with the standard model $Q_{SM}$. Rewriting the original fields in terms of the standard model fields and the heavy fields, we find that both couple axially to the axigluon with a coupling $g_a = g_s(\lambda_2^2 - \lambda_1^2)/(\lambda_2^2 + \lambda_1^2)$. There is also a coupling of the axigluon to a light and a heavy field with strength $g_{HL} = 2 g_s \lambda_1\lambda_2/(\lambda_2^2 + \lambda_1^2)$, which reproduces the result from the phenomenological model. The SM Yukawa couplings in this model may be generated in the same way as in the model of the previous section. The large axigluon width may again be generated from decay into heavy-light fermions or from decay into additional scalars. This model is gauge anomaly free provided that $Q$, $U$, $D$ and the leptons have the usual SM $SU(2) \times U(1)$ charge assignments.
C. The G-g-G model
This model is represented graphically in Figure 7. It is also anomaly free. One can infer the masses of the axigluon and heavy gluon in the G-g-G model from the previous one by exchanging $g \leftrightarrow G$. The QCD coupling is given by $g_s = gG/\sqrt{2g^2 + G^2}$. We include fermions charged under the central $SU(3)$ with the same quantum numbers as the SM quarks. In addition, we include heavy vector-like fermions which are charged under the external gauge groups and which mix with the fermions of the middle group. We will be interested in taking $G \gg g$. In this limit the axigluon has a large coupling to heavy fermions and can therefore have a large width. The downside is that the axigluon and the "heavy gluon" are approximately degenerate, so that we must arrange for the heavy gluon couplings to be very small. After substituting the VEV for the scalar fields and diagonalizing the mass matrix, one finds the axigluon's couplings to two SM quarks, to one SM quark and one heavy quark, and to two heavy quarks; analogously, one finds the couplings to the heavy gluon. As desired, the axigluon is weakly coupled to SM quarks if the relevant mixing parameter is small. The heavy gluon couplings are vectorial and have contributions from two small terms with opposite signs. In order for the model not to predict obvious features in the $t\bar{t}$ and dijet mass spectra, we must assume a cancellation of about 30% between the two terms in $G^2 - 2g^2$.
Because the quarks charged under the central gauge group have exactly the same quantum numbers as the standard model ones, it is trivial to write the Yukawa couplings to the Higgs. This model is more efficient in giving the axigluon a large width because the couplings to heavy-light fermions and to pairs of heavy fermions are enhanced by one and two inverse powers of the small mixing parameter, respectively.
IV. COLLIDER PHENOMENOLOGY
Before committing to a particular decay for the axigluon we can make two model-independent predictions about axigluon cross sections at the Tevatron and LHC. First, the axigluon is produced with a large cross section in the s-channel at the Tevatron. In the narrow width approximation (which is not unreasonable even at 20% width) we expect a total axigluon production cross section of 50-100 pb at the Tevatron in the region of parameter space which can explain the $t\bar{t}$ asymmetry. About 1% of the axigluons contribute to a slight increase in the $t\bar{t}$ cross section. Tevatron dijet bounds allow only about 10% of the events to decay into dijets unless the axigluon is extremely broad. Therefore most axigluons must decay into multi-jet final states for which there have not been dedicated searches. Whatever the final state, event rates are so large that a dedicated search for that particular multi-jet final state would be sensitive to our signal.
Second, the axigluon, as well as the colored particles which it decays into, can be pair produced with their respective QCD cross sections at the Tevatron and especially at the LHC. For example, in the interesting region of parameter space the cross section for axigluon pair production at the 7 TeV LHC is between 10 and 50 pb [80]. Given that the axigluons decay to multi-jets, we predict events with 6, 8, or even 12 jets with a cross section of tens of pb.
In the following we briefly discuss four possible scenarios for the axigluon decays. Since the production cross section at the Tevatron is so large, the axigluon would be ruled out if it had a significant branching fraction to leptons. A fifth possibility of decaying the axigluon into a pair of heavy particles which then decay into soft jets and slowly moving WIMPs appears to be already ruled out by early LHC searches [86]. We therefore concentrate on the four multi-jet final states depicted in Figure 8.
1. Decay to a light quark accompanied by a heavy quark which then decays to a light quark and an axion. The axion then further decays into a boosted $b\bar{b}$ pair (first diagram in Fig. 8). This would presumably be reconstructed as a three jet final state of which one is b-tagged. The axion jet would have a peculiar signature with very few tracks originating from the decay of a colorless particle, but it would have two displaced vertices. One could reconstruct the total invariant mass as well as the heavy quark invariant mass at the Tevatron. At the LHC one would look for a final state with 6 jets of which two are b-tagged from axigluon pair production, or for a four jet final state with two b-tags from heavy quark pair production.
2. Decay to a light quark accompanied by a heavy quark which then decays to three jets (second diagram in Fig. 8). This 4 jet final state would allow reconstruction of the total invariant mass as well as the heavy quark invariant mass at the Tevatron. At the LHC one would look for an 8 jet final state from axigluon pair production.
3. Decay to a scalar-pseudoscalar pair which each decay into 2 gluon jets (third diagram in Fig. 8). This final state would allow reconstruction of both resonances as well as the total axigluon resonance at the Tevatron. At the LHC one would look for an 8 jet final state from axigluon pair production.
4. Decay to a pair of heavy fermions which decay into 3 jets each (fourth diagram in Fig. 8). This final state resembles the decay products of hadronic top pairs. Similar events are also expected from R-parity violating gluino decays and a dedicated search for this final state was performed by CDF and CMS [84,85]. In order to suppress the large QCD background both analyses applied very stringent cuts which would eliminate all events in which the six jets come from axigluon decay. However, direct QCD pair production of the heavy quarks and subsequent decay to six jets would result in events which the search is sensitive to. This scenario is therefore strongly constrained by the two searches. The CDF search rules out heavy quark masses below about 140 GeV whereas the CMS search rules out masses from 170 GeV to about 300 GeV. These bounds apply to color octet fermions. Color triplets have smaller QCD cross sections, but most of our UV completions require multiple such fermions to obtain a sufficiently large axigluon width.
Effect of dietary L-tryptophan on cannibalism, growth and survival of Asian seabass, Lates calcarifer (Bloch, 1790) fry
The effect of L-tryptophan (TRP) supplemented diets at levels of 0, 0.5, 1.0, 1.5 and 2% on cannibalism, survival and growth performance of Asian seabass, Lates calcarifer (Bloch, 1790) fry was evaluated. Thirty-day-old seabass fry (mean weight: 0.31±0.16 g) were reared for 45 days in a recirculating aquaculture system. Results of the present experiment showed that L-tryptophan (TRP) supplementation from 0.5 to 2% in the diet significantly (p<0.05) reduced cannibalism compared to the control diet and did not affect growth performance. The lowest survival percentage (14%) was noticed in the control group, whereas a higher survival percentage (33.33 to 39.80%) was observed in the TRP-supplemented groups. Coefficient of size variation (%) ranged from 21.50 to 91.61 and decreased with increasing levels of TRP supplementation; similar results were also obtained for size heterogeneity. Based on the results of the study, it is recommended to supplement 0.5% of TRP in the larval diet to reduce cannibalism and improve survival of seabass fry.
Introduction
Asian seabass, Lates calcarifer (Bloch, 1790), commonly known as bhetki or barramundi, is an economically important carnivorous food fish of tropical and subtropical areas (Greenwood, 1976). Seabass is a euryhaline (Lim et al., 1986), protandric hermaphrodite (Grey, 1987) and is cannibalistic in nature. The potential for L. calcarifer farming has increased in India after the successful induced breeding of this fish at the ICAR-Central Institute of Brackishwater Aquaculture (ICAR-CIBA), Chennai, India (Thirunavukkarasu et al., 2001). In predatory fish larval rearing, intra-cohort cannibalism has a major impact on survival (Loadman et al., 1986; Katavic et al., 1989; Baras, 1999; 2013). During the early life stages, size heterogeneity is greater and type I cannibalism (partial and tail-first ingestion) is prominent (Cuff, 1980; Baras and Jobling, 2002; Baras, 2013). Later, as size heterogeneity increases, type II cannibalism dominates, which is characterised by total prey ingestion (Cuff, 1980; Hecht and Appelbaum, 1988; Hecht and Pienaar, 1993; Baras, 1999; Baras et al., 2003). In European seabass, Dicentrarchus labrax, 37% of fish were found to be cannibalistic, accounting for a 66% loss in six weeks (Katavic et al., 1989). Consequences of size variation during the larval stage are more severe due to the larger mouth-to-body size ratio, which allows cannibalism on prey that is even slightly smaller (Baras, 1998). Baras and Jobling (2002) reported several population and external factors (environmental structure, temperature, light intensity, food availability and food quality) that regulate size heterogeneity and cannibalism in fishes. Cannibalism during larviculture of seabass is controlled by routine size grading, frequent feeding and gradual weaning from live food to formulated diets. Grading is a labour-intensive activity which also causes additional mortality of fry. An alternative strategy could be the use of dietary factors that reduce cannibalistic behaviour. Tryptophan (TRP), a precursor of serotonin (5-HT), is an essential amino acid widely used in controlling aggression in vertebrates, including fishes. Increased TRP content in the feed enhances fish brain serotonergic activity with stress-relieving effects (Johnston et al., 1990; de Pedro et al., 1998; Lepage et al., 2002) or decreased aggression (Hseu et al., 2003; Hoglund et al., 2005). Based on the above factors, we assumed that dietary TRP may suppress cannibalism in L. calcarifer fry by increasing brain serotonergic activity. It is also reported that high levels of brain serotonin reduce food intake in fish (De Pedro et al., 1998; Hseu et al., 2003; Papoutsoglou et al., 2005a). Therefore, it is important to determine the growth performance of fish fed TRP supplemented diets. The present experiment was conducted to optimise the TRP supplementation that would reduce cannibalism in Asian seabass fry without affecting growth performance.
Materials and methods
The experiment was conducted with 30-day-old seabass fry (mean length: 1.5 cm and mean weight: 0.054 g) produced at the Asian seabass hatchery, ICAR-Central Institute of Brackishwater Aquaculture (ICAR-CIBA), Chennai. The fry were completely weaned to a commercial larval diet. Commercial fish larval diets (500 and 800 µm dia, 55% protein and 10% lipid) were used for preparing the experimental feed. Supplemented diets containing 0% (T1), 0.5% (T2), 1% (T3), 1.5% (T4) and 2% TRP (T5) were prepared by the sprinkle method (Król and Zakęś, 2015). The TRP (crystalline L-tryptophan, Himedia) for each treatment was weighed, dissolved in hot water plus ethanol (80%) and then sprinkled on the commercial diet. The diets were dried in an oven at 37°C for 1 h and then cooled and stored at 4°C until use. To avoid a palatability effect, the control (T1) diet was also treated with the same ethanol-water solution. Thirty-day-old fry were randomly distributed into five groups and fed the experimental diets viz., T1, T2, T3, T4 and T5. Each diet had three replicates. The fish were reared for 45 days in a recirculating aquaculture system (RAS) consisting of 15 FRP tanks of 250 l capacity (water volume 200 l). Each tank was stocked with 400 fry at 2 fry l⁻¹. The water flow-through in the rearing tanks was maintained at approximately 5 l min⁻¹. Water temperature, salinity, pH and dissolved oxygen were measured daily, while total ammonia and nitrite were measured weekly (APHA, 1989). During the experimental trial, fish were fed four times a day ad libitum. Unconsumed feed and faecal matter were siphoned once daily in the morning before feeding. Dead fry were removed and noted for tail bites (type I cannibalism), and missing fry (type II cannibalism) were also monitored. At the end of the experiment, the length and weight of surviving fry were measured and their number counted.
A total of three samplings were carried out at fortnightly intervals (15, 30 and 45 days of culture, DOC) to assess the biomass, and the quantity of feed was adjusted accordingly. During sampling, total length (L) in cm and weight (W) in g of 30 fishes each were measured using a graduated scale and a digital electronic balance. The parameters recorded at the end of each sampling comprised growth (%), specific growth rate (SGR), feed conversion ratio (FCR), protein efficiency ratio (PER), cannibalism (%), coefficient of variation (CV%), size heterogeneity (SH, weight), survival rate (%) and condition factor (K). The mathematical relationship between length and weight was calculated at each sampling using the conventional formula W = aL^b, by regression after log transformation (Pauly, 1993).
All the parameters were calculated using their standard formulae. Fulton's condition equation (Ricker, 1975; Chow and Sandifer, 1991) was used to determine the condition factor: K = (W/L^3) × 100, where K = condition factor, W = weight of fish (g) and L = total length (cm). Comparison of all the variables at different DOC was made using one-way analysis of variance (ANOVA). Statistical analyses were performed using SPSS for Windows (version 20.0) software.
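The growth formulae themselves are not reproduced above, so the following is a minimal sketch using the conventional definitions of these indices (SGR as percent per day from natural-log weights, FCR as feed given over weight gained, K as defined above, and b obtained by log-log regression of the W = aL^b relationship). The function names and all numbers are illustrative, not data from the study.

```python
import numpy as np

def sgr(w_initial, w_final, days):
    """Specific growth rate (% per day) from initial and final mean wet weights."""
    return 100.0 * (np.log(w_final) - np.log(w_initial)) / days

def fcr(feed_given, w_gain):
    """Feed conversion ratio: dry feed supplied per unit wet weight gained."""
    return feed_given / w_gain

def condition_factor(weight_g, length_cm):
    """Fulton's condition factor K = (W / L^3) x 100."""
    return 100.0 * weight_g / length_cm**3

def length_weight_params(lengths_cm, weights_g):
    """Fit W = a * L^b by linear regression after log10 transformation."""
    b, log_a = np.polyfit(np.log10(lengths_cm), np.log10(weights_g), 1)
    return 10**log_a, b

# Illustrative numbers only (not the study's data)
print(round(sgr(0.054, 1.2, 45), 2))             # SGR over a 45-day trial
print(round(condition_factor(1.2, 4.6), 2))      # K for one sampled fish
L = np.array([3.1, 3.8, 4.2, 4.9, 5.5])
W = np.array([0.35, 0.65, 0.95, 1.45, 2.10])
a, b = length_weight_params(L, W)
print(round(b, 2))   # exponent b of the length-weight relation (b = 3 would be isometric growth)
```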
Results and discussion
In the present experiment, physico-chemical parameters of water such as temperature, salinity, pH, dissolved oxygen, total ammonia and nitrite were in the range of 28-32°C, 25-28 ppt, 7.4-7.9, 5.0-5.5 ppm, 0.10-0.15 ppm and 0.02-0.05 ppm, respectively. The body weight of the experimental groups at 15-day intervals is shown in Fig. 1. Growth parameters, such as growth percentage, SGR, FCR, PER and survival, are presented in Table 1.
Effects of TRP-supplemented diets on aggressive behaviour in fishes are mediated by the brain serotonin level, which may also control food intake and growth (De Pedro et al., 1998; Hseu et al., 2003; Papoutsoglou et al., 2005a, b). No significant difference (p>0.05) was observed in SGR among the T1, T2, T3, T4 and T5 groups; however, weight gain percentage was significantly higher in the T3 group. The present finding is similar to the observation made by Król and Zakęś (2015). Hseu et al. (2003) observed that juvenile grouper, Epinephelus coioides, fed with a TRP-supplemented diet expressed lower growth rates due to increased brain serotonergic activity and decreased aggression and/or appetite. In the present study, we observed a significant reduction in FCR due to TRP supplementation compared to the control group, which is similar to the finding of Papoutsoglou et al. (2005a, b) in European seabass, Dicentrarchus labrax. The reduction in FCR might be due to lower feed intake resulting from increased brain serotonergic activity, which is also reported in mammals and birds (Pinchasov et al., 1989; Young, 1996). In goldfish, Carassius auratus, intracerebroventricular injection of serotonin (5-HT) significantly reduced feed intake (De Pedro et al., 1998). Peng and Peter (1997) reported that 5-HT can act directly on somatotroph cells of the pituitary gland and inhibit growth hormone secretion in goldfish. In fish, growth hormone plays an important role in controlling growth and feed intake (Peng and Peter, 1997; Lin et al., 2000). The mean survival rate of the experimental groups varied significantly (p<0.01). The lowest survival percentage (14±2.94) was noticed in the control group (T1), whereas maximum survival (39.80±3.00) was observed in the T3 group, which did not vary significantly from the T2, T4 and T5 groups. L-TRP supplementation was found to improve the survival rate in L. calcarifer due to reduced cannibalism. A similar observation was made by Hseu et al. (2003) in E. coioides. Dietary supplementation of TRP has been shown to reduce aggressive behaviour in several fish species, viz., Oncorhynchus mykiss; E. coioides (Hseu et al., 2003); G. morhua (Hoglund et al., 2005) and Sander lucioperca (Król and Zakęś, 2015), due to higher levels of brain 5-HT. Coefficient of variation (%), size heterogeneity (%) and condition factor (K) are shown in Table 1. Cannibalism (%) ranged from 15.68±2.35 to 37.78±2.22 (Fig. 2). Maximum cannibalism (%) was observed in the control group compared to the treatment groups, and a significant reduction in cannibalism was noticed in the TRP-supplemented groups. In European seabass D. labrax, 37% of fish were found to be cannibalistic (Katavic et al., 1989), and in Asian seabass L. calcarifer a maximum of 17.71% of fish were found to be cannibalistic (Sukumaran et al., 2011). Coefficient of variation (%) ranged from 21.50±2.70 to 91.61±18.68 and was reduced in all TRP-supplemented groups. A similar trend was observed for size heterogeneity. Hseu et al. (2003) reported that dietary supplementation of TRP reduces size heterogeneity in juvenile grouper and thereby the intensity of cannibalism. They indicated that dietary supplementation of TRP at levels of 0.25 to 1% of the dry diet significantly increases the brain 5-HT level and reduces cannibalism in grouper juveniles. Condition factor is used to compare the condition, fatness or wellbeing of the fish and can be useful for the management of culture systems as it gives an indication of favourable or stress factors in the system (Biswas et al., 2011).
In the present study, TRP supplementation up to the 1.5% level improved the K value, whereas poorer condition was noticed in the 2% TRP-supplemented group. Parameters of the length-weight relationship and the coefficient of correlation (R²) for the different treatments are shown in Table 2.
The lower R² values in the T1, T4 and T5 groups revealed that linearity is weaker in these groups than in the other groups. According to Enin (1994), when the parameter b is equal to 3, growth is called isometric, and when it is less than or greater than 3 it is allometric. In the present study, all the treatments showed a negative allometric (b<3) growth pattern, in which weight did not increase in proportion to length.
In summary, the present results showed that dietary supplementation of tryptophan (TRP) at 0.5 to 2% reduces cannibalism without affecting the growth performance of seabass fry. Hence, it is recommended to supplement 0.5% of TRP in diet to reduce cannibalism and improve the survival of seabass fry. However, further study is required to find the effect of higher levels of TRP supplementation.
The predictive value of deep learning-based cardiac ultrasound flow imaging for hypertrophic cardiomyopathy complicating arrhythmias
Objective To investigate the predictive value of deep learning-based cardiac ultrasound flow imaging for hypertrophic cardiomyopathy (HCM) complicated by arrhythmias. Methods The clinical data of 158 patients with hypertrophic cardiomyopathy were retrospectively collected from July 2019 to December 2021 and divided into a training group (106 cases), a validation group (26 cases) and a test group (26 cases) according to the ratio of 4:1:1; each was further divided into concurrent and non-concurrent groups according to whether arrhythmia was present. General data of patients (age, gender, BMI, systolic blood pressure, diastolic blood pressure, HR) were collected, a deep learning model for cardiac ultrasound flow imaging was established, and image data, LVEF, LAVI, E/e', vortex area change rate, circulation intensity change rate, mean blood flow velocity, and mean EL value were extracted. Results The differences in general data (age, gender, BMI, systolic blood pressure, diastolic blood pressure, HR) between the three groups were not statistically significant, P > 0.05. The differences in age, gender, BMI, systolic blood pressure, diastolic blood pressure and HR between the patients in the concurrent and non-concurrent groups in the training group were not statistically significant, P > 0.05. Conclusions Deep learning-based cardiac ultrasound flow imaging can identify cardiac ultrasound images more accurately and has a high predictive value for arrhythmias complicating hypertrophic cardiomyopathy; vortex area change rate, circulation intensity change rate, mean flow velocity, mean EL, LAVI, and E/e' are all risk factors for arrhythmias complicating hypertrophic cardiomyopathy.
Introduction
The global prevalence of hypertrophic cardiomyopathy is 1:500, with an estimated 1.1-2.8 million patients in China [1]. The incidence of hypertrophic cardiomyopathy and death rates are still on the rise worldwide [2]. Hypertrophic cardiomyopathy is a chronic progressive disease caused by excessive myocardial contraction and impaired blood filling of the left ventricle and is a relatively rare cardiovascular disease [3]. Because patients may not have obvious symptoms and have symptoms similar to other diseases, only a minority of cases are clinically diagnosed, and it is estimated that about 80-90% of patients are undiagnosed [4]. There is no cure for hypertrophic cardiomyopathy and the prognosis for most patients is poor [5]. Once patients with hypertrophic cardiomyopathy become symptomatic, the disease progressively worsens and in later stages can be complicated by cardiovascular disease such as heart failure, arrhythmias and stroke, which are the leading causes of death in older patients with hypertrophic cardiomyopathy [6,7]. Patients with dyspnoea, chest pain, palpitations, fatigue and syncope should seek immediate medical attention from a cardiology department. Further diagnosis through ultrasound testing and active treatment can slow the progression of the disease, prevent sudden death and heart failure, and improve quality of life [8,9]. Its potential to cause arrhythmias, which can be serious enough to cause sudden death [10], also puts patients with HCM under great psychological stress and seriously affects their quality of life. Doppler ultrasound flow imaging is a simple, convenient and non-invasive technique that is now widely used [11]. Conventional cardiac ultrasound images do not reflect the variability of LV function and haemodynamics, limiting their use in clinical practice, whereas ultrasound flow imaging can provide complex and realistic information on cardiac blood flow dynamics at a lower cost [12]. It has been shown that ultrasound flow imaging in patients with HCM can clearly display their left ventricular haemodynamic parameters and identify their clinical phenotype, allowing for more accurate prediction of HCM [13]. Cardiac ultrasound flow imaging is a common tool for the detection of hypertrophic cardiomyopathy, but there may be diagnostic inaccuracies in manual judgement of its imaging [14]. With the development of artificial intelligence, technology represented by deep learning has become an auxiliary tool for various imaging techniques: it takes raw image data as the basis and learns higher-order features of the image through multilayer neural networks, and the network automatically extracts the features for reorganisation and attribute categorisation and finally uses them to identify feature images, thus solving practical clinical problems [15,16]. This study investigates the predictive value of deep learning-based cardiac ultrasound blood flow imaging models for the prevention of arrhythmias in patients with hypertrophic cardiomyopathy.
General information
The clinical data of 158 patients with hypertrophic cardiomyopathy were retrospectively collected from July 2019 to December 2021 and divided into a training group (106 cases), a validation group (26 cases) and a test group (26 cases) according to the ratio of 4:1:1; each was further divided into concurrent and non-concurrent groups according to whether arrhythmia was present.
This study was conducted with the approval of our ethics committee.
Inclusion criteria: met diagnostic criteria for hypertrophic cardiomyopathy [17]; left ventricular posterior wall thickness or septal thickness ≥ 13 mm and left ventricular outflow tract pressure ≥ 20 mmHg.
Exclusion criteria: combination of other serious cardiovascular disease; previous history of cardiac disease; combination of serious cardiovascular disease; combination of serious arrhythmias; incomplete data.
Collection of information
General patient information was collected from the electronic medical record, including age, gender, BMI, systolic blood pressure, diastolic blood pressure and HR. In addition, data were extracted from each patient's cardiac ultrasound flow imaging using the deep learning model, including LVEF, LAVI, E/e', rate of change of vortex area, rate of change of circulatory intensity, mean blood flow velocity and mean EL value.
Cardiac ultrasound flow imaging tests
Test equipment
The instrument was a Hitachi Aloka ProSound F75 colour Doppler ultrasound diagnostic device with a 15-5 MHz probe, with image processing performed on the DAS-RS1 ultrasound VFM image post-processing workstation. Testing was performed by a physician with more than 6 years of clinical experience in our department.
Test methods
The patient was placed in the left lateral recumbent position at rest and the probe was placed at the patient's left ventricular apex. The imaging condition was adjusted to VFM, and three complete cardiac cycles were acquired in the standard apical 2-chamber, 3-chamber and 4-chamber views, with the imaging frame rate adjusted to greater than 18 Hz. A dual Doppler simultaneous sampling technique was used, with pulsed-wave Doppler or tissue Doppler imaging (PW or TDI): the PW and TDI sample volumes were set at the diastolic mitral orifice and the lateral wall of the left ventricular annulus, respectively, to capture synchronous motion rates, and PW was applied on the standard three-chamber view, with sampling volumes set at the mitral and aortic orifices to capture synchronous flow spectra and measure IVC times. Ultrasound flow imaging of the heart is shown in Fig. 1.
Imaging analysis: (1) Echocardiographic images of apical four-chamber and two-chamber views were acquired, and the length of the biplane area was measured and calculated using the Simpson formula. (2) The acquired ultrasound images were input into the DAS-RS1 ultrasound workstation for offline analysis of VFM parameters: (i) vortex area: the outermost ring area of the concentric, closed flow lines is the vortex area, and the rate of change of vortex area was taken between early IVC and late IVC; (ii) circulation intensity: the vortex was measured through the centre of the spiral with a sampling line perpendicular to the ultrasound beam, and the rate of change of circulation intensity was taken between early IVC and late IVC; (iii) blood flow velocity: an anatomical flow line from the top to the bottom was set and the blood flow velocity gradient was applied from the tip of the heart to the left ventricular outlet area; the average of the three cross-sections was obtained by calculating the blood flow velocity for the 2-, 3- and 4-chamber views, and the average blood flow velocity for the top + middle + bottom segments was taken; (iv) EL values: these were derived using the VFM method, and the average EL for the top + middle + bottom segments was taken.
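As a minimal illustration of how such indices can be computed once the VFM measurements are exported (the exact definition of the "rate of change" is not given above, so a simple relative change from early to late IVC is assumed here; all names and numbers are illustrative, not data from the study):

```python
import numpy as np

def rate_of_change(early, late):
    """Assumed definition: relative change between the early-IVC and late-IVC values."""
    return (late - early) / early

def mean_over_views_and_segments(values):
    """Average a quantity measured in the 2-, 3- and 4-chamber views (rows)
    at apical, mid and basal levels (columns)."""
    return float(np.mean(values))

vortex_area_rate = rate_of_change(early=2.1, late=2.9)        # vortex area in cm^2
circulation_rate = rate_of_change(early=14.0, late=19.5)      # circulation intensity
mean_velocity = mean_over_views_and_segments(
    np.array([[32.0, 41.0, 55.0],
              [30.0, 44.0, 52.0],
              [35.0, 40.0, 57.0]]))                            # blood flow velocity, cm/s
print(round(vortex_area_rate, 2), round(circulation_rate, 2), round(mean_velocity, 1))
```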
Image post-processing
Ultrasound image analysis was performed with CVI42 (Circle Cardiovascular Imaging Inc., Calgary, Alberta, Canada) software. Two specialists with more than 6 years of clinical experience performed the analysis of the basal, mid- and apical segments, and if the results differed between the two physicians, another senior physician (more than 10 years of experience) made the determination. The ITK-SNAP software (version 3.6.0) was used to segment the HCM images. Short-axis images of the basal, intermediate and apical segments were entered into the software separately and contours were drawn between the ventricle and the epicardium. Two cardiovascular imaging specialists with more than 6 years of clinical experience performed the image segmentation independently.
Deep learning model training
The deep learning model is SE-ResNext-50. The overall deep learning training was done using the PyCharm IDE (https://www.jetbrains.com/pycharm/), the language used is Python 3.7, the deep learning framework is PyTorch 1.0.4 (https://pytorch.org/), and the GPU model is an NVIDIA Tesla V100.
The image data of all HCM patients are first transformed into multiple 2D matrices and input into this network (2D input matrix size: 80 × 80); features are extracted using 32 parallel stacked residual blocks and the 32 sets of features are combined. Because of the dependency between the convolution channels, this paper adopts an SE mechanism: the input feature map is assumed to be H*W*C (H and W represent the height and width of the input, respectively, and C represents the number of channels), and it is compressed to 1*1*C by global pooling, an operation also known as the Squeeze operation. This is followed by two fully connected (FC) layers, which model the correlation between channels through a bottleneck structure, with ReLU activation between the FC layers; this improves the sparsity of the network and better fits the complex correlation between channels, while also reducing the number of parameters and operations. In addition, normalised weights between 0 and 1 are obtained using a Sigmoid function and each channel is weighted using the scale operation, a process of activity assignment also known as the Excitation operation. Finally, the network combines the SE mechanism with its input to process the image and feed it into the next step of the network.
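A minimal PyTorch sketch of the squeeze-and-excitation mechanism described above is given below; the channel count and the reduction ratio of the bottleneck (r = 16) are illustrative assumptions rather than settings reported for the model.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: H x W x C -> 1 x 1 x C
        self.fc = nn.Sequential(                     # excitation: bottleneck of two FC layers
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weights in [0, 1]
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # scale: reweight each channel

x = torch.randn(2, 64, 80, 80)                       # a batch of 80 x 80 feature maps
print(SEBlock(64)(x).shape)                          # torch.Size([2, 64, 80, 80])
```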
To address the vanishing-gradient problem prevalent in backpropagation through deep networks, a batch normalisation (BN) layer is introduced in this study, normalising the output data after each convolutional layer. This method can effectively keep the ReLU activations away from the saturation region, speed up the convergence of the network and prevent overfitting, thus enhancing the generalisation capability of the network (see Fig. 2).
The network was run over the training group, validation group and test group once per epoch, facilitating control of the quality of the deep learning network training and selection of the optimal test results. This study deals with a binary classification problem, so a predicted probability between 0 and 1 is output in the last layer of the network, one-hot coding is used to obtain a 0 or 1 result, and the CrossEntropyLoss is calculated against the original data labels; this loss metric is used to determine how close the actual output is to the desired output. Since the stochastic gradient algorithm may not follow the correct direction at each update and is prone to oscillating around the optimal solution and stopping at a local optimum, this study uses the stochastic gradient descent algorithm with momentum (SGDM), which accelerates if the update direction at the current step is the same as that at the previous step and decelerates if the opposite is true; this accelerates the loss descent and helps convergence towards the global optimum. At the same time, the learning rate is decayed by a factor of 0.1 if the loss has not decreased significantly over 10 rounds.
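A sketch of this training configuration in PyTorch is shown below: cross-entropy loss, SGD with momentum, and a ten-fold learning-rate reduction when the loss stops improving for 10 epochs. The momentum value, learning rate, placeholder model and random batches are assumptions for illustration, not settings or data from the study.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(80 * 80, 2))   # placeholder binary classifier
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=10)

for epoch in range(300):
    x = torch.randn(16, 1, 80, 80)                # stand-in batch of 80 x 80 images
    y = torch.randint(0, 2, (16,))                # 0 = no arrhythmia, 1 = arrhythmia
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())                   # decay the learning rate if loss stops improving
```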
Observation indicators
The observation indicators were the general patient data, the factors influencing arrhythmias complicating hypertrophic cardiomyopathy detected by deep learning-based cardiac ultrasound flow imaging, and the predictive value of the models.
Statistical methods
SPSS 26.0 software was used to analyse the data in this study. The measures collected (age, BMI, systolic blood pressure, diastolic blood pressure, HR, rate of change of vortex area, rate of change of circulatory intensity, mean blood flow velocity, mean EL, LVEF, LAVI, E/e') were tested for normality by the Shapiro-Wilk method; normally distributed data (P > 0.05) are expressed as mean ± standard deviation and were compared with the t-test, while non-normally distributed data (P < 0.05) are described as median (quartiles) and were compared with the Mann-Whitney U-test. Collected count data (gender) are expressed as percentages (%); unordered data were compared using the χ² or Fisher's exact test and ordered data using the Mann-Whitney U test. Consistency of image segmentation was evaluated using the ICC. The area under the receiver operating characteristic (ROC) curve (AUC) was used to evaluate the predictive value of the model. P < 0.05 was considered a statistically significant difference for comparisons between groups. Other data processing was done in the deep learning algorithm program.
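The same workflow can be sketched with SciPy and scikit-learn as below; the arrays are illustrative stand-ins for the measured indices and the model's predicted probabilities, not data from the study.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
concurrent = rng.normal(0.45, 0.10, 60)        # e.g. vortex-area change rate, arrhythmia group
non_concurrent = rng.normal(0.30, 0.10, 90)    # same index, no-arrhythmia group

# Normality check decides between the t-test and the Mann-Whitney U test
if stats.shapiro(concurrent).pvalue > 0.05 and stats.shapiro(non_concurrent).pvalue > 0.05:
    stat, p = stats.ttest_ind(concurrent, non_concurrent)
else:
    stat, p = stats.mannwhitneyu(concurrent, non_concurrent)
print(f"group comparison p = {p:.4f}")

# Predictive value of a model: AUC of the ROC curve
y_true = np.concatenate([np.ones(60), np.zeros(90)])
y_score = np.concatenate([concurrent, non_concurrent])   # stand-in for predicted probabilities
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```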
Comparison of general information
There was no statistically significant difference in the comparison of general information (age, gender, BMI, systolic blood pressure, diastolic blood pressure, HR) between the three groups, P > 0.05 (Table 1). The differences in age, gender, BMI, systolic blood pressure, diastolic blood pressure and HR between the patients in the concurrent and non-concurrent groups in the training group were also not statistically significant, P > 0.05 (Table 2).
Binary logistic regression analysis of factors influencing arrhythmias complicating hypertrophic cardiomyopathy
The factors showing significant differences in Table 2 were entered into the binary logistic regression analysis.
SE-ResNext-50's ability to recognise images
A total of 158 images from patients were entered into the SE-ResNext-50 model. The agreement between the two physicians for the segmented area of all images was good (ICC = 0.903). The loss (Loss) and accuracy (acc) curves of the training (Figs. 4, 5) showed that the Loss curve was essentially zero by epoch 300, and the acc curves of the training, validation and test groups gradually levelled off when the epoch reached 300, indicating that the training did not show overfitting.
Diagnostic efficacy of deep learning cardiac ultrasound blood flow techniques
In the training group the model had a sensitivity of 0.940, a specificity of 0.882 and a ROC curve AUC of 0.978; in the validation group the ROC curve AUC was 0.985, with a sensitivity of 1.000 and a specificity of 0.974; and in the test group the ROC curve AUC was 0.974, with a sensitivity of 0.867 and a specificity of 1.000 (Figs. 6, 7, 8).
Discussion
HCM is a primary cardiomyopathy caused by structural and functional abnormalities of the myocardium, and arrhythmias are a common complication in patients with HCM [18]. Until now, the prediction of the risk of sudden death due to arrhythmias in HCM has been poorly identified [19]. The incidence of arrhythmias in patients with HCM in this study was 39.2% (62/158), which also indicates a high risk of arrhythmias in patients with HCM, and once arrhythmias occur in patients with HCM, they can lead to atrial thrombosis and ventricular tachycardia, increasing the risk of heart failure, stroke and sudden cardiac death [20]. Therefore, it is essential to analyse the risk factors for screening patients with HCM for complications of arrhythmias and their prediction. This is why it is essential to analyse the risk factors and their prediction in patients with HCM. In this study, a deep learning model was developed using cardiac ultrasound flow imaging to predict the risk of complications of arrhythmias in patients with HCM. In this study, a total of 158 patients with hypertrophic cardiomyopathy were selected and divided into training group, validation group and test group. There was no statistically significant difference between the general information of the three groups for the follow-up experiment. The patients in the three groups were divided into concurrent and non-concurrent groups according to whether they were complicated by arrhythmias or not. The differences in age, gender, BMI, systolic blood pressure, diastolic blood pressure and HR between the patients in the concurrent and non-concurrent groups in the training group were not statistically significant, P > 0.05. The rate of change of vortex area, rate of change of circulatory intensity, mean blood flow velocity, mean EL, LAVI and E/e' in the non-concurrent group were significantly lower than those of the concurrent group, and LVEF was significantly higher in patients in the non-concurrent group than in the concurrent group. A multifactorial analysis later showed that vortex area rate of change, circulatory intensity rate of change, mean blood flow velocity, mean EL, LAVI, and E/e' were all risk factors for arrhythmias in hypertrophic cardiomyopathy, and LVEF was a protective factor for arrhythmias in hypertrophic cardiomyopathy. VFM is a new hydrodynamic evaluation technique for visual display and quantitative assessment. The incongruity of the luminal canal configuration and wall due to cardiomyopathy and valvular disease affects the formation and movement of vortex flow, thus increasing the level of EL. Abnormal changes in EL can lead to changes in the overall structure and function of the myocardium, so the location, morphology and extent of vortex formation are highly relevant to the structure and function of the left ventricle, and in addition patients with arrhythmias have significantly abnormal blood flow velocity indicators [21,22]. The rate of change in vortex area, mean blood flow velocity and mean EL in patients with HCM complicated by arrhythmia were also consistent with the results of this study.
Previous studies have shown [23] that LVEF and diastolic function are protective factors against complications of arrhythmias in patients with HCM. LVEF is a quantitative indicator of cardiac systolic function, and a greater LVEF value indicates greater myocardial contractility and a lower incidence of arrhythmias. Atrial arrhythmias can be caused by increased left atrial internal diameter, increased left atrial pressure, disproportionate enlargement, atrial myocardial degeneration, increased stress, and inconsistent conduction and nonconformity. In addition, in the study by Casella et al. [24] cardiomyopathy complicated by arrhythmias, cardiac ultrasound showed an increased right atrial right ventricular internal diameter and a widened right ventricular outflow tract internal diameter, and the right atrial right ventricular internal diameter was larger than that of patients with cardiomyopathy. This also suggests that the right atrial intraventricular diameter may be a factor in arrhythmias associated with cardiomyopathy. Significantly elevated LAVI has been found in patients with arrhythmias and its role in the development of arrhythmias [25]. The E/e' ratio has been widely used in the assessment of left ventricular diastolic filling pressures in various cardiac disease processes. Impaired left ventricular diastolic function and the resulting increase in left ventricular filling pressure can lead to stagnation of blood flow in the left atrium and left atrial thrombosis [26]. Therefore, in addition to indicating increased LV diastolic filling pressures, an increase in the E/e' ratio may also indicate an increased risk of left atrial stasis and thrombosis, which can be a cause of sudden cardiac death in arrhythmias. Finally the loss function Loss and acc change curves for the deep learning model training established in this study showed that the training for did not show overfitting, indicating a better model. In addition, the ROC curve AUC value for the model in the training group was 0.978, the ROC curve AUC value for the validation group was 0.985 and the ROC curve AUC value for the test group was 0.974, with high sensitivity and specificity and high predictive value. In summary, deep learning-based cardiac ultrasound flow imaging can identify cardiac ultrasound images more accurately and has a high predictive value for arrhythmias complicating hypertrophic cardiomyopathy, and vortex area change rate, circulation intensity change rate, mean flow velocity, mean EL, LAVI, and E/e' are all risk factors for arrhythmias complicating hypertrophic cardiomyopathy, and LVEF is a Protective factors for arrhythmias in hypertrophic cardiomyopathy. The use of cardiac ultrasound flow imaging should focus on abnormalities in these parameters to avoid arrhythmias in hypertrophic cardiomyopathy. There are some limitations to this study. Due to the limited sample size of this study, future studies should include a larger sample size to explore the effectiveness of deep learning models to identify arrhythmias complicating hypertrophic cardiomyopathy. In addition, cardiac ultrasound flow imaging was used in this study, and subsequent studies could use combined diagnostic images for deep learning modelling to better predict arrhythmias complicating hypertrophic cardiomyopathy.
Risk factors for aneurysm rupture among Kazakhs: findings from a national tertiary
Background Rupture of intracranial aneurysms (RIA) leads to subarachnoid hemorrhage (SAH) with severe consequences. Although risks for RIA are established, the results vary between ethnic groups and were never studied in Kazakhstan. This study aimed to establish the risk factors of RIA in the Kazakh population. Methods Retrospective analysis of 762 patients with single IAs, who attended the neurosurgical center from 2008 until 2018, was conducted. Demographic characteristics, such as age, sex, smoking status, and hypertension were considered. Descriptive and bivariate analyses were performed. A multivariable logistic regression model was built to identify factors correlated with RIA. Results The mean age of participants was 48.49 ± 0.44 years old. The majority (68.37%) of IAs have ruptured. Of the ruptured aneurysms, 43.76% were < 6 mm, and 38.39% were located on the anterior cerebral and anterior communicating arteries (ACA). Logistic regression model indicates younger age group (16–40 years), smoking, having stage 3 hypertension, smaller IA size and its location on ACA increase the odds of rupture. Conclusions This study has revealed that younger, smoking patients with stage 3 arterial hypertension are at higher risk for RIA. Small aneurysms (< 6 mm) and location on ACA had increased odds of rupture, while larger aneurysms on internal carotid arteries had lower odds.
Background
Intracranial saccular aneurysms develop when the wall of a cerebral artery becomes too weak to resist hemodynamic pressure [1]. A RIA causes SAH, which is associated with a high rate of mortality, with an in-hospital mortality rate of 18% [2], and with permanent disability [3,4]. Ruptured IAs account for about 85% of SAH cases [5]. With such devastating outcomes, it is urgent to identify IAs at risk and prevent them from rupturing. However, many IAs remain stable throughout the patient's lifetime, with one lifelong cohort study reporting rupture in about one-third of participants with IA [6]. Therefore, identifying markers that differentiate stable IAs from those at risk of rupture would be essential to avoid unnecessary surgical interventions as well as to avoid neglecting patients at risk of SAH.
The prevalence of IAs was estimated to be 3.2% in the general population worldwide [7]. The population of Kazakhstan is around 18,700,000 people, around 67.5% of whom are ethnic Kazakhs [8,9]. About 500,000 people develop IAs, and the annual incidence of IAs is estimated to be 5.4 cases per 100,000 people [10,11]. However, the true number of IAs in Kazakhstan remains unknown, as available statistics are limited to those who attended diagnostic clinics and hospitals.
Age, sex, arterial hypertension, smoking, history of SAH, aneurysm size, and location are all known risk factors for RIA [12][13][14]. The average age of the population with IA varies between previous studies [15][16][17][18][19]. A large multinational study found that the average age of patients with IA is 50.4 years [17]. Country-specific findings varied: the average age at IA diagnosis is 53.2 years in Finland [18], 52 years in the Netherlands [16], and 60-63 years in Japan [15,19]. According to a large cohort study, the female-to-male ratio of IA occurrence in the adult population was 3:1 [20]. It was found that in 90% of cases the aneurysm is less than 10 mm in diameter, and the anterior cerebral circulation accounts for 90% of IA locations [21]. Some 20-29% of people with IAs have multiple aneurysms [13,22]. In addition, ethnic factors [12,14] and irregular IA shape [15,23,24] play a role in the risk of rupture.
Significant differences in RIA risk factors were found among different study population groups. Specifically, several Japanese [15,25], Finnish [6,18] and Australian [26] studies have demonstrated higher risks of RIAs < 7 mm in size than what has been previously reported in several international studies [20,27]. This study aims to define the risk factors associated with RIA among Kazakhs, as no such work has been done prior.
Study population
The study included 762 patients with single saccular IA of the cerebral vessels treated in the Department of Vascular and Functional Neurosurgery, National Center for Neurosurgery during the 10 year period from 2008 to 2018. Patients of Kazakh ethnicity and with saccular IA were included in the study. Ethnicity was defined through self-reported information. Data were derived from the patients' electronic and paper medical records.
Variables
Age, sex, smoking, arterial hypertension, size, and location of an aneurysm were analyzed. Arterial hypertension was classified into three groups according to WHO classification [28]. Information on arterial hypertension was derived from the unified Hospital Information System, self-reported information, and daily examination results.
Diagnosis, location, and sizes of IAs
IAs were diagnosed based on the CT, MRA, and CA data. SAH was diagnosed based on the CT scans. Vascular neurosurgeons reviewed the images. In cases of patients who had clinical symptoms of SAH, but had non-definitive CT scans, a lumbar puncture was taken for laboratory diagnosis to verify the diagnosis.
In this study, the location of an aneurysm was grouped into ICA, PCA, MCA, and ACA. Anterior communicating artery and anterior cerebral arteries were abbreviated as ACA. Posterior communicating, posterior cerebral, vertebral, and basilar arteries were abbreviated as PCA. The maximal diameter of an aneurysm was designated as the size of an aneurysm. IAs were grouped into four groups according to Yasargil's classification [29].
Statistical analysis
Data were analyzed using STATA 16 (Stata Corp, 2019). Descriptive analysis included reporting mean values with standard deviation and frequencies, where appropriate. The normal distribution of continuous data was checked using histograms and scatter plots. Associations between continuous and categorical variables were tested with Student's t-test, the Wilcoxon signed-rank test, and one-way analysis of variance (ANOVA), where appropriate. For categorical variables, the Chi-square test or Fisher's exact test was used. A logistic regression model was built using a backward stepwise approach to identify factors associated with an increase in aneurysm rupture risk.
Assumptions for logistic regression were tested with link test (_hat p-value < 0.001, _hatsq p-value = 0.934). Pearson goodness-of-fit test was used to estimate the fit of the model (p-value = 0.1858). P-value < 0.05 was considered statistically significant.
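A hedged sketch of the modelling step is given below (Python with statsmodels; the authors used STATA 16 and a backward stepwise elimination, which is not shown here; the file and column names are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient; file name and column names are hypothetical
df = pd.read_csv("aneurysms.csv")   # columns: ruptured (0/1), age_group, sex,
                                    #          size_group, location, htn_stage, smoker

formula = ("ruptured ~ C(age_group) + C(sex) + C(size_group) "
           "+ C(location) + C(htn_stage) + C(smoker)")
fit = smf.logit(formula, data=df).fit()

# Adjusted odds ratios with 95% confidence intervals
ors = np.exp(fit.params)
ci = np.exp(fit.conf_int())         # columns 0/1: lower and upper bound on the OR scale
print(pd.DataFrame({"OR": ors, "2.5%": ci[0], "97.5%": ci[1]}))
```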
Results
Seven hundred sixty-two patients with single aneurysm diagnosis data were reviewed (Table 1). Among them, 68% had ruptured aneurysms and approximately 32% had unruptured aneurysms. The mean age of the patients was 48.49 ± 0.44 years with the youngest being 16 years old and the oldest 81 years old. RIA occurred at a significantly higher proportion (79.66%) among the youngest age group of 16-40 years old. Approximately 60% of the participants were women and 40% were men. However, rupture occurred in men more frequently than in women. A quarter of the patients were smokers and almost 84% of smokers had ruptured aneurysms. Just under 30% of the patients had no arterial hypertension, and about 43% had stage 3 hypertension. The proportion of aneurysm rupture was significantly higher (80.66%) among those with stage 3 hypertension. The rest were diagnosed with stage 1 (4.72%) and stage 2 (22.97%) hypertension. The average size of the aneurysms was 9.81 ± 0.29 mm. Unruptured aneurysms were significantly larger than those that did rupture (13.41 ± 0.67 vs 8.15 ± 0.27). The majority of the aneurysms fell into the medium size category (44.09%); however, small-sized aneurysms had the highest proportion of rupture (78.62%). Most of the aneurysms (38.71%) were located on the ICA, but ACA had the highest proportion (87.82%) of ruptured aneurysms.
A correlation between RIA size and IA location was found for the ACA and ICA (Figure 1). As can be seen from the figure, the distribution of sizes was right-skewed, and sizes were significantly smaller among RIAs in the ACA and ICA. No such correlation was found for the MCA and PCA.
A multivariable logistic regression was built to determine factors associated with aneurysm rupture. All variables that showed statistical significance and had a biological basis were considered for the model. Our model included categorical age, sex of the participants, aneurysm size, aneurysm location, arterial hypertension, and smoking as factors influencing the odds of rupture (Table 2). Compared to the youngest age group (16-40 years), other age groups had decreasing odds of rupture, with the lowest likelihood (0.07 times) in the oldest age group, when adjusted for other variables. Men had 17% higher odds of rupture compared to women when adjusted for other variables. An increase in size decreased the likelihood of rupture compared to small aneurysms, with the smallest odds of 0.22 in giant aneurysms when adjusting for other variables. IA location on the ACA compared to the ICA increased the adjusted odds of rupture 5.43 times. Patients with stage 3 hypertension had a 12.4 times higher likelihood of aneurysm rupture compared to patients with normal arterial blood pressure when adjusting for other variables. The odds of rupture among smokers were 86% higher than among non-smokers when adjusted for other variables.
Discussion
This study aimed to identify risk factors associated with RIA in the Kazakh population. To our knowledge, this is the first study to analyze differentiating factors between those with ruptured and unruptured IAs. The participants of the study were patients who underwent treatment in the National Center for Neurosurgery (NCN) located in the capital of Kazakhstan, Nur-Sultan, which admits patients from all over the country. According to the results of the logistic model, such factors as younger age, active smoking status, stage 3 hypertension, location of the aneurysm on the ACA, and smaller aneurysm size (< 6 mm) increased the odds of aneurysm rupture. Although studies have shown an association of female sex with reduced risk of aneurysm rupture [30], in the model of this study the factor of sex becomes insignificant when adjusted for other variables. In an unadjusted analysis, men had 1.78 times higher odds of IA rupture, but when adjusted for other variables the odds fell to 1.17. Unadjusted, smoking increased the odds of rupture 3 times, but adjustment for other variables lowered this to 1.86 times, indicating a possible confounding effect. Among our participants, 87.5% of smokers were men.
In line with findings from other studies [30][31][32], the youngest age group in our sample had higher odds of RIA. There are several possible explanations for such observation. As was noted [31] in a previous study, this might be due to the slower blood flow rate, as well as due to the calcification of arterial walls among older patients. Meanwhile, Zhang and colleagues [32] found an association between the younger age of the participants and morphological features that are more likely to lead to the rupture, such as the presence of daughter and irregular domes, larger flow angle, and other features. Unfortunately, such factors were not available for analysis in this study.
Hypertension was another factor that had the largest effect on the odds of rupture. Stage 3 hypertension increased the odds of RIA by 12 times when compared to the patients with normal blood pressure. Similar results to a varying degree were found in other studies [33][34][35].
Multivariable logistic regression showed that aneurysms located on ACA had 5.43 times increase in odds of rupture compared to ICA. Different studies have found different sites to be more prone to rupture. For example, the ARETA study found an association of ACA/ Acom location with RIA [35], while the PHASES study had shown a higher correlation of RIA with PCA and PCoA [12]. In this study sample, the majority (38.71%) of the aneurysms were located on ICA, but only 50.51% of them had ruptured. Meanwhile, among the 29.92% of aneurysms that were located on ACA 87.82% have ruptured. Overall, aneurysms located on ACA accounted for 38.39% of all ruptured IAs in our sample.
Interestingly, smaller aneurysms in this study had a higher likelihood of rupture, while large and giant IAs had a lower probability of rupture. The correlation of rupture with the size of the aneurysm was tested in different locations of the aneurysm, and RIAs on ACA were statistically significantly smaller than unruptured IAs in the same location. Similarly, the same correlation with size was found on ICA. PCA and MCA did not have a significant difference in the sizes of ruptured and unruptured aneurysms. This result confirms findings that suggest anterior communicating artery aneurysms have a higher probability of rupture at a smaller size [36].
There also could be a genetic factor influencing the prevalence of RIA on ACA among our sample. As was found in a previous study, 13 SNPs are associated with the risk of development and rupture of aneurysms in the Kazakh population [37]. Other factors, such as morphological features of IAs, were found to be good predictors of IA rupture [38]. However, no correlation was found between morphological features that predispose to rupture and the size of the IA [39]. Therefore, a subset of small aneurysms can be at heightened risk of rupture, which could be the case in our sample.
A study conducted in Finland has shown that 68% of RIA were smaller than 10 mm [18]. The majority of the aneurysms among the population they studied were located on MCA and ACoA. The results differ from those found in a pooled analysis, which indicated location on PCoA as well as aneurysms size ≥ 20 mm as risk factors for North American and European countries [12]. Finland and Japan were the only exceptions. A Japanese study has shown 64.6% of the small IAs (< 6 mm) and 73.9% of medium IAs (≥ 6-15 mm) have ruptured with the majority of the IAs located on ACA [40], which is similar to findings in our study.
Although the PHASES score [12] developed to estimate the risk of RIA suggests minimal risks of rupture for those younger than 70 and with smaller IA (< 10 mm), these recommendations are contrary to the findings of the studies conducted in Japan and Finland. Our study has also confirmed the necessity for a more thorough examination of IAs without the assumption that smaller IA would not rupture. As was noted by Zanaty et al. there is an increasing pool of evidence suggesting a high risk of rupture for smaller aneurysms. The study also emphasized the importance of establishing specific indicators that identify the instability of the small aneurysms at risk of rupture [39].
Population-specific studies are important for establishing more accurate risk factors for the populations, as the findings suggest that increased attention to smaller IAs could decrease SAH incidences among Kazakhs.
Limitations
This study has its limitations, one of which is its retrospective nature. Important variables such as diabetes status, alcohol consumption, severity of smoking, presence of atherosclerosis, and family medical history of the patients were not studied. In addition, the data were collected from one neurosurgical center, which mainly performs elective surgeries. It was not possible to include patients who had passed away suddenly from SAH, which could distort the reported average size of RIAs. Convenience sampling of the participants allows us to find association but not causation and does not allow us to extrapolate our findings to the whole population.
Conclusions
This is the first study to examine factors associated with RIA among the Kazakh population. It has identified younger age, smoking status, stage 3 hypertension, size < 6 mm, and location on the ACA as risk factors for RIA among Kazakhs. Although larger IA size is a major risk factor in many North American and European studies, the Kazakh population coincides with findings from the Finnish and Japanese cohorts, where smaller IAs were at higher risk of rupture. This study confirms the need for a more thorough examination of IAs on such aspects as morphology regardless of size, and the need to pay more attention to ethnic differences in risk factors, especially in ethnically diverse countries. Future prospective cohort studies should be conducted to better understand the etiology of IA.
Comparability of Heart Rate Turbulence Methodology: 15 Intervals Suffice to Calculate Turbulence Slope – A Methodological Analysis Using PhysioNet Data of 1074 Patients
Heart rate turbulence (HRT) is a characteristic heart rate pattern triggered by a ventricular premature contraction (VPC). It can be used to assess autonomic function and health risk for various conditions, e.g., coronary artery disease or cardiomyopathy. While comparability is essential for scientific analysis, especially for research focusing on clinical application, the methodology of HRT still varies widely in the literature. Particularly, the ECG measurement and parameter calculation of HRT differs, including the calculation of turbulence slope (TS). In this article, we focus on common variations in the number of intervals after the VPC that are used to calculate TS (#TSRR) posing two questions: 1) Does a change in #TSRR introduce noticeable changes in HRT parameter values and classification? and 2) Do larger values of turbulence timing (TT) enabled by a larger #TSRR still represent distinct HRT? We compiled a free-access data set of 1,080 annotated long-term ECGs provided by Physionet. HRT parameter values and risk classes were determined both with #TSRR 15 and 20. A standard local tachogram was created by averaging the tachograms of only the files with the best heart rate variability values. The shape of this standard VPC sequence was compared to all VPC sequences grouped by their TT value using dynamic time warping (DTW) in order to identify HRT shapes. When calculated with different #TSRR, our results show only a little difference between the number of files with enough valid VPC sequences to calculate HRT (<1%) and files with different risk classes (5 and 6% for HRT0-2 and HRTA-C, respectively). In the DTW analysis, the difference between averaged sequences with a specific TT and the standard sequence increased with increasing TT. Our analysis suggests that HRT occurs in the early intervals after the VPC and TS calculated from late intervals reflects common heart rate variability rather than a distinct response to the VPC. Even though the differences in classification are marginal, this can lead to problems in clinical application and scientific research. Therefore, we recommend uniformly using #TSRR 15 in HRT analysis.
Heart Rate Turbulence
With a simple point-of-care investigation of the heart rate, it is possible to estimate the condition and prognosis of patients. A possible method is HRT, which is a naturally occurring phenomenon that arises after a VPC (2): The characteristic pattern comprises an initial drop of interval length (IL) followed by slowly increasing and afterward decreasing length (refer to the Supplementary Figure 1 for a visual representation). This heart rate fluctuation is provoked by the ineffectiveness of the premature beat, which leads to a drop of blood pressure and activates the baroreflex (3,4).
Because of this dependency on the autonomic nervous system (ANS), HRT can be used as a marker for autonomic health (5). Studies have shown that HRT parameters can be useful risk indicators for all-cause mortality after myocardial infarction or chronic heart failure (5,6). In combination with other risk indicators, HRT can be used in clinical diagnostics to make therapeutic decisions (7)(8)(9). Several methods for the inclusion of HRT in implantable cardioverter defibrillators have already been suggested (10)(11)(12). Similarly, GE Healthcare implemented HRT assessment in their Holter analysis software tools MARS and CardioDay, which both have already been used for HRT analysis (13)(14)(15).
For HRT, there are three main parameter values that can be calculated (check the Supplementary Figure 1 for a graphical depiction): turbulence onset (TO) describes the first drop of the IL after the VPC compared to the intervals before the VPC. It is, therefore, a marker for the parasympathetic response. TS describes the steepest slope of the tachogram after the compensatory interval (compI). The third parameter TT is the index of the first interval that shows TS (4). Both TS and TT are markers for the sympathetic and parasympathetic activity.
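As a minimal sketch of how the three parameters are typically derived from a single VPC snippet, assuming the standard definitions (TO from the two intervals before and after the VPC, TS as the steepest regression slope over five consecutive post-VPC intervals, TT as its starting index); the actual analysis in this article used the RHRT package:

```python
import numpy as np

def hrt_parameters(pre_rr, post_rr, ts_window=5):
    """Compute TO, TS and TT for a single VPC snippet.

    pre_rr  : the two normal RR intervals preceding the coupling interval (ms)
    post_rr : the normal RR intervals following the compensatory interval (ms),
              e.g. 15 or 20 values depending on #TSRR
    """
    # Turbulence onset: relative change of the first two post-VPC intervals
    # with respect to the two intervals before the VPC, in percent
    to = (sum(post_rr[:2]) - sum(pre_rr[-2:])) / sum(pre_rr[-2:]) * 100.0

    # Turbulence slope: steepest regression slope over any ts_window consecutive
    # post-VPC intervals; turbulence timing is the 1-based start index of that window
    ts, tt = -np.inf, None
    for i in range(len(post_rr) - ts_window + 1):
        slope = np.polyfit(np.arange(ts_window), post_rr[i:i + ts_window], 1)[0]
        if slope > ts:
            ts, tt = slope, i + 1
    return to, ts, tt
```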
Although the #TSRR was described in the standards as being 15 (16), many studies use 20 instead as suggested in the first description of HRT (17). The first article to give 15 as #TSRR is Barthel et al. (18), but without giving a reason for changing the original method. In reviews about HRT, there is a switch from suggesting #TSRR 20 at first (19)(20)(21) to #TSRR 15 in recent years (2, 5, 22-25). However, many publications of late still use #TSRR 20 (26)(27)(28)(29)(30).
Abbreviations: ANS, autonomic nervous system; compI, compensatory interval; couplI, coupling interval; CRAN, The Comprehensive R Archive Network; DTW, dynamic time warping; ECG, electrocardiogram; HRT, heart rate turbulence; HRV, heart rate variability; IL, interval length; IQR, interquartile range, i.e., the difference between the upper and lower quartiles; ISHNE, International Society for Holter and Noninvasive Electrocardiology; NR, not reliable; nRMSSD, RMSSD normalized for heart rate; nTS, TS normalized after (1); numTSRR (#TSRR), number of RR intervals in which TS is calculated; postRRs, RR intervals in a VPCS following the compI; preRRs, RR intervals in a VPCS before the couplI; refI, reference interval; RMSSD, square root of the mean of the squared successive differences between adjacent RR intervals; SD, standard deviation; SD1, SD of data points in a Poincaré plot projected to the axis perpendicular to the line of identity; SD2, SD of data points in a Poincaré plot projected to the line of identity; SDANN, standard deviation of the averages of all normal sinus rhythm intervals in any 5 min segments; SDNN, standard deviation of the averages of all normal sinus rhythm intervals; stVPCS, standard VPCS; TO, turbulence onset; TS, turbulence slope; TT, turbulence timing; TTSD, SD of TT; VPCS, VPC snippet, i.e., all RR intervals surrounding the VPC used for HRT calculation; VPC, ventricular premature contraction.
Rationale and Scope
Comparability is one of the key factors of scientific research, especially when developing techniques and workflows used in clinical medicine. Methodological variance diminishes comparable data and can lead to seemingly contradictory results, which make it difficult to assess the usefulness of a technique for a particular use case. For HRT, a standard methodology has been published in the "International Society for Holter and Noninvasive Electrocardiology (ISHNE)" Consensus (16). However, many studies still use different methods to assess HRT (31) causing the aforementioned difficulties.
Until now, no study has analyzed the difference in HRT parameter values when calculated from different #TSRRs. A higher #TSRR increases the risk that artifacts and other arrhythmias lie in the required calculation range, which leads to an exclusion of the affected VPC snippets (VPCSs, i.e., all RR intervals surrounding the VPC used for HRT calculation). Conversely, with a lower #TSRR, these compromising intervals may lie outside of the needed calculation range for some VPCSs, which would make them shorter but valid sequences for HRT assessment. In consequence, a change in #TSRR can lead to a selection of different sets of VPCSs and, therefore, affect all HRT parameter values of a person.
Since HRT is triggered by a VPC via the baroreflex, it is plausible that the reaction should arise without any delay. This means that the slope that represents the turbulence should arise first in direct proximity to the compI and second always after a similar time period. Accordingly, TS calculated from either only late intervals or intervals with widely differing indices may only describe random fluctuation rather than a reaction of the ANS. Because TT describes the localization of TS, it can be used to test this assumption.
In this article, we analyze two hypotheses:
• Hypothesis 1: There is a distinct difference in HRT parameters when calculating HRT with #TSRR 15 or 20. We test this on a large free-access data set from Physionet and compare the resulting HRT parameters and classes.
• Hypothesis 2: Persons with a high TT value or a high TT variability do not show HRT, but seemingly random fluctuations, i.e., heart rate variability (HRV). We therefore create an averaged ideal standard VPCS (stVPCS) with distinct HRT by filtering the Physionet data set via HRV parameters. This standard VPCS is then compared with sequences that have been averaged from VPCSs sorted for their respective TT value.
Data
We used databases available on physionet.org (32). The databases had to include annotations of long-term electrocardiograms (ECGs) specifying the beat types. All databases that fit those criteria at the time of analysis (15.01.2021) are summed up in Table 1.
Since our analysis should be independent of the medical background of measurements, we did not exclude databases based on their scope. In sum, our analysis included 1,080 annotation files. If possible, we preferred annotations that were manually corrected, although most of the databases only included automatically generated annotations.
The RHRT Package
For the calculation of the HRT parameter values of each annotation file, we used our R package RHRT (v. 1.1) (38). RHRT provides functions to find VPCSs in time intervals and calculate HRT parameter values with customisable filtering criteria, order of calculation and normalization. The package can be found on The Comprehensive R Archive Network (CRAN) (https://CRAN.R-project.org/package=RHRT) and on GitHub (https://github.com/VBlesius/RHRT). The default methodology of filtering, calculation, and classification is done as suggested in Blesius et al. (31), which mostly follows the ISHNE consensus (16). In contrast to the standards, we use 5 instead of 2 RR intervals in a VPCS before the couplI (preRRs), because the preceding intervals are used to calculate the reference interval (refI) and must, therefore, be included in the filtering process. Furthermore, we use TS normalized after (1) (nTS), which is TS normalized for heart rate and #TSRR. A detailed description can be found in the Supplementary Data Sheet 1 and the documentation of the package.
Table 1 notes: The IDs correspond to the URLs under which the databases are accessible online. The number of ECGs mostly matches the number of recorded persons in each database; only for ltstdb were four of the 80 persons recorded multiple times. The columns Length and corrected give the median length of the records and whether the annotations were manually corrected, respectively. For sddb, only a part of the records had manually corrected annotations. Information that could not be found, i.e., whether annotations were corrected or the scope of the study, was marked as "unknown". The version of all databases is 1.0.0, and they can be found on https://physionet.org/about/database/ (32).
Other R Packages
Statistical differences between data sets were calculated with the stats package (v 4.
Comparing Data With #TSRR 15 and 20
We assessed HRT of all files twice with the default parameters of RHRT and the settings numPostRRs = 15 (TSRR15) and numPostRRs = 20 (TSRR20). Additionally, we created a data set from the valid VPCs included in the analysis with #TSRR 20, but calculated HRT results with numPostRRs = 15 (TSRR15∩). This leads to a data set with identical VPCs but shorter VPCSs that allows comparison without considering filtering effects. We then calculated the arithmetic mean and SD of HRT parameter values and classified the data into HRT0-2 and HRTA-C. Depending on the number of files with enough valid VPCSs, either a Welch's unequal variances t-test or a paired Student's t-test (both with t.test of the stats package) were used to detect differences between the sets of parameter values.
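To illustrate the effect of #TSRR on a single snippet, the following self-contained sketch (illustrative RR values; the real analysis was done with RHRT) computes TS and TT from the same post-VPC sequence truncated to 15 and to 20 intervals:

```python
import numpy as np

def ts_tt(post_rr, win=5):
    """Steepest 5-interval regression slope (TS) and its 1-based start index (TT)."""
    slopes = [np.polyfit(np.arange(win), post_rr[i:i + win], 1)[0]
              for i in range(len(post_rr) - win + 1)]
    return max(slopes), int(np.argmax(slopes)) + 1

# 20 post-compensatory RR intervals of one hypothetical VPCS (ms)
post = [735, 742, 748, 752, 754, 753, 752, 751, 750, 749,
        748, 747, 746, 745, 744, 750, 758, 767, 777, 788]

print(ts_tt(post[:15]))  # #TSRR 15: the late rise lies outside the search range
print(ts_tt(post[:20]))  # #TSRR 20: TS/TT jump to the late, possibly HRV-driven slope
```

TO is unaffected by the truncation, since it only uses the first two post-VPC intervals, whereas TS and TT can differ between the two settings.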
Creating a stVPCS
The purpose of the analysis was to compare VPCSs sorted on the basis of TT with an stVPCS. Since HRT is most pronounced in persons without autonomic dysfunction as reviewed in Bauer et al. (16), we needed to select files of supposedly healthy persons. However, the databases used do not include data from persons labeled as healthy. It may be assumed that NSRDB and NSR2DB are comprised of such data, but manual inspection of Poincaré plots revealed abnormal patterns throughout both databases. Therefore, we used HRV to filter the files of all databases that have the most sound values of these autonomic markers. The filtering process included three steps:
Length of Files
At first, all files that were shorter than 20 h or longer than 28 h were discarded. Since HRT is correlated with the circadian rhythm of the ANS, it should be calculated from measurements with a length as close as possible to 24 h, or a multiple thereof if procurable. Applying the following filters, especially the Poincaré filter, to full measurements can lead to the discarding of basically valid data due to temporary irregularities in heart rhythm. Furthermore, variability in the length of the measurements can lead to a bias in HRV parameter values (39). Therefore, we cut the measurements into snippets of 30 min and applied the following filters to these chunks. Only data files with at least 75% valid chunks were passed on by the Poincaré filter to the next step. In the HRV filter, the mean of all chunks was calculated for every parameter before ranking the measurements.
Poincaré Filter
As a next step, we used a filter that quantifies the data distribution within a Poincaré plot: this non-linear method of HRV analysis plots data points of a time series against their respective successors to visualize the beat-to-beat variability of RR intervals. Any pathology that affects the length of RR intervals causes distinct patterns in the Poincaré plots. These patterns have been systematically analyzed and categorized by Esperer et al. (40) and were called lorenz plot patterns. Plots from persons with sinus rhythm show so-called "comets" or "torpedos", which are shaped as long cones or ellipses, respectively (refer to Figure 1A). Other lorenz plot patterns are:
• "island" patterns consisting of four or nine roundly shaped clusters that are connected to atrial tachycardia or atrial flutter, both with atrioventricular block (Figure 1D).
• "fan" patterns, which look like broader spread torpedos or triangles and occur in persons with atrial fibrillation or multifocal atrial tachycardia (Figure 1C).
• "lobe" patterns consisting of one central and several eccentric clusters, which occur due to frequent VPCs or atrial premature contractions (Figure 1B).
For our analysis, we focused on filtering out Poincaré plots with island and fan patterns since they are specific for different kinds of atrial arrhythmia and atrioventricular block. This leaves torpedos, comets, and lobe patterns that show sinus rhythm or possible VPCs. Since highly frequent VPCs are an indicator of high risk (41)(42)(43), we focused on plots that show mostly comets and torpedos. The chunk-wise analysis of the plots left enough VPCs in the resulting data. In contrast to most other shapes, torpedos and comets consist of just one evenly shaped cluster. We use this fact for two conditions of our filter: First, we projected all data points onto the axis perpendicular to the line of identity and analyzed their distribution. After taking the logarithm and smoothing, histograms with more than one extremum exceeding 40% of the maximum were excluded to rule out strong side lobe and island patterns (refer to Figure 1). Second, we calculated the SD of the projected points, which is the HRV parameter SD1. Analogously, we calculated SD2 from the diagonal. The ratio of SD2 to SD1 had to exceed 1.5, since a lower ratio proved to be indicative of broader spread patterns like islands or fans. This cut-off was deduced from Esperer et al. (40), who showed that fans in particular have a length-to-width ratio of the central cluster of less than 1. We used SD1 and SD2 here since they are more commonly known and the exact methodology to create a cluster has not been described in that article.
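A compact version of the two cluster-shape checks might look as follows (a Python sketch; the bin count, smoothing width and use of log1p are assumptions, while the 40% extremum criterion and the 1.5 elongation cut-off follow the description above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def poincare_chunk_ok(rr, ratio_cutoff=1.5, peak_frac=0.4):
    """Shape check for one 30-min chunk of RR intervals (ms)."""
    x, y = np.asarray(rr[:-1], float), np.asarray(rr[1:], float)
    perp = (y - x) / np.sqrt(2.0)   # projection perpendicular to the identity line -> SD1
    diag = (y + x) / np.sqrt(2.0)   # projection onto the line of identity -> SD2
    sd1, sd2 = perp.std(), diag.std()
    if sd2 / sd1 <= ratio_cutoff:   # broad patterns (fans, islands) fail this check
        return False
    # Histogram of the perpendicular projection: log, smooth, count strong maxima
    hist, _ = np.histogram(perp, bins=100)
    smooth = gaussian_filter1d(np.log1p(hist.astype(float)), sigma=2.0)
    peaks, _ = find_peaks(smooth)
    strong = smooth[peaks] > peak_frac * smooth.max()
    return strong.sum() <= 1        # more than one strong extremum: lobe/island pattern
```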
HRV Filter
The last filtering step is based on the HRV time and frequency domain of the data. We calculated the following HRV parameters with the RHRV package: SD of the averages of all normal sinus rhythm intervals (SDNN), SD of the averages of all normal sinus rhythm intervals in any 5 min segments (SDANN), triangular index, i.e., the total number of all normal sinus rhythm intervals divided by the maximum of the interval frequency distribution, square root of the mean of the squared successive differences between adjacent RR intervals (RMSSD), very low frequency power, low frequency power, high frequency power, and the ratio of low and high frequency power. For every parameter, all files were ranked for their HRV values, respectively: Files with a value that exceeded three times the interquartile range, i.e. the difference between the upper and lower quartiles (IQR), were considered to be outliers. If their values were greater or less than the median±3·IQR they were given the penalty score "-1." For all HRV parameters, high values were assumed to be better, only for triangular index lower values were scored higher. Accordingly, the files were sorted by their HRV value and the best 20% received the score "1, " while the remaining received "0." After this scoring process for every HRV parameter, the scores were summed up for every file, leading to possible scores from -8 (all parameter values are outliers) to 8 (all parameter values are in the top for their respective parameter). On the basis of the scores, the highest ranking 20% of the files were used to create the stVPCS. Heart rate turbulence of all top ranking files was calculated with the RHRT package. All HRT calculations were done with the default settings of the RHRT package except for "numPostRRs" (#TSRR) for which we used 20 intervals because the longer range is the maximum of commonly used #TSRR and provided more intervals for later comparisons. For each file, the averaged VPCS was used as the basis to calculate an overall averaged VPCS (stVPCS).
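The per-parameter scoring could be sketched as below (Python; the rank-based selection of the best 20% and the median ± 3·IQR outlier rule follow the description above, while the handling of ties is an assumption):

```python
import numpy as np
import pandas as pd

def hrv_scores(hrv, lower_is_better=("triangular_index",), top_frac=0.2):
    """hrv: DataFrame with one row per file and one column per averaged HRV parameter."""
    total = pd.Series(0, index=hrv.index)
    for col in hrv.columns:
        vals = -hrv[col] if col in lower_is_better else hrv[col]
        q1, q3 = vals.quantile([0.25, 0.75])
        iqr = q3 - q1
        outlier = (vals < vals.median() - 3 * iqr) | (vals > vals.median() + 3 * iqr)
        top = vals.rank(pct=True) >= 1.0 - top_frac   # best 20% for this parameter
        total = total + np.where(outlier, -1, np.where(top, 1, 0))
    return total  # -8 .. 8 for eight parameters; the top-scoring 20% feed the stVPCS
```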
HRT Values
We calculated the HRT parameter values of every file in our databases with the default settings of the RHRT package except "numPostRRs" for which we used 30 intervals to ensure a wide range of possible TT values.
DTW With stVPCS
We extracted the RR intervals in a VPCS following the compI (postRRs) of every averaged VPCS, grouped them based on their respective TT, and calculated an averaged postRRs sequence for every TT. For the next step of matching the postRRs of the stVPCS to every averaged postRRs sequence via DTW, we tested two methods: First, we matched the standard sequence dynamically to the averaged sequences with the default step pattern "symmetric2" of the dtw function. Second, we removed the leading intervals of the standard sequence before the TT to receive a sequence that only consists of the intervals that shape the TS and all following intervals. The averaged sequences were cut accordingly and shortened to fit the standard sequence. The standard sequence was matched to all averaged sequences index by index with the dtw step pattern "rigid".
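A self-contained DTW distance between the standard post-VPC sequence and an averaged sequence can be sketched with plain dynamic programming (the original analysis used the R dtw package; the weights below mimic a symmetric step pattern):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW with diagonal steps weighted twice (similar to 'symmetric2')."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    cost = np.abs(a[:, None] - b[None, :])   # local cost matrix
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],                           # step along the first sequence
                acc[i, j - 1],                           # step along the second sequence
                acc[i - 1, j - 1] + cost[i - 1, j - 1])  # diagonal step counted twice
    return acc[n, m]

# st_post: post-VPC intervals of the stVPCS; avg_post: dict of averaged sequences per TT
# distances = {tt: dtw_distance(st_post, seq) for tt, seq in avg_post.items()}
```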
Intra-Subject Variability of TT
As a measure of the variability of TT within a file, we calculated the SD of TT (TTSD) and the Pearson correlation coefficient for TT and TTSD.
Comparing Data With #TSRR 15 and 20
The resulting HRT parameter values and risk classes are summarized in Tables 2, 3. When comparing the HRT parameters of TSRR15, TSRR20, and TSRR15∩ (data in TSRR20 recalculated with #TSRR 15), the most influenced parameter is TT with 5.47 ± 2.38 (TSRR15∩) and 5.75 ± 3 (TSRR20) (refer to Table 4). The TO values of TSRR15∩ and TSRR20 were identical, while the mean differences of the TS and TT values were 0.06 (CI 0.03 to 0.09, p = 4.9·10−4) and 0.4 (CI 0.28-0.53, p = 2.9·10−10), respectively. The most differing values of the unpaired t-tests were the TT values of TSRR15 and TSRR20 with CI -0.5 to 0.03 and p = 0.08. The p-values of all other unpaired t-tests ranged from 0.71 to 0.99, with differences of the arithmetic means between 0.001 and 0.043. A noticeable difference is the high number of TT values that were NR with #TSRR 20 (79) compared to both #TSRR 15 analyses (9 and 6, respectively).
(Table note: Of the 682 files that could be calculated in both TSRR15 and TSRR20, 43 files are classified differently. NR, not reliable.)
Creating a stVPCS
Of the 1,080 annotation files included in the analysis, 70 files were shorter than 20 h and 1 file longer than 28 h. Thus, they were excluded. Of the 1,009 remaining files, 652 were removed through the Poincaré filter, leaving 357 files.
After HRV parameter calculation and averaging, 33 of the files contained at least one outlier. The median score of the files was 1, with a minimum of -7 and a maximum of 7.
From the best 20% (71 files), HRT was calculated. In 24 files, no or too few valid VPCSs could be found. While most of the remaining 47 files showed a distinct HRT pattern (refer to Figure 2), some did not (refer to Figure 3). Table 5 shows a detailed overview of the number of filtered files broken down by databases.
After classification, 41 of the files used for the stVPCS had HRT class HRT0, 5 files had HRT1, and 1 file HRT2. Of these files, 7 (2 HRT0, 4 HRT1, 1 HRT2) are marked as not reliable by the RHRT package. When adding TT to the classification, 40 files had HRT class HRTA and 7 files HRTB, whereas the classifications of 10 files (4 of HRTA, 6 of HRTB) are marked as unreliable.
Because the stVPCS should be used for comparison as the ideal HRT shape, it is important that it shows a pronounced reaction to the VPC and low-risk HRT parameters. The sequence averaged from all 47 VPCSs showed a distinct HRT pattern (refer to Figure 4), with TO = 3.12%, TS = 7.85 ms/RR, and TT = 3. Therefore, it falls in the lowest risk categories HRT0 and HRTA. The parameter nTS could not be calculated for the stVPCS because RMSSD needs to be calculated from a respective long-term measurement, which is not applicable for the averaged VPCS.
HRT Values
The HRT parameter values of all files sorted by TT can be seen in Figure 5 and in the Supplementary Table 1 in more detail. Half of the files have a TT between 4 and 8. The median TS is above the threshold of 2.5 ms/RR for TT values 6 and lower and under the threshold for most higher TT values. For high TT values, the median of TS varies, whereas the number of files in these groups is considerably lower. Unequal distribution is noticeable since the groups with more than 20 files (TT of 2 to 10) include 82% of all files. Analogously to TS, the median of TO is below the threshold for low TT values (1 to 11) and varies with increasing TT.
The parameter values of nTS worsen clearly and the pattern changes compared to the TS values: Only the medians of nTS from a TT of 2-4 still lie above the threshold. The nTS medians of all other TT values including 1 lie below the threshold. For many TT values, no file has an nTS value that exceeds the threshold.
DTW With stVPCS
FIGURE 4 | The stVPCS calculated from 47 files that matched all filter criteria and had the best HRV parameters.
The results of the DTW analysis are shown in Figures 6, 7. The plots for all TT values can be found in the Supplementary Figures 2 and 3. The averaged VPCS that matched the stVPCS the best was TT 3. Apart from TT 1, with rising TT, the difference between VPCS and stVPCS increased. The averaged VPCS with TT 1 lacked the characteristic delayed IL decrease but showed an immediate IL rise followed only by a shallow IL decline.
Analogously to the comparison with the full sequences, the averaged VPCS with TT 3 matched the stVPCS best after cutting. The difference between the VPCS of TT 1 and the best sequence is similar to the analysis without cutting (full VPCSs: Diff TT1 82, Diff TT3 31; with cutting: Diff TT1 60, Diff TT3 22). The sequences with TT 2 to 4 line up well with the stVPCS, while the sequences flatten out continuously with rising TT.
Intra-Subject Variability of TT
The TT and TTSD within a file were significantly correlated (ρ = 0.26, p < 0.005, refer to Figure 8).
Differences in Classification
In our analysis of 1,080 files, only 8 could additionally be classified when using a lower #TSRR. Furthermore, there were 43 and 55 files that changed classification due to #TSRR in the classification systems HRT0-2 and HRTA-C, respectively. The switches were both within HRT classes as well as between an HRT class and NR. Interestingly, a high number of these files switched from an HRT class when calculated with #TSRR 15 to NR with #TSRR 20, meaning that a higher #TSRR leads to more variability in the data.
The same can be seen for TT values, where a higher amount of values was NR with #TSRR 20. This can be explained by the majority of the files with not reliable TT values showing a very shallow tachogram in visual analysis. With no distinct IL increase, random fluctuations have a stronger influence on the location of the steepest slope, thus increasing the variability of TT. Furthermore, longer VPCSs lead to a higher number of possible TT values and therefore higher variability. This high variability combined with a lower number of VPCSs results in non-significant results in the reliability check and, thus, a higher number of files with NR TT.
The only HRT parameter with distinctly differing values is TT, which is to be expected with a higher #TSRR. This leads to the differences in classification being marginal, with less than 1% more classifiable files and 5-6% of files changing the resulting classes. However, in clinical settings, even small numbers of patients that cannot be classified or are classified differently based on methodological variances are unfavorable, especially if this could be avoided by uniformly adjusting one parameter.
stVPCS
The stVPCS received through our pipeline shows a distinct HRT pattern with the HRT classes of HRT0 and HRTA, which imply the least possible risk. The tachogram of our stVPCS is similar to tachograms showing characteristic HRT patterns in reviews (16,19,22). Although the databases used consist of files from subjects with severe diseases, with the filtering pipeline we were able to find a set of files without pathological abnormalities based on their sound HRT parameter values. The resulting stVPCS seemed to be a feasible approximation of a healthy HRT reaction that could be used as a template for the following analysis.
Random Fluctuation With High TTs
Apart from TT 1, the tachograms with low TTs showed a similar pattern to the stVPCS (check the Supplementary Data Sheet 2 for a discussion of VPCS with TT 1). With increasing TT the tachograms get more shallow meaning the reaction to the VPC becomes less distinct with increasing distance to the VPC. Especially with high TT values, the tachograms show no distinct pattern but apparently random fluctuation. This can also be seen in the mean HRT values grouped by TT. As expected, the VPCSs with a low TT show the best TS values. The same can be seen for TO. With high TT values, however, the medians for both TS and TO vary, which implies common HRV rather than HRT. Still, the number of VPCSs used to calculate the medians decreases with increasing TT, which may bias this observation.
Nevertheless, TTSD is lower with lower TT values meaning that in persons with low TT the fastest slope occurs in a narrower range. Again, a narrower range implies a steady underlying mechanism that causes turbulence within a distinct time interval while a high fluctuation of TT values within a person suggests randomness. Therefore, TTSD may possibly be used as a measurement for the reliability of TT as well as TS and nTS.
Our data suggest that only measurements with TT 2 to approximately 6 show distinct HRT. This can be seen in the DTW plots in combination with the median values of the HRT parameters grouped by TT. We recommend visually inspecting all measurements with TT 1 or higher than 7 to ensure the validity of the HRT parameters. Admittedly, manual visual inspection introduces unpreventable human error and, thus, variability to the analyses, which should be avoided wherever possible. Therefore, DTW may be a method to ensure a reliable reaction to the VPC by comparing the progress of the tachogram of a person to a standard tachogram established from a healthy peer group. Additionally, stVPCSs could be generated for different pathological conditions, which would enable using HRT not only for risk assessment but also as part of diagnostics. Possibly, DTW could replace the original HRT parameters, because it analyzes the tachogram as a whole instead of reducing it to selective parameters that can be biased as seen in this study with TS.
Using DTW for HRT analysis requires establishing the mentioned stVPCSs from a sufficiently large data set with fitting health conditions and with respect to factors influencing HRT like age or circadian rhythm (31). A similar approach has already been described (45,46). Under certain circumstances, this assessment is more robust regarding noise than TO and TS and needs fewer VPCSs to reach a high probability of detecting distinct HRT (46), which shows that comparing shape patterns as a whole, instead of reducing them to restricted aspects of the curve progression, offers promising risk assessment parameters.
Hypotheses
Our first hypothesis suggested a distinct difference in HRT values when calculated with different #TSRRs. Although we could show a difference in the number of assessable HRT values and HRT classes, the differences are not as distinct as we expected, with < 1% and 5-6% affected files, respectively. However, any variability in risk assessment is obstructive in clinical diagnostics, especially if the results obtained from the same person vary solely based on a difference in methodology. Consequently, the question remains which of the commonly used #TSRRs is optimal for the analysis.
Therefore, our second hypothesis tackled the question of whether high TT or TTSD values do not show actual HRT but random fluctuation. The tachograms of the files with different parameter values based on #TSRR show, that these differences are mainly based on variability due to different sets of VPCSs used for calculation instead of actual HRT at the end of the VPCSs. Furthermore, the comparison of the stVPCS with averaged VPCSs grouped by TT verifies that with increasing TT the response to the VPCs decreases considerably. The same result is implied by HRT values passing their respective thresholds to an increasing degree with rising TT.
Limitations
While some of the files included in the stVPCS derive from NSR2DB that is defined as subjects with "normal sinus rhythm, " the vast majority of files belong to the CRISDB, LTSTDB, LTAFDB, and CHF2DB that include ECGs of persons after myocardial infarction, with ST-segment anomalies, atrial fibrillation, and congestive heart failure, respectively. Therefore, it is probable that files were included from persons with diagnosed pathologies that are not visible in the used autonomic markers and may bias our results. It would be interesting to repeat the analysis with data from healthy subjects to examine a possible difference in the stVPCSs.
Due to the lack of meta-data in the used databases, we did not analyze any influence of medication on #TSRR. To our knowledge, a temporal change in the HRT response has not been studied so far. The focus of HRT research lies rather on the strength of the response than on its delay. The same goes for any response of the baroreflex: Baroreflex sensitivity has been shown to change with antihypertensive medication (47,48), but its temporal aspect has not been studied. Since the baroreflex response latency can be influenced through short, directed interventions such as tilt or atropine administration (49), it is possible that drugs influencing the sympathovagal balance, like beta-blockers, can change the response delay as well. However, the temporal scale of the difference does not exceed 1 s, which amounts to approximately two intervals (49), and is likely to be less with long-term medication and adapted baroreceptor sensitivity. Therefore, we expect that any medication influencing HRT does not influence our results, but this also should be investigated with appropriate data.
It is important to mention that our results allow conclusions about the behavior of the autonomic marker but not its predictive power. Since HRT is a risk marker for major adverse cardiac events, analysis without metadata about the outcome of the studied patients can only be the first step and must be verified with appropriate clinical data.
CONCLUSION
We recommend using #TSRR 15 for HRT analysis. The lower number of valid intervals results in a higher amount of VPCSs that can be used in the analysis as well as discarding of intervals that show random fluctuation instead of HRT. Therefore, it leads to more reliable data.
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
VB conceived the project, implemented the analysis pipeline and performed all analyses and literature research, created all figures and tables, and wrote the initial draft of the manuscript and revised it. CS provided technical advice in the implementation of the analysis pipeline. CS, GE, and AD reviewed the manuscript. AD supervised the project. All authors contributed to the article and approved the submitted version.
Tears of wine: new insights on an old phenomenon
Anyone who has enjoyed a glass of wine has undoubtedly noticed the regular pattern of liquid beads that fall along the inside of the glass, or ‘tears of wine.’ The phenomenon is the result of a flow against gravity along the liquid film on the glass, which is induced by an interfacial tension gradient. It is generally accepted that the interfacial tension gradient is due to a composition gradient resulting from the evaporation of ethanol. We re-examine the tears of wine phenomenon and investigate the importance of thermal effects, which previously have been ignored. Using a novel experiment and simple model we find that evaporative cooling contributes significantly to the flow responsible for wine tears, and that this phenomenon occurs primarily because of the thermodynamic behavior of ethanol-water mixtures. Also, the regular pattern of tear formation is identified as a well-known hydrodynamic instability.
classical hydrodynamics. Our analysis shows that evaporative cooling contributes significantly to the flow responsible for wine tears, and that this phenomenon occurs in wine and other spirits because of the thermodynamic behavior of ethanol-water mixtures. We also identify the origin of tear formation as a well-known hydrodynamic instability.
Hydrodynamic Model
In order to understand the phenomenon of wine tears, we begin with a simple fluid dynamics analysis applicable to a region of the film between falling tears. Consider a liquid film with thickness δ(z) on an impermeable, solid surface (glass) at an angle β with respect to gravity as shown in Fig. 2. The liquid (wine) is modeled as an ethanol-water mixture with ethanol mass fraction w ≈ 0.1 that behaves as a Newtonian fluid with constant density ρ and viscosity η. We invoke the quasi-steady state approximation, and take the film (neglecting curvature of the glass) to be infinitely wide in the y-direction. Hence, the velocity field has the form v_x = v_x(x, z), v_z = v_z(x, z) and is divergence free, ∂v_x/∂x + ∂v_z/∂z = 0. The film between the meniscus and contact line has height h ~ 10 mm and characteristic thickness δ0 ~ 30 μm. Since δ0/h ≪ 1, we assume the lubrication approximation [19,20] holds so that pressure varies only in the z-direction. If we further assume inertia can be neglected, the Navier-Stokes equations simplify to 0 = -p′ + η ∂²v_z/∂x² - ρg cos β (2), where the prime indicates a derivative along the film, (..)′ = d(..)/dz, and g is the gravitational acceleration. Taking the solid-liquid interface (x = 0) to be impermeable and assuming no-slip, we can write v_x = 0 (3a) and v_z = 0 (3b) at x = 0. Applying interfacial (jump) balances [21,22] for mass and momentum at the liquid-gas interface (x = δ) leads to the boundary conditions in eq. (5), where higher order terms (products of primed quantities) have been neglected. F_evap is the mass flux into the gas due to evaporation, γ is interfacial tension, and γ′ is the Marangoni stress. The second boundary condition in eq. (5) gives the capillary pressure due to curvature of the interface. In the absence of flow, eq. (2) can be combined with the third boundary condition in eq. (5) and integrated to obtain an expression for the film height at equilibrium, h_eq, defined by δ(h_eq) = 0. This well-known result is h_eq = √2 κ⁻¹ (1 - sin α)^(1/2), where κ⁻¹ = (γ/ρg)^(1/2) is the capillary length and α is the contact angle. For an ethanol-water mixture at room temperature (w = 0.1, 298 K) we find, using ρ = 973 kg/m³ and γ = 55.4 mN/m [24], κ⁻¹ ≈ 2.4 mm. We have measured the contact angle at equilibrium (see Methods) and obtain the value α = 14 ± 1°, which gives h_eq ≈ 3.0 mm. Note that this is roughly three times smaller than the film height observed for wine tears. Several previous studies [7][8][9] have addressed spreading phenomena, which lead to larger film heights, in evaporating liquid films. The motion of contact lines is a complex phenomenon [25]; for this analysis we consider the quasi-steady case where the contact line is effectively stationary.
Figure 1. Image of a tearing wine film showing regularly spaced tears falling from the ridge.
Here we focus on regions of the liquid film away from the meniscus and ridge, where the film is nearly flat so that the capillary pressure term can be neglected. Applying this condition, integration of eq. (2) with the boundary conditions in eqs. (3b) and (5a) leads to the velocity profile in eq. (6). Hence, the velocity along the liquid film is determined by the competition between gravitational and interfacial forces (a sketch of this velocity distribution is shown in Fig. 2). From eq. (6) we can obtain a rough estimate of the interfacial stress required to overcome gravity: γ′ > ρg δ cos β/2 ~ 100 mPa. Interfacial tension is a thermodynamic property that depends on temperature and composition (ethanol mass fraction): γ = γ(T, w). For small variations of temperature and composition, we can write γ′ as the sum of a thermal and a compositional contribution, eq. (7). Previous analyses of the tears of wine phenomenon have treated γ′ as a parameter and, as noted above, neglected the contribution of thermal effects [7][8][9][10][11][12][13][14] .
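The sentence above indicates that eq. (7) is a total-differential expansion of the interfacial tension along the film; presumably it reads

\[
\gamma' \;=\; \frac{\partial \gamma}{\partial T}\,T' \;+\; \frac{\partial \gamma}{\partial w}\,w',
\]

so that both the temperature gradient T′ and the composition gradient w′ feed into the Marangoni stress that must exceed ρg δ cos β/2 for a net upward flow. This reconstruction is an inference from the surrounding text rather than a quotation of the original equation.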
We now consider the balance equations for ethanol mass and for energy within the liquid film. To keep our analysis simple, we neglect Soret and Dufour effects, the enthalpy of mixing, and viscous dissipation 19 . The ethanol mass fraction w(x, z) is governed by eq. (8), where we have taken the mass diffusivity D to be constant. The temperature T(x, z) is governed by eq. (9), where ĉ_p is the specific heat capacity of the liquid, and λ is the thermal conductivity, which is taken to be constant. Note that in writing eqs. (8) and (9) we have neglected diffusive transport in the z-direction, which is consistent with the scaling used to invoke the lubrication approximation. At the entrance to the film, the temperature and concentration are uniform: w = w_0 and T = T_0. The velocity along the liquid film, eq. (6), is, through eq. (7), coupled to the mass and energy balances in eqs. (8) and (9).
Equations (8) and (9) each require two boundary conditions. Assuming the impermeable solid is a perfect insulator, we have the following boundary conditions at the solid-liquid interface: The jump balances for mass and momentum at the liquid-gas interface (x = δ) imply the excess mass (and momentum) density of the interface is zero. Since interfacial tension is a thermodynamic variable associated with the interface γ = γ(T, w), care must be taken in writing species mass and energy balances at the interface 22 . Here, for simplicity, we assume the excess ethanol mass and energy densities can be neglected. Hence, the jump balances for ethanol mass and energy at the interface can, assuming only ethanol evaporates, be written as follows: where ∆ĥ vap is the specific enthalpy difference between ethanol in vapor and liquid states. In writing the boundary conditions in eqs. (11), we have again neglected higher order terms.
We assume F_evap, the ethanol mass flux at x = δ, is given by the product of a mass transfer coefficient k_g and the difference between the gas phase concentrations at the liquid-gas interface and the bulk gas 19 . From equilibrium thermodynamics we know the gas phase concentration of ethanol is related to its concentration in the liquid phase. For ideal mixtures, the relation is linear with a proportionality factor given by the ratio of pure-component vapor pressure to total pressure: p_vap/p. Deviations from ideal behavior are taken into account by multiplying this ratio by a concentration-dependent activity coefficient a; we denote the resulting factor by ϕ = a p_vap/p. Hence, assuming the ethanol concentration in the air far from the interface is negligible, we can write F_evap in terms of k_g and ϕ. Figure 3 shows that ϕ, where a was determined using standard methods 26 , is a strong function of ethanol concentration w.
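One plausible reading of the evaporative-flux closure, consistent with the later use of ϕ ≃ 0.4 at w = 0.1 together with p_vap/p ≈ 0.08, is the product form

\[
F_{\mathrm{evap}} \;\simeq\; k_g\,\phi\,w, \qquad \phi \;=\; a\,\frac{p_{\mathrm{vap}}}{p},
\]

where a is the concentration-dependent activity coefficient. The precise form used in the paper may differ (for instance in how mole and mass fractions are converted), so this should be treated only as an assumed sketch of the closure.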
We are interested in regions of the film where there is a net upwards flow (in the z-direction), for example along the thick black arrows in Fig. 1. To proceed, it is convenient to introduce the average across the film thickness, 〈ψ〉 = (1/δ)∫_0^δ ψ dx, so that from eq. (6), substituting eq. (7), we obtain eq. (12), where we have used the approximations T(δ, z) ≈ 〈T〉 and w(δ, z) ≈ 〈w〉. A necessary condition for continuous tearing is that there is a net flow up the liquid film, or 〈v_z〉 > 0.
The integrated form of the mass balance eq. (1) for the liquid film, using the boundary conditions in eqs. (3a) and (4), takes the form of eq. (13). Similarly, using eqs. (4), (10a) and (11a), the ethanol mass balance for the liquid film eq. (8) can be written as eq. (14), where we have used eq. (13) and the approximation 〈w v_z〉 ≈ 〈w〉〈v_z〉. Equation (14) expresses the balance between convective mass transport along the film and the evaporative mass flux to the gas phase. Finally, using eqs. (4), (10b) and (11b), the energy balance for the liquid film eq. (9) takes the form of eq. (15). Equation (15) expresses the balance between convective energy transport along the film and the energy required for the evaporation of ethanol.
Equations (12)-(15) are a coupled system of ordinary differential equations that govern film thickness, average velocity, ethanol concentration and temperature within the film. These equations can be integrated from reference values δ 0 , 〈 v z 〉 0 , w 0 and T 0 .
Results
To examine the contribution of the temperature gradient T′ to the Marangoni stress, we have used infrared thermography to measure the temperature distribution in a tearing wine film. A typical infrared image taken from a video (see Supplementary Information) is presented in Fig. 4. This image shows two cooler regions with the shape of falling tears, between which there is a region with a temperature gradient in the z-direction. We interpret the region between the falling tears as the region where Marangoni flow occurs. Figure 5 shows the temperature profile along the red vertical line in Fig. 4. From this figure we find for the temperature gradient T′ ~ −100 K/m. To estimate the concentration gradient we combine eqs. (14) and (15), which leads to eq. (16). For the case just considered, the change in ethanol concentration along the film is less than one percent (w′h ~ 10⁻²). Based on this, we seek an approximate solution to eqs. (12)-(15) that is valid for small changes in w. First, we multiply eq. (12) by 〈v_z〉 and combine with eqs. (14) and (15). Setting δ → δ_0, 〈w〉 → w_0 and 〈v_z〉 → 〈v_z〉_0, we obtain a quadratic equation for 〈v_z〉_0, which has two (real) solutions. Taking the positive solution, we find eq. (17), where V is a characteristic velocity and C = k_g/(2ρηV²). Note that the derivatives of interfacial tension are evaluated at w_0. The expression in eq. (17) allows for an examination of a necessary condition for wine tears to be observed, 〈v_z〉_0 > 0, and its dependence on the thermodynamic properties of ethanol-water mixtures.
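A rough numerical illustration of this estimate (a sketch under stated assumptions: eliminating the evaporative flux between the species and energy balances is taken to give w′ ≈ ĉ_p(1 − w)T′/Δĥ_vap, and generic property values are assumed for a dilute ethanol-water mixture; neither the relation nor these values are quoted from the paper):

```python
# Estimate the composition gradient from the measured temperature gradient.
T_grad = -100.0   # K/m, temperature gradient read from the infrared profile (Fig. 5)
cp = 4.0e3        # J/(kg K), assumed specific heat of a w ~ 0.1 ethanol-water mixture
dh_vap = 9.0e5    # J/kg, assumed latent heat of ethanol vaporization
w = 0.1           # ethanol mass fraction
h = 10e-3         # m, film height

# Assumed relation obtained by eliminating the evaporative flux between eqs. (14) and (15)
w_grad = cp * (1.0 - w) * T_grad / dh_vap

print(f"w' ~ {w_grad:.2f} 1/m")                            # ~ -0.4 1/m
print(f"change across the film ~ {abs(w_grad) * h:.0e}")   # ~ 4e-3, i.e. below one percent
```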
It should be noted that ethanol-water mixtures display moderate deviations from ideal solution behavior. Setting p_vap/p ≈ 0.08, we see from Fig. 3 that for w = 0.1 we have ϕ ≃ 0.4, so that the activity coefficient a ≈ 5. Figure 3 also shows the concentration and temperature dependence of the interfacial tension 24 parameters ∂γ/∂w and ∂γ/∂T, the latter using the normalization implied by eq. (17). The mass transfer coefficient k_g is the only unknown parameter in the model. A common approach 19 to estimate mass transfer coefficients is using correlations in terms of a dimensionless group known as the Sherwood number: Sh = k_g h/(ρ_g D_g), where ρ_g and D_g are the density and (ethanol) mass diffusivity, respectively, for the gas phase. The Sherwood number indicates the relative importance of convective and diffusive mass transfer, and for essentially stagnant fluids it is reasonable to set Sh ~ 1. Using this value together with ρ_g ≈ 1 kg/m³ and the gas-phase diffusivity of ethanol, we obtain an estimate for the mass transfer coefficient k_g. We note that this value is consistent with measured evaporation rates in ethanol-water mixtures 14 . Figure 6 shows the dependence of 〈v_z〉_0/V as a function of ethanol mass fraction w_0 obtained from eq. (17). From this figure we see the velocity induced by a concentration gradient alone goes through a maximum, while the velocity induced by a temperature gradient alone increases monotonically with w_0.
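A back-of-the-envelope version of the Sherwood-number estimate (a sketch; the gas-phase diffusivity value below is our assumption, not a value quoted in the text):

```python
# Mass transfer coefficient from Sh = k_g * h / (rho_g * D_g) ~ 1
Sh = 1.0       # Sherwood number appropriate for an essentially stagnant gas
h = 10e-3      # m, film height used as the length scale
rho_g = 1.0    # kg/m^3, gas density (value given in the text)
D_g = 1.2e-5   # m^2/s, assumed diffusivity of ethanol vapor in air

k_g = Sh * rho_g * D_g / h
print(f"k_g ~ {k_g:.1e} kg/(m^2 s)")   # ~ 1e-3 kg/(m^2 s)
```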
The 〈w〉′ contribution to 〈v_z〉_0 scales roughly with ϕ(1 − w_0), and the corresponding curve in Fig. 6 goes through a maximum at the ethanol concentration of a typical wine. Similar calculations that include water evaporation differ by roughly 10% from the curves shown in Fig. 6. It is important to note that both the shapes and relative magnitudes of the curves in Fig. 6 are independent of the estimated model parameter k_g, and instead are determined by the simple physical model and thermodynamic properties of ethanol-water mixtures shown in Fig. 3.
We have also made infrared thermography measurements on cognac (w_0 ≈ 0.35), which are presented in Fig. 7. As for the wine film in Fig. 4, we see in Fig. 7 two cooler regions (falling tears, evident from the video in the Supplementary Information) surrounding a region with a temperature gradient. The temperature profile in Fig. 8 shows a region with nearly uniform temperature for which we presently do not have an explanation. From the profile in Fig. 8 we obtain a rough estimate of the average temperature gradient; we then estimate the concentration gradient using (16), which gives w′ ~ −0.6 1/m, and the corresponding Marangoni stress. Hence, for the case of a liquid with higher ethanol concentration, we find that the relative contribution of the temperature gradient to the Marangoni stress is approximately the same as that from the concentration gradient, and the combined contributions lead to a slightly smaller Marangoni stress than that for the lower ethanol content liquid. These observations are consistent with the results in Fig. 6. It is worth noting that both the value and concentration independence of w′ found in this study are consistent with measured concentration profiles in evaporating ethanol-water films 9 . We also note that previous experimental work, where thermal effects have been ignored, has been based on ethanol-water mixtures having higher ethanol concentrations w ≥ 0.5 9,14 .
We now consider the mechanisms responsible for the formation of wine tears. As shown in Fig. 1, tears are formed in the ridge near the top of the film. A simple explanation for the formation of the ridge [11][12][13][14] is that the flow rate induced by Marangoni stresses exceeds the rate at which the volume of the film increases by the motion of the contact line. This implies the contact angle α will be larger in an evaporating liquid film than at equilibrium. A force balance at a stationary contact line, in the absence of mass transfer and viscous stresses, leads to Young's equation: cos α = Δγ_fs/γ, where Δγ_fs is the difference in interfacial tensions between the fluid phases and the solid 21 . Since the liquid at the ridge has a larger interfacial tension (less ethanol), Young's equation, assuming Δγ_fs is constant, suggests an increase in α. We have also observed a gradual increase in the measured contact angle during ethanol evaporation. Based on these observations, we assume for the contact angle α ≃ 30° in the analysis that follows. Studies on the dynamics and stability of free-surface flows date back to the mid-19th century 27 and continue to be an active area of research 28,29 . As noted above, there has been interest in understanding the flow instabilities that are observed in films driven by Marangoni stresses. Much of this work has focused on the formation of regularly spaced ridges in the meniscus region that are parallel to the z-direction of Fig. 2. This instability appears to be driven by a competition between viscous, capillary and Marangoni stresses [11][12][13][14] . We have not observed this phenomenon, presumably because the spacing of the ridges decreases as the film becomes more vertical (smaller β) 9 .
It has been suggested 11 that the formation of tears from the ridge is the result of the well-known Rayleigh-Plateau instability 27 . The Rayleigh-Plateau instability, which describes the formation of droplets from a liquid jet, is based on the interplay between inertial and interfacial tension forces that result from axisymmetric disturbances to the surface of the liquid jet. Rayleigh found that a liquid jet is unstable to disturbances having a wavelength λ larger than the circumference of the cylinder, 2πR. The hypothesis that this mechanism is responsible for wine tears was not, however, quantitatively investigated 11 .
To investigate the instability leading to wine tear formation, we begin with a determination of the morphology of the ridge. A crude approximation is to treat the ridge as a cylinder with radius R (see Fig. 2). An estimate for R can be obtained from the width of the ridge W and contact angle: W = 2R sin α. An analysis of images (like those shown in Fig. 1) gives W ≈ 2 mm, so that R ≈ 2 mm. From these images we also estimate the average spacing between the falling tears to be λ ≈ 15 mm. The relative importance of viscous and interfacial forces in liquid film dynamics can be ascertained from the value of the Ohnesorge number Oh = η/√(ργR). Here, we have Oh ~ 10⁻³, so it is reasonable to assume viscous effects do not play a role in the instability mechanism. The fastest growing mode of the Rayleigh-Plateau instability corresponds to λ ≈ 9.02R 27 . Hence, the predicted wavelength for the Rayleigh-Plateau instability is λ ≈ 18 mm, which is in good agreement with the observed value. A somewhat more realistic morphology for the ridge is to treat it as a cylinder that has been cut along its length and is bound by two contact lines. The stability of liquid ridges having this geometry has been investigated; the most unstable mode corresponds to a disturbance having wavelength λ ≈ 4.2W 30,31 . For the system considered here, this gives λ ≈ 8 mm, which is roughly a factor of two smaller than the observed value, but still reasonable. Based on this analysis, we believe there is strong evidence that the regular pattern of wine tear formation is due to the Rayleigh-Plateau instability.
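These estimates can be reproduced in a few lines (a sketch; the liquid viscosity is an assumed value typical of wine, and the stability prefactors 9.02 and 4.2 are taken from the cited literature as quoted above):

```python
import math

W = 2e-3                          # m, measured ridge width
alpha = math.radians(30.0)        # contact angle assumed for the evaporating film
R = W / (2.0 * math.sin(alpha))   # ridge treated as a cylinder: W = 2 R sin(alpha)

rho, gamma = 973.0, 55.4e-3       # liquid density (kg/m^3) and interfacial tension (N/m)
eta = 1.5e-3                      # Pa s, assumed viscosity of wine

Oh = eta / math.sqrt(rho * gamma * R)   # Ohnesorge number
lam_cylinder = 9.02 * R                 # fastest-growing Rayleigh-Plateau mode of a full cylinder
lam_ridge = 4.2 * W                     # most unstable mode of a ridge bound by two contact lines

print(f"R ~ {R * 1e3:.1f} mm, Oh ~ {Oh:.0e}")              # R ~ 2 mm, Oh of order 1e-3
print(f"lambda (cylinder) ~ {lam_cylinder * 1e3:.0f} mm")  # ~ 18 mm vs ~ 15 mm observed
print(f"lambda (ridge)    ~ {lam_ridge * 1e3:.0f} mm")     # ~ 8 mm
```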
Discussion
The tears of wine phenomenon is the result of a delicate interplay between interfacial and bulk hydrodynamics. The evaporation of ethanol induces an interfacial (Marangoni) stress that in turn induces an observable bulk flow. A common misconception is that the Marangoni stress arises because of concentration gradients alone. We have shown, using a combination of experiments and modeling, that the Marangoni flow taking place in the tears of wine phenomenon is the result of both composition and temperature gradients. Infrared thermography measurements reveal the existence of temperature gradients of sufficient magnitude to induce a Marangoni stress comparable to that from concentration gradients.
The model developed here represents a simple description of the coupling of fluid flow and energy and mass transport in evaporating liquid films. In contrast to previous analyses of the phenomenon in which the interfacial stress was treated as a parameter, the model developed here is based on a coupled set of balance equations for mass, momentum and energy so that the interfacial stress is predicted. The model does not take into account more complex phenomena in regions of the film near the meniscus and contact line, and is only able to capture qualitative features of measured temperature profiles. In particular, an explanation for the non-monotonic dependence of temperature observed in some cases requires further investigation. The evaporation of water, which was neglected in this work, further complicates the phenomenon. The large latent heat of vaporization of water means that the thermal effect is enhanced, while at the same time water evaporation will decrease the concentration contribution to the Marangoni stress. In addition, we have neglected coupling of diffusive mass and energy fluxes (Soret and Dufour effects) and the enthalpy of mixing. Nevertheless, the model semi-quantitatively predicts the conditions necessary for the observation of the tears of wine and establishes the essential nature of thermal effects in this phenomenon. The dependence of the Marangoni stress on ethanol concentration is strongly influenced by the thermodynamic properties (interfacial tension and activity coefficient) of ethanol-water mixtures. Interestingly, the combination of these properties results in a maximum Marangoni stress at the ethanol concentration found in a typical wine.
A second interesting feature of the tears of wine phenomenon is the highly-regular pattern in which the tears form. Using a rather simple analysis based on a simplified morphology for the wine film, we have provided strong evidence that the pattern observed in wine tear formation is the result of the well-known Rayleigh-Plateau instability.
Methods
Materials. The red wine (California Pinot Noir) used in this study was 13% ethanol by volume, which, neglecting the volume change of mixing, corresponds to 10% ethanol by mass, and the cognac (Hennessy) contained 40% ethanol by volume, which corresponds to 35% ethanol by mass.
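The volume-to-mass conversion can be checked directly (a small sketch; the pure-component densities and the ideal-mixing assumption are ours):

```python
def abv_to_mass_fraction(abv, rho_ethanol=789.0, rho_water=1000.0):
    """Convert alcohol-by-volume to ethanol mass fraction, neglecting the volume change of mixing."""
    m_ethanol = abv * rho_ethanol
    m_water = (1.0 - abv) * rho_water
    return m_ethanol / (m_ethanol + m_water)

print(f"wine,   13% ABV: w ~ {abv_to_mass_fraction(0.13):.3f}")  # ~ 0.105, roughly 10% by mass
print(f"cognac, 40% ABV: w ~ {abv_to_mass_fraction(0.40):.3f}")  # ~ 0.345, roughly 35% by mass
```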
Procedures. The glass was cleaned by soaking in a solution of chromic acid and hydrogen peroxide followed by rinsing with deionized water. The image in Fig. 1 is taken from a movie made with a reflected light camera (Sony DCRSR64) with a modified lens to increase the depth of focus and to avoid reflections from the glass wall 4 . Infrared images in Figs 4 and 7 were obtained from a video recorded using an IR camera (FLIR A320) having a spatial resolution of 320 × 240 pixels and a sensitivity of 0.1 K, equipped with an 18 mm focal length lens. The infrared movies were made in a glass with a conical shape (β ≃ 45°) to facilitate imaging of the liquid film. The contact angle of wine on (borosilicate) glass was determined using reflected light differential interferometry by placing a 2-3 μl drop on a microscope slide. Details of the method can be found elsewhere 32,33 .
A continuation method for spatially discretized models with nonlocal interactions conserving size and shape of cells and lattices
In this paper, we introduce a continuation method for spatially discretized models, while conserving the size and shape of the cells and lattices. This proposed method is realized using shift operators and nonlocal operators of convolution type. Through this method and using the shift operator, the nonlinear spatially discretized model on uniform and nonuniform lattices can be systematically converted into a spatially continuous model; this renders both models point-wisely equivalent. Moreover, by the convolution with suitable kernels, we mollify the shift operator and approximate the spatially discretized models using nonlocal evolution equations, rendering them suitable for application in both experimental and mathematical analyses. We also demonstrate that this approximation is supported by singular limit analysis, and that the information of the lattice and cells is expressed in the shift and nonlocal operators. The continuous models designed using our method can successfully replicate the patterns corresponding to those of the original spatially discretized models obtained from the numerical simulations. Furthermore, from the observations of the isotropy of the Delta–Notch signaling system in a developing real fly brain, we propose a radially symmetric kernel for averaging the cell shape using our continuation method. We also apply our method for cell division and proliferation to spatially discretized models of the differentiation wave and describe the discrete models on the sphere surface. Finally, we demonstrate an application of our method in the linear stability analysis of the planar cell polarity model.
Introduction
The development of multicellular organisms is regulated by intercellular communication and signaling pathways of various types. These include diffusible proteins acting as ligands and cell membrane proteins communicating with the neighboring cells. In the last fifty years, approaches comprising mathematical models and numerical simulations have been extensively used to understand the mechanisms underlying the biological phenomena. It is a common practice to divide a region of interest either into square or hexagonal elements representing cells, as shown in Fig. 1; this allows for the discrete spatial independent variables to be used. We also assume that the unknown dependent variables of the model are uniform on the lattices. With these preconditions, we model the phenomena on the lattices mathematically. In this paper, we label the mathematical models with the discrete spatial independent variable as discrete models, and the ones with the continuous spatial independent variable as the continuous models. Modeling the phenomena on the divided lattices often demonstrates good reproducibility and presents good agreement with experimental results.
One of the examples in which the cellular interaction is conserved in various organisms is the Delta-Notch signaling. The intercellular communication in this signaling is based on the informational exchange between neighboring cells (Collier et al. 1996;Sato et al. 2013;Yasugi et al. 2008). The function of the Delta-Notch signaling is known as lateral inhibition. During neural development of the fly embryo, binding of the Delta ligand to the Notch receptor suppresses the expression of achaete-scute complex (AS-C) proneural genes. On the contrary, AS-C genes induce Delta expression. Consequently, signal-sending cells demonstrate a high level of the AS-C genes, while signal-receiving cells express low level of the AS-C genes. During embryogenesis, neuroepithelial cells (NEs) that express high Delta and AS-C differentiate into neural progenitor cells. In contrast, the surrounding cells express low levels of AS-C genes and differentiate into non-neuronal cells. In accordance with these interactions between the neighboring cells, the expression patterns of Delta and Notch activation show a salt-and-pepper like pattern that distinguish the neuronal cells from the nonneuronal ones, as shown in Fig. 2. Information regarding discreteness, such as the size and shape of each cell, affects the entire pattern in the developmental process,
therefore, modeling in the framework of the discrete model is compatible with the phenomenon described (Collier et al. 1996; Lehotzky and Zupanc 2019; Sato et al. 2016). A discrete model shows good reproducibility of the experimental results for the differentiation propagation in the developing fly brain (Sato et al. 2016; Tanaka et al. 2018). Using the described type of discrete model for the Delta-Notch interaction, the appearance of the salt-and-pepper pattern and the regulated differentiation propagation in the fly brain have been explained (Collier et al. 1996; Jörg et al. 2019; Sato et al. 2016; Tanaka et al. 2018). It is well known that the function of Delta-Notch signaling is diverse and that Notch activation shows several different patterns. For example, Notch activation oscillates during segmentation in vertebrates and progresses unidirectionally in fly optic lobe development (Kageyama et al. 2012).
Another good example of a biological system that is well suited to discrete modeling is planar cell polarity (PCP) (Adler 2002; Goodrich and Strutt 2011). The intercellular and membrane proteins in the cells of the fly wing interact with one another among neighboring cells and become localized asymmetrically. Owing to this asymmetric localization, the direction of the epithelial hairs in the fly wing is determined (Ayukawa et al. 2014). It has been reported that the discrete model considered in this paper can reproduce the biological experiments on PCP.
The analytical study of the discrete models was further conducted to specify the function of the intercellular interactions and the discreteness in the dynamics. The analytical results for the discrete type of reaction diffusion systems have been reported in Bates et al. (2001), Chow (2003). The results related to the traveling wave solutions in the system of the discrete models are reported in Guo et al. (2019), Hupkes and Sandstede (2010), Straatman and Hupkes (2019).
Although discrete models are useful in describing the abovementioned behavior and dynamics phenomenologically, their analysis is rather difficult in general, and the techniques available for analyzing discrete models are less developed than those for continuous models. For example, as discrete models usually comprise numerous unknown variables, it is usually difficult to compute the eigenvalues in higher-dimensional space. Thus, methods of analysis for partial differential equations are being adapted so that they are applicable to discrete models (Chow 2003). Moreover, discrete models are not well suited to describing regional expansion caused by cell division or to handling three-dimensional information. In order to overcome the difficulties mentioned, the limit of the cell and lattice size is often set to zero, and a differential operator is derived. There are some examples of such continuation. Discrete models of prion dynamics and coagulation-fragmentation processes were investigated mathematically through a continuation method using piece-wise constant functions (Crampin et al. 1999; Laurençot and Mischler 2002). In these papers, the continuous models were derived on lattices of sufficiently small length. By taking the limit of the lattice size, the convergence is rigorously shown, and integro-differential equations were derived.
However, taking the limit of the lattice size to zero may cause a problem, because the patterns caused by the spatially discretized structures, such as the lattice and the cell membrane, sometimes disappear in the continuous models. In this paper, we ask whether it is possible to convert a discrete model into a continuous model while retaining the size and shape of cells and lattices. In light of this question, we propose a novel continuation method for spatially discretized models that conserves the size and shape of cells and lattices. In our proposed method, we perform the continuation by introducing shift operators instead of deriving differential operators. Thus, nonlinear discrete models can be systematically converted into continuous models with spatially discretized structures. Moreover, by mollifying the shift operators with integral operators of convolution type with suitable kernels, we propose a nonlocal evolution equation that can approximate the solution of the spatially discrete model, which is convenient for application to biological experiments and mathematical analysis. The approximation of nonlinear discrete models by nonlocal evolution equations in the one-dimensional periodic region is assured by singular limit analysis. In the continuous models, the information on the size and shape of cells and lattices is reflected in the shift operators and the convolution kernels. Furthermore, we confirmed the isotropy of the Delta-Notch signaling system for irregularly shaped cells in the fly brain. Based on these biological results, we propose a radially symmetric kernel in the nonlocal evolution equation for discrete models and demonstrate the effectiveness of the kernel by replicating the spatially discretized patterns in numerical simulations. As a result of the description by a nonlocal evolution equation, we can model cell division and proliferation in the discrete model for the wave of differentiation and extend the model to the sphere surface. Moreover, we show that linear stability analysis can be performed by the continuation method.
This paper is organized as follows: In Sect. 2, we first introduce the concept of our continuation method by modifying the general discrete model. Our continuation method is characterized by the combination of the shift operator and the Friedrichs mollifiers as the convolution kernel based on the lattice shape. In Sect. 2.2 we state the result of the singular limit analysis of the discrete models and nonlocal evolution equations in the general form of the spatial interactions. In Sect. 2.3 we explain that our method can be extended to the discrete models on nonuniform lattice. In Sect. 2.5.3, we introduce the radially symmetric kernel for the averaged shape of the cells, based on the results of the real biological experiments for Delta-Notch signaling in the fly brain. In Sect. 3, we show the results of numerical simulations in biological applications of the continuation method: the proneural wave in the fly brain and planar cell polarity in the fly wing. Our results indicate that the continuation method using shift operators and integral operators can be applicable for diverse multicellular systems.
2 Continuation method with shift and convolution operator for discrete models
Scalar equation in one-dimensional space
In this section, we explain the concept of the continuation method, while retaining the shape and size of cells and lattices. First, we describe the continuation method applied on a typical discrete model containing the intercellular interaction terms and the reaction term. In this paper, we do not distinguish between the spatial and intercellular interactions. Suppose N cells of the uniform length l > 0 are packed in the one-dimensional space, then the following discrete model is considered: where u i = u i (t) denotes the concentration or density of some substances on the ith cell c i at time t > 0, f : R 3 → R is the function corresponding to the intercellular interactions, and g : R → R is the function for the reactions. Setting the one-dimensional space as we impose the periodic boundary condition u 0 (t) = u N (t), and u 1 (t) = u N +1 (t).
The linear intercellular interaction can be defined as: where a i , (i = −1, 0, 1) are constants. The typical examples of the function f are diffusion and lateral inhibition such as the Delta-Notch interaction given by where the denominator of f lat is the total number of neighboring cells referred from Collier et al. (1996), and the sign of the lateral inhibition f lat can be changed in the system. If the dynamics of u i is more influenced by the other cells than by the neighboring cells, f becomes the nonlocal interactions. As introduced in Doumic et al. (2009), Laurençot and Mischler (2002), we will utilize the piece-wise constant functions for our continuation method. For equation (1) with i = 1, . . . , N on each cell c i , we define the characteristic function as and also we define at position x ∈ T and at time t > 0. For the continuous method of the discrete model, we set the following assumption: For any N there exists a unique global solution u(x, t) ∈ C([0, T ], L 1 (T)) of (1).
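For concreteness regarding the interaction terms introduced above, the standard choices consistent with the descriptions of (3) and (4) (a discrete diffusion coupling, and a lateral-inhibition coupling averaged over the two neighbors) would read

\[
f_\Delta(u_{i-1}, u_i, u_{i+1}) = u_{i-1} - 2u_i + u_{i+1},
\qquad
f_{\mathrm{lat}}(u_{i-1}, u_i, u_{i+1}) = \frac{u_{i-1} + u_{i+1}}{2},
\]

with the sign of f_lat chosen according to whether the neighboring signal is activating or inhibiting. These particular expressions are assumptions used for illustration rather than quotations of the original equations (3) and (4).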
As in Proposition 2 and "Appendix B", the existence and uniqueness of the global solution of (1) is shown for specified functions f and g. We change the variable in the ith equation of (1) by multiplying the unknown function u_i by the characteristic function χ_{c_i}(x) and summing over i. Since the shifted sums Σ_i u_{i+j}(t)χ_{c_i}(x) for j = 0, ±1, ..., ±N can be computed in terms of shift operators, we obtain the continuous model (7). As the spatially independent variable is continuous, the discrete model (1) is successfully converted into a continuous model. Equation (7) is point-wisely equivalent to equation (1). Thus, if the initial conditions of equations (1) and (7) are the same, the solutions are equivalent, as described by the following remark: Remark 1 Using the initial data of the discrete model {u_i(0)}_{i=1}^N, and imposing the initial datum u_0(x) = Σ_{i=1}^N u_i(0)χ_{c_i}(x), the solution of the continuous model (7) is equivalent to that of the discrete model (1). Furthermore, to apply the continuous model to experiments and to analyze it conveniently, we approximate the shift operator using the convolution with a mollifier. We define the shift operator τ_l by (τ_l v)(x) = v(x + l). The shift operator is regarded as the convolution with the shifted Dirac Delta function δ_l := τ_l δ = δ(x + l), and the model (7) can be described accordingly. Here we suppose that the Dirac Delta function is periodic with period Nl, i.e., δ(x) = δ(x + Nl), and we define the convolution k * v with respect to x in T, where T can be replaced with a given region in this paper. Setting the Friedrichs mollifier ρ_ε with a small parameter 0 < ε ≪ 1 and a normalization constant C_0 > 0, we assume that ρ_ε is also periodic with period Nl. We use the same symbol ρ_ε for the mollifier in the higher-dimensional case.
Approximating the Dirac Delta function by the mollifier ρ_ε(x), we obtain equation (8), where the shifted mollifier is given by ρ_{ε,l} := ρ_ε(x + l), and we denote the unknown variable by u_ε(x, t) because the solution of this equation depends on ε. If the intercellular interaction f is linear, we derive the typical nonlocal evolution equation by collecting the convolution kernels; we put the kernel as K = a_{−1}ρ_{ε,−l} + a_1ρ_{ε,l}. Consequently, we have the nonlocal evolution equation (9). Nonlocal evolution equations of this type have been analyzed in numerous papers (Bates et al. 1997; Coville and Dupaigne 2007). Figure 3 shows the profile of the kernel K for f_Δ. Figure 4 shows the results of the numerical simulations for both continuous and discrete heat equations fed with the spatially discretized initial data.
As in Fig. 4a, it is observed that the solution of the equation with shift operator is not continuous until the solution attains the steady-state in the numerical simulation. On the contrary, as in Fig. 4b, it is observed that the solution of the equation with the mollifier becomes continuous before the solution attains the steady-state in the numerics.
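The construction can be illustrated with a minimal numerical sketch (our own illustration, not the authors' code; the discrete Laplacian coupling f_Δ = u_{i−1} − 2u_i + u_{i+1}, the parameter values, and the bump-function mollifier are assumptions). The discrete model on N cells of length l is compared with the nonlocal continuous model whose kernel is K = ρ_ε(· − l) + ρ_ε(· + l), evaluated by FFT-based periodic convolution:

```python
import numpy as np

N, l, eps, dt, steps = 50, 1.0, 0.3, 0.01, 2000
L = N * l                                # period of the one-dimensional torus T
M = 500                                  # grid points for the continuous model
x = np.linspace(0.0, L, M, endpoint=False)
dx = L / M

def mollifier(z, eps):
    """Periodic Friedrichs-type mollifier on the torus, normalized to unit mass."""
    y = (z + L / 2) % L - L / 2          # map to (-L/2, L/2]
    rho = np.zeros_like(y)
    inside = np.abs(y) < 0.999 * eps
    rho[inside] = np.exp(-1.0 / (1.0 - (y[inside] / eps) ** 2))
    return rho / (rho.sum() * dx)

# Kernel replacing the shift operators of f_delta: mollifiers shifted to +l and -l
K = mollifier(x - l, eps) + mollifier(x + l, eps)

u_disc = np.zeros(N); u_disc[N // 2] = 1.0     # discrete initial data
u_cont = np.repeat(u_disc, M // N)             # piece-wise constant continuous initial data

for _ in range(steps):
    # discrete model: u_i' = u_{i-1} - 2 u_i + u_{i+1}
    u_disc = u_disc + dt * (np.roll(u_disc, 1) - 2 * u_disc + np.roll(u_disc, -1))
    # continuous model: u_t = K * u - 2 u (periodic convolution evaluated via FFT)
    conv = np.real(np.fft.ifft(np.fft.fft(K) * np.fft.fft(u_cont))) * dx
    u_cont = u_cont + dt * (conv - 2 * u_cont)

# Compare cell averages of the continuous solution with the discrete solution
cell_avg = u_cont.reshape(N, -1).mean(axis=1)
print("max deviation between the two solutions:", np.max(np.abs(cell_avg - u_disc)))
```

With ε small compared to l, the cell averages of the continuous solution remain close to the discrete solution, in line with the error estimate stated in Theorem 1 below.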
Remark 2 If f and g are linear, and if u ε = u * ρ ε , u ε becomes the solution of (7). This is owing to the linearity of the convolution operator, which can be described as follows; using the convolution of the mollifier in the equation (7), we have We compute as follows thereby satisfying Eq. (7).
Furthermore, our proposed method is consistent with the conventional continuation method in which the limit of the cell or lattice size is set to zero, since we can derive the differential operator by taking the limit l → +0 after applying our continuation method.
Remark 3 If f is equal to f_Δ, the corresponding limit l → +0 can be computed explicitly. Even if the intercellular interaction is nonlocal, which means it is affected not only by the neighboring cells but also by the other cells, our continuation method is applicable to the discrete model in a similar way. A discrete model in which intercellular interactions are influenced by cells other than the neighboring cells is given as (P_D), where [·] is the Gauss symbol, and f : R^{2[(N−1)/2]+1} → R is a function corresponding to the interaction here. If f is linear, the function f is generally defined with constant coefficients a_j for j = −[(N−1)/2], ..., [(N−1)/2]. Following the calculation in (6), we derive the equivalent continuous model (P_S). By describing the nonlocal interactions using the convolution with the mollifier ρ_ε, the kernel is obtained, and thus the nonlocal evolution equation (P_ε), which can approximate the solution of (P_S), follows.
Singular limit analysis
In this subsection, we explain that the solution of (P_ε) is sufficiently close to that of (P_S) in L²(T) by singular limit analysis. As the interaction f of the form in (P_D) includes the intercellular interactions of the discrete models above, we deal with the equations (P_S) and (P_ε) in this analysis. We first assume that f is of the form (10). For the reaction term g, we assume that there exist positive constants g_0, g_2, g_4 and nonnegative constants g_1, g_3 such that suitable growth and sign conditions hold for u, v ∈ R. A typical example of g is g(u) = u(1 − u²), where g_0 = g_2 = g_4 = 1, g_1 = g_3 = 0 and p = 3. First, we calculate the fundamental solution of (P_S) without the reaction term g(u).
Proposition 1 The fundamental solution of (P_S) without the reaction term g(u) can be written explicitly as a superposition of N modes, where {q_k}_{k=1}^N are real constants determined by the initial data.
The proof is in "Appendix B". Since the equation (P_S) is equivalent to (P_D), and the matrix associated with f in the system (P_D) is circulant (cyclic), its eigenvalues and eigenvectors can be calculated explicitly. Next, we have the uniqueness and global existence of the solutions of (P_S) and (P_ε).
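Returning to the circulant structure just mentioned: the eigenvalues of a circulant interaction matrix are the discrete Fourier transform of its first row, which is how the explicit representation can be computed. A small sketch (the nearest-neighbor diffusion coupling used as the example is our assumption):

```python
import numpy as np

N = 8
# First row of the circulant matrix for the coupling u_i' = u_{i-1} - 2 u_i + u_{i+1}
c = np.zeros(N)
c[0], c[1], c[-1] = -2.0, 1.0, 1.0

# For a (symmetric) circulant matrix the eigenvalues are the DFT of the first row,
# and the eigenvectors are the discrete Fourier modes exp(2*pi*1j*j*k/N).
eigenvalues = np.fft.fft(c).real
print(np.sort(eigenvalues))   # equals -2 + 2*cos(2*pi*k/N), k = 0, ..., N-1
```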
Proposition 2 Assume that f is given by (10), and where C 1 is a positive constant.
The proofs are in "Appendix B". We note that each global solution of (P_S) and (P_ε) is in L²(T) because L^∞(T) ⊂ L²(T). Thus, we have global boundedness in L²(T) from the estimates (13) and (14). For the solutions of the models (P_S) and (P_ε), we have the following error estimate. Setting the error as the difference between the solutions of (P_S) and (P_ε), we have the following convergence result.
Theorem 1 Suppose the same assumptions as in Propositions 2 and 3. Let u(x, t) and u_ε(x, t) be solutions of (P_S) and (P_ε), respectively. Then the error between them satisfies an energy estimate with positive constants C_5 and C_6 independent of ε, and thus u_ε converges to u as ε → 0. The calculation of the energy estimate is given in "Appendix B". From this estimate, the solution of the continuous model (8) converges to that of (7) in L²(T) as ε tends to 0. This implies that the solution of the nonlocal evolution equation can approximate the solution of the discrete model.
A typical example of such an f appears in the model of the PCP (38). By using the inequality (15), the proof follows those of Proposition 2, Proposition 3, and Theorem 1.
Nonuniform lattice in one-dimensional space
In this subsection, we introduce that our continuation method is applicable to the discrete models on nonuniform lattices by adding some conditions as a remark. Labeling the ith cell as c i , (i = 1, . . . , N ) with the nonuniform length l i > 0, we suppose N cells are packed in the one-dimensional space. Let u i = u i (t) be the concentration or density of some substances on c i at time t > 0. Imposing the periodic boundary condition u 0 (t) = u N (t), and u 1 (t) = u N +1 (t) with l 0 = l N , and l 1 = l N +1 , we consider the following discrete model in this subsection: where the definitions of f and g are same as those in (1), and the initial data are given by For the length l i , we define the following functions: for any x ∈ c i . Using the characteristic function (5), we define the piecewise constant function for the shift as follows Changing the variable in the ith equation (16) by multiplying the unknown function u i by the characteristic function χ c i (x), and adding u i (t)χ c i (x) with respect to i = 1, · · · , N , we have As we can compute that we obtain If the initial datum is given by , the solution of continuous model (16) is equivalent to that of the discrete model (17).
For the mathematical analysis, the functions l(x) and r(x) can be rendered continuous by using mollifiers. Noting the regularity class to which l(x) and r(x) belong, we propose an approximation model to (17) as (18). Similarly to Sect. 2.1, we approximate the nonuniform shift operators by convolutions. We assume (A1) for the function u_ε in (18). Rewriting (18) in the convolution form as in (2), we obtain the nonlocal evolution equation (19), where the kernel is given in terms of the shifted mollifiers. With ε > 0 and η > 0 the kernel K(x, y) is differentiable. As introduced above, the discrete models on nonuniform lattices can be converted into continuous models. The nonlocal evolution equation (19) is expected to approximate the solutions of the original discrete models on nonuniform lattices. The error estimates, the analysis, and the applications of this proposed model are left for future work.
System in one-dimensional space
In the case of systems, we can perform the continuation method similarly to Sect. 2.1. We explain the method using the typical reaction diffusion system and the Delta-Notch signaling system, which are often used in mathematical modeling, in the following sub-subsections.
Reaction diffusion system
First, we explain the typical reaction diffusion system in one-dimensional space with periodic boundary conditions. Let u_i = u_i(t) and v_i = v_i(t) be the concentrations of the diffusive substances on the uniform cells or lattices c_i, (i = 1, . . . , N), respectively. The reaction diffusion system in the framework of the discrete model can be described as (20), where d_u, d_v > 0 are the diffusion coefficients, f_Δ is defined by (3), and g_1, g_2 : R² → R are the functions for the reactions in this sub-subsection. For this equation, setting the piece-wise constant variables at position x ∈ T and at time t > 0, we derive the reaction diffusion system obtained by our continuation method as (21). Similarly to the previous subsection, the equation above is point-wisely equivalent to equation (20). Indeed, if the initial conditions of (20) and (21) are the same, the solutions of (20) and (21) are equivalent. Approximating the shift operator by the mollifier, and describing (20) in the form of (9), we have the following nonlocal evolution equation, which is expected to approximate the solution of the original discrete model (20):
Delta-Notch interaction system
Second, we apply the continuation method to the general Delta-Notch interaction system. Let D_i = D_i(t) and N_i = N_i(t) be the expression of the Delta and Notch signals in the cell c_i, (i = 1, . . . , M), respectively. The simple description of the Delta-Notch signaling in the framework of the discrete model is given by the system (22) (Collier et al. 1996), where f is the function f_lat defined in (4) or a function depending on f_lat, g_1 and g_2 are the functions for the reactions in this sub-subsection, and we replace the notation N for the cell number with M ∈ N for clarity in this sub-subsection. Similarly to Sect. 2.1, changing the variables in Eq. (22) yields equation (23). Indeed, if the initial conditions of (22) and (23) are the same, the solutions of (22) and (23) are equivalent for any time t > 0. Regarding the shift operator as the convolution with the Dirac Delta function, we approximate it by the convolution with the mollifier.
If f is linear such as f lat , we have the following nonlocal evolution equation similarly to (9): where K = (ρ ε,−l +ρ ε,l )/2. Even if the number of the unknown variables is increased, our method is applicable to make the discrete model continuous.
Two-dimensional space
In this subsection we explain our continuation method for the discrete model in the two-dimensional case. As the continuation method for the scalar equation can be applied to systems similarly to the one-dimensional case, we first deal with the scalar discrete model in two-dimensional space. When considering the model in the two-dimensional case, the number of terms in the intercellular interaction is increased. Accordingly, the number of shift terms, or of convolutions with the mollifier, is increased in the continuation method. The procedure of the continuation method for the reaction term is the same as explained in the previous subsections. We consider a square region and impose the periodic boundary condition in this subsection.
Square lattice
We perform the continuation method for the discrete model in the square lattice. This mathematical model corresponds to the situation that square cells are packed in a plane in the development of multicellular organs. Dividing the square region into N 2 square parts of lattices, we label each lattice as c i, j , (i, j = 1, . . . , N ), and denote the horizontal and vertical length of a lattice by l x > 0 and l y > 0, respectively. Thus, the region is described by One divided region corresponds to one cell or lattice as shown in Fig. 1.
For the simplicity, we suppose that region is a regular square. Then the scalar discrete model with intercellular interaction can be described as follows: is denoted by the concentration or density of some substances on c i, j at time t > 0, f : R 5 → R and g : R → R are intercellular and reaction functions, respectively, in this sub-subsection. If f is a linear function, it can be generally written by The typical examples of f are diffusion and lateral inhibition as follows: where f Δ× is referred from Chow (2003). For this discrete model we prepare the following characteristic function at position (x, y) ∈ T 2 : For the Eq. (24) for i, j = 1, . . . , N , we change the variables similarly to Sect. 2.1 by using the characteristic function. Here setting the variable on T 2 as we have from same calculation as that on one-dimensional case. We put the specific calculation in "Appendix C". The discrete model is successfully converted into the continuous model. Similarly to the case in one dimension, approximating the shift operator by the convolution with the mollifier yields the nonlocal evolution equation: where we define the shift operator τ l,m as If f is linear, the description with the kernel is given as follows As mentioned above, our method enables us to derive the continuous model and nonlocal evolution equation for the original discrete model. The profile of the kernel for f Δ on the square lattice is shown in Fig. 5a.
Hexagonal lattice
In this sub-subsection, we explain the continuation method on the hexagonal lattice. Owing to the hexagonal lattice, the directions of the shift operators differ from those on the square lattice. Dividing the region into regular hexagons, we label each mesh as c_j, j = 1, . . . , N, as in Fig. 1b. In this sub-subsection we use the index j for the label of each cell instead of i. We write the neighboring cells around the cell c_j as c_{Λ_k^j}, k = 1, . . . , 6, i.e., Λ_k^j (k = 1, . . . , 6) is the index of the kth neighboring cell of the jth cell.
The typical discrete model on the hexagonal lattice can be described as follows: where f : R 7 → R and g : R → R are intercellular and reaction functions, respectively, here. The linear intercellular interaction f on the cell c i is generally given by Similar to the previous sub-subsection, the typical examples of f are diffusion and Delta-Notch interaction as follows: Utilizing the characteristic function χ c j (x, y), we define the variable at position (x, y) ∈ T 2 at time t > 0 as As the derivation is similar to those in previous sections, we put detailed calculations in "Appendix C". Changing the variable through the characteristic function as in the previous Sect. 2.5.1, we obtain the continuous model: We define the shift operator as Approximating the shift operator by the convolution with the mollifier, we can derive the kernel corresponding to the intercellular interaction on the hexagonal lattice. Figure 5b shows the profile of the kernel for the diffusion f Δ on the hexagonal lattice.
Isotropy of Delta-Notch signaling and radially symmetric kernel
In the previous sub-subsections we explained the continuation method for the uniform lattice with the uniform shape and size. However, the shape of the cells during development is not always uniform. Assuming the uniform lattice for the mathematical modeling for the phenomena might be artificial. Then we propose a kernel of our continuation method conserving the lattice size implicitly without assuming the uniform lattice based on the biological experiments. As explained in Sect. 1, the fly brain is often used for the study of neurogenesis mediated by the Delta-Notch signaling system. The shape of the NEs and neuroblasts (NBs), neural stem-like cells, in the fly brain looks various. Activation of Notch signaling is induced by binding of the Notch receptor with the Delta ligand expressed in adjacent cells at the cell surface. Therefore, activation of Notch signaling might be affected by the shape of the cell membrane. However, as various stochastic noise and other signaling pathway, other than Delta-Notch signaling are involved during development (Tanaka et al. 2018), we conjecture that the shape of the activated region of Notch may become isotropic and averaged. We asked how Notch signaling is activated when Delta is artificially expressed in a small number of cells. In this condition, the Notch activity was visualized by using the NRE-dVenus transgenic construct (Housden et al. 2012). As shown in Fig. 6, Notch signaling was activated in a group of cells immediately adjacent to the Delta-expressing cells through trans-activation forming a concentric distribution pattern. The inactivation of Notch signaling within the Delta expressing cells is most likely due to the effect of cis-inhibition (del Álamo et al. 2011;Sprinzak et al. 2010). This result suggests that the shape of cells do not affect the spatial activation pattern of Notch in vivo.
Based on these experimental results, we propose a radially symmetric shape for the kernel, given by (30). The profile of this kernel is shown in Fig. 7. The donut-like pattern of Notch activation is consistent with the concentric shape of the kernel used in (30). By using this shape of kernel, the nonlocal operator becomes radially symmetric. This radially symmetric kernel is also applicable to describing signaling systems mediated by cellular projections, such as those of pigment cells in the skin of fishes (Watanabe and Kondo)
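Since only the qualitative shape of (30) is described here, the following is an illustrative construction of such a radially symmetric, ring-shaped kernel, namely a mollifier-type bump concentrated at distance l from the origin (the explicit profile is our assumption), which averages over all neighbor directions instead of the six discrete directions of a hexagonal lattice:

```python
import numpy as np

def ring_kernel(l=1.0, eps=0.3, L=10.0, M=256):
    """Radially symmetric kernel concentrated on the circle r = l (illustrative profile)."""
    x = np.linspace(-L / 2, L / 2, M, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    r = np.sqrt(X ** 2 + Y ** 2)
    K = np.zeros_like(r)
    inside = np.abs(r - l) < 0.999 * eps
    K[inside] = np.exp(-1.0 / (1.0 - ((r[inside] - l) / eps) ** 2))
    dx = L / M
    return K / (K.sum() * dx * dx)          # normalize to unit mass

K = ring_kernel()
# The nonlocal term K * u can then be evaluated with a two-dimensional FFT,
# exactly as in the one-dimensional sketch above.
print(K.shape, K.sum() * (10.0 / 256) ** 2)  # (256, 256) and ~ 1.0
```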
Applications
In this section we apply our continuation method to some discrete models from previous studies, and perform numerical simulations to investigate how patterns are generated.
Continuous model of the proneural wave
The developing fly brain looks like a hemisphere. During early stages of development, undifferentiated NEs proliferate, and these NEs differentiate into NBs at later stages of development. The transition from NEs to NBs propagates from the medial to the lateral direction. Since the transition is visualized by the transient expression of L'sc, which is one of the AS-C complex members and acts as a proneural factor, the propagation of the differentiation in the fly brain is called the proneural wave (PW) (Yasugi et al. 2008).
The discrete model (31) for the PW is considered, where the calculation region is set as a two-dimensional rectangular region. From the assumption (A5), using the characteristic function, we change the variables and propose the continuous model (32), where the profile of the kernel K = K(x, y) is determined by the profile of the lattice. For simplicity of description, we impose the local term of the EGF in the max function of the fourth equation based on the assumption (A6). We perform numerical simulations to investigate whether the continuation method for the model (31) is effective or not. First, we perform the numerical simulations by using the kernel corresponding to the Delta-Notch interaction on the square lattice. Figure 9 shows the numerical results of (32) with the kernel corresponding to the square lattice (panels a and b show the state of the differentiation A in the continuous model). The uniform propagation of the differentiation corresponding to the PW is replicated with suitable parameters, as in Fig. 9a. As the parameter corresponding to the strength of activation of EGF, a_e, is decreased, the strength of the lateral inhibition by Delta-Notch becomes relatively larger. In this situation, the continuous model with the kernel reproduces the nonuniform propagation of the differentiation of AS-C corresponding to the salt-and-pepper pattern, as in Fig. 9b. In these numerics, the square salt-and-pepper pattern does not depend on the numerical mesh in the simulation code; each square region reflects the size and shape of one cell. We put the numerical results of the original discrete model for the PW reported in Sato et al. (2016), Tanaka et al. (2018) in "Appendix A". Second, the numerical results of the continuous model with the kernel corresponding to the Delta-Notch interaction on the hexagonal lattice are shown in Fig. 10. Similarly to Fig. 9, with a large value of the strength of activation of the EGF, a_e, the continuous model with the kernel corresponding to the hexagonal mesh replicates the mode of the PW. As the value of a_e decreases, the stripe propagation of the differentiation is reproduced. Furthermore, by decreasing the value of a_e further, the nonuniform propagation of the differentiation corresponding to the salt-and-pepper pattern is reproduced. We can observe that each differentiated region exhibits the hexagonal shape. Even though the mesh prepared in the numerical simulation code is square, we can replicate the hexagonal patterns in the continuous model. These numerical results suggest that we can directly introduce the information of the spatial discreteness into the continuous model, and that the solution of the continuous model with the suitable kernel can reproduce the solution of the discrete model.
Next, we perform the numerics with the radially symmetric kernel (30). (Fig. 10 shows the results of the numerical simulations for the state of the differentiation A in the continuous model (32) with the kernel corresponding to the hexagonal lattice, K(x, y) = Σ_{k=1}^{6}(τ_{Λ_k}ρ_ε), with the same parameters as in Fig. 9: (a) PW with a_e = 5.0, (b) stripe pattern with a_e = 2.1, (c) salt-and-pepper pattern with a_e = 1.0.) As shown in Fig. 11, the modes of uniform PW and stripe propagation are reproduced depending on the value of a_e. Moreover, as in Fig. 11c, the continuous model replicates the propagation of the salt-and-pepper pattern even with the radially symmetric kernel (30). As shown in Fig. 11c, the profile of the differentiated region is spotted and can be interpreted as indicating the shape of the averaged cells. Although we also use a square mesh in the numerical simulation code in these numerics, the continuous model with the radially symmetric kernel can reproduce the uniform and nonuniform propagations. The right side of Fig. 11 shows the section of the numerics of (b) at y = L_y/2 at t = 20.0. The blue curve corresponds to the profile of Delta, and we can observe that Delta is expressed at the wave front. These numerical results are consistent with the observation that Delta expression is localized to the cell membrane in the real fly brain. From the viewpoint of mathematical modeling, this is explained by the fact that the activation term from AS-C at the front, a_d A(A_0 − A), is imposed in the third equation of (32). We succeeded in reproducing the realistic pattern through our continuation method.
As mentioned above, the results of our numerics suggest that we can analyze the solutions observed in the discrete model within the framework of the continuous model equipped with the spatially discretized initial data. In the following subsections, we perform numerical simulations of the discrete model of the PW on growing domains and of extensions of the model to the sphere surface by using our continuation method, in order to investigate the realistic situation of the developing fly brain.
Modeling of cell division on the discrete model
Owing to our continuation method by the convolution with the kernel, we are able to model the cell division and proliferation in the discrete model. In this subsection, we explain this application by using the model of the PW.
During the process of the PW, nonuniform cell division occurs on the surface. The fly brain develops via an early NE expansion phase followed by a differentiation phase from NEs to NBs (Egger et al. 2007; Hofbauer and Campos-Ortega 1990). When we try to add the effect of cell division to the discrete model, it is often artificial, because we must decide the timing, direction, and number of cell divisions. However, we can introduce this effect naturally in our continuation method by expressing it as domain growth. We put the explanations of the basic idea of domain growth in "Appendix D". Using the method of domain growth, we add the effect of cell division to (31) as (33), where K(x) = ρ_ε(x − l) + ρ_ε(x + l), K̃ is the kernel with the changed variable of K, Γ_y is the derivative of a bijective function, and η is determined below. The details of K̃ and Γ_y are explained in "Appendix D". Since AS-C can be regarded as the level of differentiation, we suppose that the cell is an NE if the value of A(x, t) is close to 0, and that the cell is an NB if the value of A(x, t) is close to 1. To express the nonuniform cell division of the PW, η = η(y, t, A) is given by a monotone decreasing function with respect to A, because NEs divide on the surface of the fly brain. Here, we assume that a point satisfying A(t, x) ≥ A* does not divide on the surface. Now, we set η accordingly, where η_0 is a constant. Figure 12 shows the numerical results of (33) in the cases of the fixed domain and the nonuniform cell division, respectively. In the beginning of the numerical simulations, we observed similar patterns in both (a) and (b). However, NBs newly appear in the valleys between the regions of NBs in Fig. 12b as time passes. This numerical result can be explained from the viewpoint of mathematical modeling as follows. The differentiation of A at each point is inhibited by N in the max function of the fourth equation in (33). N in a cell is activated by D, which is activated at the wave fronts of the regions corresponding to the adjacent cells. Therefore, when the region corresponding to the NEs is close to the wave front, the differentiation is inhibited. Conversely, the farther from the wave front the region corresponding to the NEs is, the weaker the strength of the lateral inhibition of Notch becomes. As the EGF diffuses to the region, the differentiation of NBs occurs in the valleys between the regions corresponding to NBs. At present, we consider the continuous model of the PW in one-dimensional space. In the future, we will try to calculate in two-dimensional space and on the sphere surface in order to compare with the experiments. Furthermore, in Kawamori et al. (2011), it is reported that the wave front of the PW is dented in a clone of the fly brain due to fast NE division. Thus, we want to understand, from the viewpoint of the mathematical model, why the profile of the wave front is affected by the speed of NE division.
Description of discrete model on sphere surface
As another application of our continuation method to mathematical modeling, we explain how to describe the discrete model on the sphere surface. We show that the discrete model on the sphere surface can be handled by using the radially symmetric kernel (30).
Various pattern formations occur on spherical regions in the development of multicellular organisms. In the case of the PW investigated in the previous subsections, the fly brain has a hemisphere-like shape, and the PW sweeps across its surface. It is natural to construct the discrete model for the PW on the sphere surface, but mathematical studies of the PW have been restricted to the 2D plane owing to the technical difficulties caused by discreteness in numerical simulations on the sphere. Our continuation method can overcome these difficulties and enables us to treat the model on the sphere surface as a continuous model equation. In practice, by applying the continuation method with the radially symmetric kernel with radius r > 0 in Sect. 2.5.3, we can compute the continuous model (32) on the sphere surface by the spectral method. The basic idea of the spectral method on the sphere surface is explained in "Appendix E". Using the spectral method, we compute the model (35) of the PW on the sphere surface numerically, where the equations for D and A are the same as in (31), rS^2 is the sphere of radius r > 0, and K : [0, 2r] → R is the kernel. The Laplace-Beltrami operator Δ_{rS^2} on the sphere of radius r > 0 is the rescaling r^{-2} Δ_{S^2} of the operator on the unit sphere; its precise definition is given in "Appendix E". The convolution operator *_{rS^2} on the sphere is computed with respect to the standard measure dΩ_r on rS^2 and can be expressed through the convolution *_{S^2} on the unit sphere. According to this calculation, we can rewrite equation (35) on the unit sphere as (37), where k_in is a positive constant, J = J(x, t) reproduces the profile of the JAK/STAT signaling, which cancels the biological noise in the fly brain reported in Tanaka et al. (2018), and the notation of the unknown variables is based on (36). When we calculate equation (37) by the spectral method, spatial noise arises from the finite spherical harmonic expansion. Therefore, we need the effect of JAK/STAT to play the role of noise reduction (Tanaka et al. 2018). For simplicity, we assume that the value of J(x, t) is spatially uniform. Furthermore, since the spatial interactions in equation (37) are the diffusion term in the first equation and the convolution term in the second equation, we compute the evolution of E and N by the spectral method and the evolution of D and A by the Euler method. In the numerical simulation of Fig. 13, whose parameters are the same as those of Fig. 11, we obtain propagation of AS-C similar to the 2D-plane case when r = 10.0. We interpret this result as follows: when the radius r of the sphere is relatively large compared to the cell size, the curvature of the cell surface becomes small, so a cell on the sphere surface can be regarded as being in the same state as in the planar case. Therefore, we obtain numerical results similar to those of Fig. 11.
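To illustrate the spectral treatment of the convolution term on the sphere, the following sketch computes the Legendre (Funk-Hecke) multipliers of a radially symmetric kernel: convolving a field with such a kernel multiplies its degree-l spherical harmonic coefficients by these numbers, up to the normalization convention of the harmonics. The kernel profile, radius, and truncation degree below are illustrative placeholders, not the values used in the simulations above.

```python
import numpy as np
from scipy.special import eval_legendre

def legendre_multipliers(kernel, r=10.0, l_max=64, n_quad=400):
    """Funk-Hecke multipliers k_l of a radially symmetric kernel on a sphere of
    radius r: convolving a field with the kernel multiplies its degree-l
    spherical-harmonic coefficients by k_l (up to harmonic normalization)."""
    mu, w = np.polynomial.legendre.leggauss(n_quad)   # quadrature in mu = cos(angle)
    geodesic = r * np.arccos(mu)                      # geodesic distance on the sphere
    k_vals = kernel(geodesic)
    # k_l = 2*pi*r^2 * \int_{-1}^{1} K(r*arccos(mu)) P_l(mu) dmu
    return np.array([2.0 * np.pi * r**2 * np.sum(w * k_vals * eval_legendre(l, mu))
                     for l in range(l_max + 1)])

def bump_kernel(s, d=1.0, eps=0.2):
    """Illustrative mollifier-like bump centred at geodesic distance d (placeholder)."""
    out = np.zeros_like(s)
    inside = np.abs(s - d) < eps
    out[inside] = np.exp(-1.0 / (1.0 - ((s[inside] - d) / eps) ** 2))
    return out

print(legendre_multipliers(bump_kernel)[:5])
```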
We observe that the PW is accelerated as the wave travels from the equator toward the north pole, as in Fig. 14. We think that this arises from the diffusion of the EGF ligand: because the space becomes narrower as the wave front approaches the pole, the EGF ligand accumulates and induces faster NB differentiation. It is not clear whether the speed of PW progression is actually accelerated when the PW approaches the pole in vivo. This will be one of the interesting questions to be addressed in the future by performing live imaging of the PW and quantitatively measuring the speed of wave progression.

Fig. 13 Results of the numerical simulations for the state of differentiation A in the continuous model on the sphere surface (37). The parameters are the same as in Fig. 11, with r = 10.0 and k_in J ≡ 1.0×10^{-3}. a PW with a_e = 2.0. b Stripe pattern with a_e = 0.7. c Salt-and-pepper pattern with a_e = 0.4.

Fig. 14 The velocity of the PW at each position, when d_t = 0 and the other parameters are the same as in Fig. 11; φ represents the latitude.

Application to the model of planar cellular polarity

In Ayukawa et al. (2014), a discrete model for planar cellular polarity (in short, PCP) of epithelial hairs in the fly wing has been proposed by focusing on the interactions of transmembrane proteins, distal complexes, and proximal complexes. The intercellular proteins and the cytoplasmic components are asymmetrically localized by the intercellular interactions, and this asymmetric localization determines the direction of an epithelial hair in a cell. If the transmembrane receptor Frizzled (Fz) is localized on one side of a cell, the other transmembrane protein, Strabismus (Stbm), is localized on the opposite side of the cell membrane in the same cell. Each membrane protein interacts with the distal and proximal complexes in a cell, which leads to the polarized localization of Fz and Stbm in the neighboring cells: Fz and Stbm in neighboring cells are localized on the adjoining sides. Localization of these proteins between adjacent cells leads to the local alignment of PCP among a small group of cells.
A simple mathematical model succeeded in describing the mechanism of the PCP (Ayukawa et al. 2014). In that modeling, the region corresponding to the fly wing is divided into hexagonal lattices and each lattice is labeled c_i, i = 1, . . . , N. Denoting by θ_i = θ_i(t) the unknown variable for the direction of Fz in the ith cell c_i, the authors of Ayukawa et al. (2014) proposed the discrete model (38), in which the number of terms in the linear combination of sine functions depends on the spatial dimension and the arrangement of the lattice. We apply our continuation method to this discrete model for the PCP. After a suitable setting, changing the variables as in Sect. 2.5.2 yields the continuous model (39), where the shift operator is defined by (28). This model is equivalent to (38) if the initial data are equal. The form of the shift operator changes depending on the shape of the lattice. We performed numerical simulations with the discretized initial data on a square lattice.
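Since equation (38) is not reproduced above, the following sketch uses a generic sine-coupled square-lattice model of the same flavor, dθ_{i,j}/dt = Σ_{neighbors} sin(θ_nb − θ_{i,j}), as a stand-in; the coupling form, lattice, and parameters are assumptions for illustration only. Depending on the initial data, such systems relax either toward a uniform direction or toward swirl (defect) patterns, which is the behavior discussed next.

```python
import numpy as np

def step(theta, dt=0.05):
    """One explicit Euler step of dtheta_ij/dt = sum_nb sin(theta_nb - theta_ij)
    on a periodic square lattice (an assumed, illustrative coupling)."""
    coupling = np.zeros_like(theta)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        coupling += np.sin(np.roll(theta, shift, axis=axis) - theta)
    return theta + dt * coupling

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(64, 64))   # random initial hair directions
for _ in range(4000):
    theta = step(theta)

# Order parameter close to 1 means the hairs point in a common direction;
# smaller values indicate residual swirls (defects) in the pattern.
order = np.abs(np.exp(1j * theta).mean())
print(f"order parameter = {order:.3f}")
```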
As shown in Fig. 15, we observe that the continuous model (39) with the discretized initial data can replicate the patterns of the discrete model (38). In this simulation, the color corresponding to the direction of an epithelial hair gradually becomes uniform; this solution corresponds to all epithelial hairs growing in the same direction. On the other hand, as in Fig. 16, a steady-state solution corresponding to a swirl of the epithelial hairs is obtained as the number of cells is increased. The generation of this pole of θ is consistent with the report of Ayukawa et al. (2014).
In the discrete model, the number of unknown variables equals the number of cells. Accordingly, analytic calculations, for example the eigenvalue problem for the linear instability around an equilibrium solution, are sometimes hard to carry out. However, our method can reduce a discrete model with multiple components to a scalar continuous model, which simplifies the calculations. We perform the linear stability analysis of the model (38) in one-dimensional space. Suppose that the number of cells is N and that the cell length is l. Setting the region as T = [0, Nl], we impose the periodic boundary condition; the model of the PCP in one-dimensional space is then given by (40). For this interval T, we found one equilibrium solution θ*. Additionally, any constant solution θ*(x) = α, α ∈ [0, 2π], is also an equilibrium solution of (40). The linear stability analysis around the equilibrium solutions proceeds as follows. Letting the range of the linear operator be in R, setting the perturbation as θ(x, t) = θ* + ε(x, t), and substituting it into the model (40), we linearize the problem around the equilibrium solutions and obtain the eigenvalue problem (41), where φ = φ(x) is the eigenfunction associated with the eigenvalue λ. Plugging the nth term of the Fourier series expansion

φ_n = a_n exp(−(2nπi/(Nl)) x),  a_n := 1/(Nl),

into (41), where i is the imaginary unit, we obtain the eigenvalues

λ_n = 2( cos(2π/N) cos(2nπ/N) − 1 ).

The calculation of the eigenvalue of f_Δ in matrix form is also given in "Appendix B", and the results are consistent. From this calculation, if the number of cells N is greater than 3, the equilibrium solutions are linearly stable. By the same calculation, the constant solutions θ* are also shown to be linearly stable. Even in the two-dimensional case, the above method enables us to carry out the linear stability analysis once the equilibrium solutions are known.
Discussion
In this paper, we proposed a continuation method for discrete models, using shift and convolution operators while conserving the size and shape of cells and lattices. The proposed method enables the systematic conversion of nonlinear discrete models into continuous models while retaining the discreteness information. Because the continuous model obtained by our method with the shift operator is pointwise equivalent to the original discrete model, the solutions are equal if the initial data are the same. The framework of the continuous model provides several advantages, as shown by the analysis results. As in Sect. 3.4, we reproduced the pattern for the PCP of epithelial hairs corresponding to that of the discrete model. Furthermore, we constructed the equilibrium solutions and performed the linear stability analysis in the continuous PCP model using the Fourier series expansion. In Sect. 2.3, we showed that our continuation method can be applied to discrete models on nonuniform lattices. Although it is technically difficult to express dynamics on nonuniform lattices within the framework of a discrete model, the proposed method enables us to treat spatial nonuniformity mathematically in the continuous models. In the future, we will extend our work to the analysis and application of the continuous models derived from discrete models on nonuniform lattices.
As a next step, we reduced the continuous model with the shift operators to a nonlocal evolution equation, using the approximation of the shift operator by convolution with a mollifier. We also conducted the singular limit analysis of the discrete model and the nonlocal evolution equation on a one-dimensional interval with periodic boundary conditions, showing that the solutions are sufficiently close in the L^2(T) norm. This suggests that nonlocal evolution equations with suitable kernels are capable of approximating the solutions of discrete models if the initial data are the same. Using the nonlocal evolution equation with mollifier kernels, we succeeded in replicating the pattern observed in the original discrete PW model. When the intercellular interaction was linear, the profile of the kernel was determined by the lattice shape, as shown in (26) and (29). Using these kernels in the continuous model for the PW, we reproduced the square and hexagonal shapes of the salt-and-pepper patterns.
Furthermore, we experimentally confirmed the isotropy of the Delta-Notch signaling system in the real fly brain. Based on this biological experiment, we proposed a radially symmetric kernel for a domain composed of averaged cell shapes. Even with the radially symmetric kernel for the Delta-Notch signaling interaction, we could reproduce the various propagation patterns of the PW. The radially symmetric kernel can also be applied to discrete models on nonuniform lattices if the molecular and cellular system is not affected by the shape of cells and lattices, as explained in Sect. 2.5.3. For application to biological experiments, using a kernel with a small width, such as the Friedrichs mollifier, yields results that are more compatible with experiments than the combination of shift operators; however, the shift operator can be more convenient for analysis. In Ei et al. (submitted), by arranging the Dirac delta function radially as the kernel in the continuation method, the reduction of the continuous model for the PW to a one- or two-variable system and its numerical simulations are addressed. Nonlocal evolution equations with certain kernels are sometimes difficult to analyze. However, the nonlocal evolution equation provides a unifying framework for mathematical modeling and analysis, as it is capable of including partial differential equations, such as reaction-diffusion systems, as well as discrete models. Moreover, various analytical and numerical methods for partial differential equations are applicable to nonlocal evolution equations, as explained in Sects. 3.2, 3.3 and 3.4. In the future, we also intend to extend our work to other domains and to higher-dimensional singular limit analysis.
In Sects. 3.2, 3.3 and 3.4, we applied the continuous model to the PW progression and the PCP formation. We demonstrated that the continuous model can easily include the effect of NE cell proliferation and can extend the simulation results from the 2D plane to the 3D spherical surface. In fly brain development, the final numbers of NBs and neurons depend on NE proliferation. Biological experiments have shown that NE expansion is regulated by several signaling pathways. The PI3K/Akt/TOR pathway promotes NE proliferation in a diet-dependent manner in the early stages of development (Franco and Carmena 2019; Lanet et al. 2013). The Hippo pathway inhibits the overgrowth of NEs, and inactivation of Hippo signaling inhibits NB differentiation (Kawamori et al. 2011; Reddy et al. 2010; Richter et al. 2011). By combining biological experiments on these signaling pathways with numerical calculations, it will be possible to understand the in vivo situation of the PW progression in more detail. In this paper, we presented two biological examples to which our numerical method is applicable. We emphasize that the numerical method is also useful for other biological systems, because our method is based on fundamental and conserved intercellular interactions. The continuation method and numerical calculation with continuous models will facilitate our understanding of a wide variety of biological processes, both quantitatively and qualitatively.

Figure caption: The propagation of the salt-and-pepper pattern and the stripe pattern are observed in the numerical simulations; the profile of A_{i,j} is uniform in each cell c_{i,j}.
Energy estimate
We estimate the error between the solutions of (P_S) and (P_ε) with (11) in the singular limit analysis with the linear function f given by (10). We assume that the initial conditions of (P_S) and (P_ε) are the same. First, we compute the fundamental solution of (P_S) without g(u). As equation (P_S) and equation (P_D) are equivalent, we solve equation (P_D). The system (P_D) is written in vector form as (42), where V = (u_1(t), . . . , u_N(t)), F : R^N → R^N, and the coefficient matrix A is cyclic (with the corresponding modification if N is even).
Since the matrix A is cyclic, we obtain the eigenvalues as in (12). The associated eigenvector V_k, whose components {v_n}_{n=1}^N correspond to the eigenvalue λ_k, is given by v_n = ω^{nk} (k = 1, . . . , N). Thus, the fundamental solution of (42) is obtained from these eigenpairs, and therefore the exact solution follows. Setting a vector Ṽ_k by {ṽ_n} = ω^{−nk} and λ̃_k = Σ_{j=−[(N−1)/2]}^{[(N−1)/2]} a_j ω^{−jk}, we find that AṼ_k = λ̃_k Ṽ_k. Thus, if a_j = a_{−j} (j = 1, . . . , [(N−1)/2]), then λ_k = λ̃_k. From this calculation, setting a vector W_k by {w_n} = cos(2πnk/N) + sin(2πnk/N), we have AW_k = λ_k W_k. Thus, the fundamental solution of (42) is expressed in terms of the matrix (W_1, W_2, . . . , W_N)^t, the transpose of the matrix (W_1, W_2, . . . , W_N). Next, we show the existence of global solutions of (P_S) and (P_ε) under the above assumptions, respectively.
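The eigenstructure used above can be checked numerically: for a cyclic (circulant) matrix built from coefficients a_j, the eigenvalues are the discrete Fourier transform of its defining vector, and the eigenvectors are the Fourier modes ω^{nk}. A minimal sketch with illustrative coefficients (not the model's actual a_j):

```python
import numpy as np
from scipy.linalg import circulant

# Illustrative symmetric coefficients (a_j = a_{-j}); placeholders, not the paper's values.
N = 8
first_col = np.zeros(N)
first_col[0], first_col[1], first_col[-1] = -2.0, 1.0, 1.0   # e.g. a_0 = -2, a_{+-1} = 1

A = circulant(first_col)
eig_numeric = np.sort(np.linalg.eigvals(A).real)
# Eigenvalues of a circulant matrix equal the DFT of its defining vector.
eig_fourier = np.sort(np.fft.fft(first_col).real)
print(np.allclose(eig_numeric, eig_fourier))   # True
```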
Proof of Proposition 2
The existence and uniqueness of the mild solution of (P_S) in C([0, T], L^∞(T)) is guaranteed by the fixed point theorem. We show the global existence in L^∞(T) by a maximum-principle argument. For a contradiction, we assume that there exists a positive finite constant T > 0 such that

lim sup_{τ↗T} max_{0≤t≤τ} ||u(·, t)||_{L^∞(T)} = ∞.

Then we take a sequence {T_n}_{n∈N} with T_n ↗ T as n → ∞ such that the corresponding maxima R_n := max_{0≤t≤T_n} ||u(·, t)||_{L^∞(T)} diverge as n → ∞. Indeed, for any R there exists a positive constant n_0 ∈ N such that R_n > R for all n ≥ n_0, and for any 0 < t < T_n there exists j_n ∈ N such that ||u(·, t)||_{L^∞(T)} = |u_{j_n}(t)|. We define the points (x_n, t_n) ∈ T × (0, T_n) by |u(x_n, t_n)| = |u_{j_n}(t_n)| = R_n. As R_n → ∞, we can choose n sufficiently large; multiplying the principal equation of (P_S) by u and evaluating at (x_n, t_n), the terms Σ_j a_j u(x_n + jl, t_n) u(x_n, t_n) + g(u(x_n, t_n)) u(x_n, t_n) are estimated from (A2) by considering the degree of the polynomial in R_n. This yields a contradiction.
Similar to the proof of Proposition 2, we show the global existence for equation (P_ε).
Proof of Proposition 3
The existence and uniqueness of the mild solution of (P_ε) in C([0, T], L^∞(T)) is guaranteed by the fixed point theorem. By the same argument as in the proof of Proposition 2, we obtain the L^∞(T) estimate for the solution of (P_ε). For a contradiction, we assume that there exists a positive finite constant T > 0 such that the L^∞(T) norm of u_ε blows up as τ ↗ T. Then we take a sequence {T_n}_{n∈N} with T_n ↗ T as n → ∞ such that the corresponding maxima R_n diverge as n → ∞. Indeed, for any R there exists a positive constant n_0 ∈ N such that R_n > R for all n ≥ n_0, and for any 0 < t < T_n we define the points (x_n, t_n) ∈ T × (0, T_n) by |u_ε(x_n, t_n)| = R_n. If a candidate maximum point lies at a point of discontinuity, we take the larger of the left-sided and right-sided limits there and define (x_n, t_n) accordingly. As R_n → ∞, we can choose n sufficiently large. Moreover, for any {a_j} and positive constants p, g_0, g_1 and g_2, there exists a sufficiently large constant r such that

||K||_{L^1(T)} r^2 + g(r) r ≤ ( Σ_{j=−(N−1)/2}^{(N−1)/2} |a_j| + g_2 ) |r|^2 − g_0 |r|^{p+1} + g_1 |r|^3 < 0,

from (A2) and considering the degree of the polynomial in r. Thus, there exists a positive constant n_1 ∈ N such that, for n ≥ n_1, the same estimate holds with r replaced by u_ε(x_n, t_n). This yields a contradiction. Now we show the singular limit analysis between the solutions of (P_S) and (P_ε).
Proof (Proof of Theorem 1) Taking the difference U_ε between the solutions of (P_S) and (P_ε), multiplying the resulting equation by U_ε, and integrating with respect to x, we obtain from (A3) a differential inequality for ||U_ε||_{L^2(T)}^2, where θ ∈ (0, 1) and C_7 = g_3 sup_{t>0} ||u + θ u_ε||_{L^∞(T)} + g_4.
Utilizing the classical Gronwall lemma then yields the estimate asserted in Theorem 1.
Derivation on square and hexagonal lattices
In this appendix we give the detailed calculations for the derivation of the continuous models on square and hexagonal lattices. For (25), we can calculate

Σ_{i,j=1}^{N} u_{i−p, j−q} χ_{c_{i,j}}(x, y) = u(x − p l_x, y − q l_y, t),   (p, q ∈ {1, . . . , N}).
In the case of the hexagonal lattice, the derivation is as follows. For simplicity of description, we introduce the complex variable z = x + yi ∈ C and identify the two-dimensional Euclidean space R^2 with the complex plane C. Then we compute

Σ_{j=1}^{N} u_{Λ_j^k}(t) χ_{c_j}(x, y) = u( z + l e^{i(π/2 − (k−1)π/3)}, t )
 = u( x + l cos(π/2 − π(k−1)/3), y + l sin(π/2 − π(k−1)/3), t ).
Using the shift operator (28) and approximating the shift operator by the convolution with the mollifier, we can derive the kernel corresponding to the intercellular interaction on the hexagonal lattice.
Reaction, diffusion and nonlocal interaction on growing domain
We explain the notion of a mathematical model on a growing domain by using a reaction-diffusion equation with nonlocal interactions. Based on previous reports (Crampin et al. 1999, 2002), a general scalar reaction-diffusion equation with a convolution term on a growing domain in one-dimensional space is given by

∂u/∂t + ∂(a u)/∂x = Δu + K * u + f(u)   in (0, L(t)) × {t > 0},

where u = u(x, t) is the unknown function, f : R → R is a nonlinear function, and a = a(x, t) is the velocity field of the flow, satisfying

dx/dt = a(x, t),   x ∈ (0, L(t)).
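As a concrete illustration of this formulation, the sketch below integrates a scalar reaction-diffusion equation on a uniformly growing interval by mapping it to the fixed coordinate ξ = x/L(t) ∈ (0, 1), which turns the advection by the flow into a rescaled diffusion coefficient D/L(t)^2 plus a dilution term −(L'(t)/L(t)) u. The growth law, reaction term, and parameters are illustrative assumptions, and the nonlocal convolution term is omitted for brevity.

```python
import numpy as np

# Uniform exponential growth L(t) = L0 * exp(rho * t) (an assumed growth law).
L0, rho, D = 10.0, 0.01, 1.0
f = lambda u: u * (1.0 - u)          # illustrative logistic reaction term

n, dt, steps = 200, 1e-3, 20000
xi = np.linspace(0.0, 1.0, n)        # fixed computational coordinate xi = x / L(t)
dxi = xi[1] - xi[0]
u = np.exp(-100.0 * (xi - 0.5) ** 2) # illustrative initial bump

t = 0.0
for _ in range(steps):
    L = L0 * np.exp(rho * t)
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dxi**2
    lap[0] = 2.0 * (u[1] - u[0]) / dxi**2        # zero-flux boundaries
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dxi**2
    # u_t = D/L^2 * u_xixi + f(u) - (L'/L) u, with L'/L = rho for exponential growth
    u = u + dt * (D / L**2 * lap + f(u) - rho * u)
    t += dt

L_final = L0 * np.exp(rho * t)
print(f"final domain length = {L_final:.2f}, total mass ~ {u.sum() * dxi * L_final:.3f}")
```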
The Economic Impact of a Pilot Digital Day-Case Pathway for Knee Arthroplasty in a U.K. Setting
Background: Knee replacements are an increasingly common procedure in the U.K. National Health Service (NHS). Importantly, the pathway for such procedures represents a prime opportunity to leverage digital technology, modernize and streamline the approach to care, and free up resources. Methods: In this 21-patient pilot study, we assessed the impact of implementing a digital day-case pathway for knee replacement surgery at the Calderdale and Huddersfield NHS Foundation Trust. Results: Fourteen (67%) of the 21 eligible patients were treated as day cases, with an average length of stay of 8.8 hours. The pilot data were utilized to model the potential impact of implementing a digital day-case program more widely across the trust. This model showed increased efficiency over the entire episode of care, with reductions in physiotherapy appointments, preoperative visits, hospital days, and face-to-face consultations. Not only would these improvements free up capacity, but they would also result in an estimated saving of £240,540 to the trust while reducing the CO2 footprint of knee replacements by 119,381 kg of CO2. A sensitivity analysis revealed that, even with substantial variation of several key variables within the pathway, a trust-wide digital day-case program would still be a cost-saving measure. Conclusions: Overall, the present study supports the growing notion that digital technology can facilitate the transformation of care pathways, resulting in greater efficiency and financial savings for health-care providers while reducing the time patients spend in the hospital. Level of Evidence: Therapeutic Level II. See Instructions for Authors for a complete description of levels of evidence.
Knee arthroplasty is a common procedure that is still growing in prevalence. Following the guidance of Getting It Right First Time 1 and as a result of the COVID-19 pandemic and the budgetary constraints on the U.K. National Health Service (NHS), there has been growing interest in reducing hospital length of stay (LOS) for patients undergoing knee arthroplasty. There is a wealth of international evidence demonstrating that the use of outpatient or "fast-tracked" recovery pathways is possible following knee arthroplasty without increasing the rates of readmission or complications [2][3][4]. Furthermore, such pathways have been shown to afford savings of up to $6,800 per patient 5.
Advancements in digital technology, alongside the generally increased uptake of smart devices across older adults, have led to a substantial increase in the use of technology for managing chronic diseases 6 . Indeed, the digitalization of care underpins the NHS Long-Term Plan, with knee replacement pathways providing an example where digital technology could reduce LOS and increase at-home rehabilitation-the latter of which has also been demonstrated to be possible without clinical cost to the patient 7 .
In the present study, we described the impact of introducing a digital day-case pathway for knee arthroplasty in a U.K. setting. We report on the main alterations to the clinical process and on the potential financial impact of more widespread adoption of such a pathway, as modeled with use of data from the initial pilot program.
Patients were included if they were able to provide informed consent, were ≥18 years old, required knee arthroplasty, had access to either a smartphone or tablet, and had access to Wi-Fi in the home. Patients were excluded if they were unable for any reason to give informed consent, were unwilling or mentally/physically unable to adhere to study protocols, were undergoing any surgical procedure other than knee arthroplasty, had an American Society of Anesthesiologists (ASA) grade of ≥3, or had any other factor or comorbidity that would make them unsuitable for discharge within 23 hours (Table I). If at any point during the day-case pathway any member of the clinical team felt that the patient was no longer suitable for the day-case approach, the patient was excluded from the study.
Standard Care Pathway
In the standard care pathway at the trust, a patient's suitability for knee arthroplasty would be assessed in an outpatient setting. Standard information regarding help with weight loss and stopping smoking, as well as details regarding what to expect from the procedure, are provided to the patient at this outpatient appointment. During the preadmission period, patient pre-assessment clinics and "joint school" are undertaken, with the latter being a training day during which patients are given preoperative information and "prehabilitation" exercise regimes. Thus, patients require a total of 3 separate preoperative visits to the hospital: 1 for outpatient assessment, 1 for the preassessment clinics, and 1 for joint school.
On the day of surgical procedure, most patients in the trust would receive a spinal anesthetic. Various anesthetic options are available for use, but most patients receive 0.5% heavy bupivacaine with or without intrathecal diamorphine. In certain circumstances, sedation, a peripheral nerve block, and/or general anesthesia are required.
Postoperatively, patients are transferred to a ward in which pain management and fluids are provided. Patients are expected to extend and flex their knee and to practice leg raising by the evening of postoperative day 0. Patients are prescribed blood thinners and pain medication, as necessary, for use beginning on the first postoperative day. In addition, patient blood tests and radiographic examinations are performed. The patient is encouraged to sit in a chair and walk to the bathroom with support, with a member of the therapy team assessing whether any equipment and/or support would be required post-discharge. Between postoperative days 2 and 4, analgesia is administered according to patient-reported pain scores. The patient is encouraged to stand, and physiotherapy is continued, including stair use and functional assessments of mobility. The patient is then assessed for discharge, provided with home care instructions and TED (thrombo-embolus deterrent) stockings, and a follow-up orthopaedic clinic appointment is scheduled. Finally, once discharged, patients undergo community physiotherapy, averaging 6 appointments per patient. Both the standard and digital day-case pathways are summarized in Figure 1 and Table II.
Digital Day-Case Pathway
During the initial outpatient visit, patients were advised regarding the digital day-case pathway and assessed against the study inclusion and exclusion criteria. If the patient was deemed suitable for inclusion, they were booked into the appropriate theater list and provided with the patient preoperative assessment smartphone application. We introduced a one-stop pre-assessment clinic in which the patient met with the physiotherapy and occupational-therapy teams to discuss their home circumstances, in order to ensure they were still eligible for the digital day-case pathway. In addition, at the one-stop clinic, the patient underwent pre-assessment by the anesthetist, advanced clinical practitioner, therapy teams, and pharmacist. Preoperative and postoperative care were also discussed with the patient. Finally, the one-stop clinic included joint school, in which the care team provided the patient with the remote-monitoring BPMpathway device (270 Vision), consisting of a wearable sensor (certified European Conformity class 1) to be worn around the lower leg. Full training on the device was provided, and patients were advised to complete a series of prehabilitation exercises over at least 2 to 3 weeks while wearing the device. The remote-monitoring device had an accompanying smartphone application and dashboard software for the patient and clinician, respectively, enabling 2-way communication. During prehabilitation, the physiotherapy team monitored patient progress and range of motion and was able to remotely communicate with the patient as needed. Full details on the device, its clinical performance, and patient feedback during this pilot study have been previously reported 8.
Preoperative medication included oxycodone MR (modified release) 5 to 10 mg and omeprazole 20 to 40 mg. Care was taken to ensure that the patient was well hydrated. In contrast to the spinal anesthesia administered in the standard pathway, all patients received a short-acting spinal anesthetic of 2% prilocaine plus intrathecal diamorphine and 8 mg of ondansetron.
Minor changes were made to the surgical approach as compared with the standard pathway (see Table II). First, the tourniquet pressure was reduced to 250 mm Hg in order to minimize patient thigh pain postoperatively. Additionally, a waterproof Dermabond Prineo Skin Closure System (Ethicon) was utilized in order to suppress wound infection while still enabling wound inspection and the ability for patients to go home and shower without disturbing the dressing. Surgical drains and traditional wool-and-crepe bandages were not utilized in order to psychologically steer the patient away from the concept that they had undergone major surgery, which traditionally would be followed by the use of bulky dressings. Wound infiltration was achieved with use of 60 mL of 0.25% and 0.6 mL of 1:1,000 adrenaline. The patient was supplied with intravenous (IV) fluids (with or without sedation), paracetamol 1 g, IV ondansetron 4 mg, and IV dexamethasone 6.6 mg. Postoperative analgesia included oxycodone MR 5 to 10 mg twice daily, paracetamol 1 g 4 times daily, tramadol 50 to 100 mg 3 or 4 times daily (if an existing prescription), ondansetron 4 mg 3 times daily, and lactulose 10 to 15 mL daily. Other medications were given on an as-needed basis, including oromorphine 5 to 10 mg every 4 hours or oxycodone 5 to 10 mg every 6 hours, as well as cyclizine 50 mg every 8 hours. A rapid-mobility plan was implemented immediately postoperatively, while patients were in the recovery area where blood drawing was performed. Oral fluids were provided, and early eating and drinking were actively encouraged. Patients were then sent directly to the radiology department for postoperative radiographs. Because of the short-acting spinal anesthetic and the adequate analgesia, patients were mobilized more quickly than in the standard care pathway, undergoing 2 to 3 physiotherapy sessions prior to discharge, to the point of stair use.
There were no alterations to the discharge criteria, and patients were only discharged once the clinical team was satisfied with the movement and function of the knee. Discharge criteria included the ability of the patient to walk safely and to negotiate steps without a substantial issue.

Fig. 1 A summary of the changes between the standard care pathway and the digital day-case pathway. OPD = outpatient department.

At this point, the pharmacist would provide postoperative medication, including a dose of oxycodone 30 minutes before discharge. Nursing capacity was in place to ensure that patients were contacted by telephone 3 times in the first 24 hours post-discharge: once on the evening of discharge, once the following morning, and once at 24 hours. In addition, a member of the physiotherapy team contacted the patient 48 hours post-discharge in order to again outline how to use the BPMpathway device. Over the following 6-week postoperative period, patients were continually in touch with their physiotherapy team and were monitored remotely via the BPMpathway. Follow-up appointments were arranged as needed. Scheduled follow-up appointments were performed via video call, whereas as-needed support was provided via the BPMpathway.
Budget and Sustainability Impact Model
To understand the wider budgetary impact of implementing the digital day-case program throughout the Calderdale and Huddersfield NHS Foundation Trust, a model was created with use of data from the pilot program. A decision tree was created to reflect that not all patients would be suitable for the accelerated digital day-case pathway, as patients with an ASA grade of ≥III were unsuitable for discharge within 24 hours 9 and patients without a smartphone 10 were unable to utilize the remote-monitoring system (Fig. 2). Thus, only 51.4% of all possible knee arthroplasty patients in the budget-impact model were shown to be suitable for the accelerated digital day-case program, and the remaining 48.6% were budgeted according to the standard pathway.
Fig. 2 Decision tree utilized in the budget impact and sustainability model, which revealed that only 51.4% of the knee replacement patients in the trust would be eligible for the digital day-case pathway because of the requirements of an ASA grade of <III and access to a smart device. TKR = total knee replacement.

The clinical parameters and costs utilized in the budget-impact model are described in Tables II and III. To test the robustness of the reported model, several parameters were subjected to univariate deterministic sensitivity analysis to determine the impact of variation in these parameters. Parameters were systematically varied between upper and lower bounds. Costs were varied by ±20% of the base-case values (Table III), and LOS and the percentage of patients with an ASA grade of ≥III were varied by ±20%, according to guidance from the Hospital Episode Statistics database and National Joint Registry 11 (Table IV). Data regarding hospital LOS in the digital day-case cohort are presented as the mean and 95% confidence interval, and time spent on messaging is estimated. The underlying assumptions of the model and their justifications are provided for clinical and economic parameters in Table V and for sustainability parameters in Table VI.
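To make the structure of the budget-impact calculation concrete, the sketch below compares pathway costs per patient and varies individual parameters by ±20%, mirroring the univariate deterministic sensitivity analysis described above. All unit costs and volumes here are hypothetical placeholders, not the trust's figures from Tables III and IV.

```python
# Hypothetical illustration of the budget-impact and sensitivity calculation;
# every cost and volume below is a placeholder, not a value used in the study.
base = {
    "n_patients": 500,          # annual knee replacements in the trust (assumed)
    "eligible_fraction": 0.514, # share suitable for the digital day-case pathway
    "bed_day_cost": 400.0,      # cost per hospital bed-day (assumed, GBP)
    "los_standard_days": 3.5,   # assumed standard-pathway length of stay
    "los_digital_days": 0.4,    # assumed digital-pathway length of stay
    "physio_cost": 60.0,        # cost per physiotherapy appointment (assumed)
    "physio_standard": 6.0,
    "physio_digital": 3.9,
    "device_cost": 150.0,       # remote-monitoring device per patient (assumed)
}

def annual_saving(p):
    per_patient = (
        (p["los_standard_days"] - p["los_digital_days"]) * p["bed_day_cost"]
        + (p["physio_standard"] - p["physio_digital"]) * p["physio_cost"]
        - p["device_cost"]
    )
    return p["n_patients"] * p["eligible_fraction"] * per_patient

print(f"base-case saving: GBP {annual_saving(base):,.0f}")

# Univariate sensitivity: vary each parameter by +/-20% and record the saving range.
for key in ["bed_day_cost", "physio_cost", "device_cost", "los_digital_days"]:
    lo, hi = (annual_saving({**base, key: base[key] * m}) for m in (0.8, 1.2))
    print(f"{key:18s} saving range: GBP {min(lo, hi):,.0f} .. {max(lo, hi):,.0f}")
```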
Source of Funding
The BPMpathway sensors were provided free of charge courtesy of B. Braun Medical U.K., which distribute the BPMpathway. D.M.C. is employed by B. Braun, and G.W. was paid an honorarium fee by B. Braun to present this work on a B. Braun webinar.
Patient Demographics
A total of 21 adult patients representing 16 total and 5 unicondylar knee replacements were included in the study. The mean age was 57.6 years (standard deviation, 8.9 years), and there were 9 female and 12 male patients. Full patient demographics have been reported previously 8 . All patients followed the preoperative plan, attending 1 outpatient appointment prior to joint school.
Health-Care Impact of the Digital Day-Case Pathway
The digital day-case pathway resulted in no complications. The median range of motion was 109° (interquartile range, 21°) at 4 weeks and 136° (interquartile range, 16°) at 7 weeks postoperatively. Patient feedback was excellent, with >94% of patients stating that they were more motivated to undertake their rehabilitation exercises because of the digital day-case pathway. Full details regarding postoperative range of motion and patient satisfaction with the digital day-case program have been published previously 8.
The majority of patients (14 of 21) were managed as day cases, with an average hospital LOS of 8.8 hours. Five patients were managed as short-stay cases, with an average hospital LOS of 36.3 hours, and 2 patients were managed as long-stay cases, with an average LOS of >72 hours. The median LOS was 9.6 hours (interquartile range, 26 hours), with a minimum and maximum of 7 and 168 hours, respectively.
Patients were seen face-to-face by a physiotherapist only if their progression or pain required additional attention. On average, patients attended 3.9 physiotherapy visits (range, 2 to 6 visits), including 3.3 community appointments and 0.5 group clinic appointments. Patients in the standard care pathway attend an average of 6.0 appointments. In addition to face-to-face physiotherapy appointments, patients communicated with the physiotherapist team via the BPMpathway. A total of 100 messages were sent to the care team from patients, and a total of 112 messages were sent to patients from the care team, with an average of 5.3 messages received per patient. Details of messaging have been reported previously 8. All patients received follow-up by means of a 6-week virtual review and a 12-month face-to-face review.
Sensitivity Analysis
The univariate deterministic sensitivity analysis investigated the impact of individual parameters on the base-case savings (£240,540); the value that had the greatest influence on the total savings was the cost of the LOS, followed by the costs of surgeon outpatient appointments and community physiotherapy visits. The cost of the BPMpathway device had a minimal impact on cost savings (Fig. 3). In addition, the duration of time spent answering each message was not a strong driver of cost savings; even when varied up to 10 minutes per message, this parameter had only a minor impact on the cost-effectiveness of the model.

Discussion

In this 21-patient pilot study, we assessed the impact of implementing a digital day-case pathway for knee replacement surgery at the Calderdale and Huddersfield NHS Foundation Trust. We found that implementing such a program resulted in several service-level improvements, including reductions in LOS and the number of preoperative and postoperative in-person visits. The present results support those of similar studies at other NHS sites, which showed reductions in LOS of up to half a day following implementation of day-case pathways 12. In the present study, the digital day-case pathway included a multifaceted set of changes to clinical care, including the use of a short-acting spinal anesthetic, early postoperative mobilization and rehabilitation, and the use of remote monitoring. Patients attended a one-stop joint school that served to educate them regarding operative and postoperative expectations; this, alongside the remote-monitoring device, allowed physicians to confidently discharge patients earlier than they would in the standard care pathway. Indeed, the use of sensor technology to digitize the rehabilitation process following joint replacement is becoming more common, with several studies reporting successful outcomes 13,14.
We also created a budget-impact model with use of the data from the pilot program, which revealed that a fully implemented digital day-case pathway at the Calderdale and Huddersfield NHS Foundation Trust would afford cost savings of £240,540 while at the same time freeing up resources. Importantly, these savings would not be strongly influenced by market fluctuations in the cost of the device or by the amount of time taken by physiotherapists responding to patient messages via the remote-monitoring BPMpathway device.
The most notable limitation of the present study is the small sample size on which the model was built. However, the sample represents those patients who would be eligible for the digital day-case pathway, and the model replicates a real-world estimation, with almost half of the patients remaining on the standard pathway. In addition, the robustness of the model was tested with a high degree of variance in the parameters, and the model continued to report cost savings.
In response to the backlog of elective cases as a result of COVID-19, the NHS recently published recommendations for elective care moving forward 15. The recommendation that was most pertinent to the present study was the use of digital technology to free up capacity in secondary care, including bed days and appointments, which in turn increases capacity for patients who are not suited for virtual care. Finally, the present study aligns with future NHS plans as they relate to the use of digital technology, with a government focus on the use of remote monitoring, enabling "virtual wards" that permit patients to recover in their own homes.
FE Analysis of a Leak Repair Clamp for Misaligned Pipelines
Pipelines are the most important mode of transportation for fluids and gases. Maintenance of pipelines is always critical because it must be carried out without interrupting the transportation process or other operations. The following dissertation is about misalignment in pipelines that operate underwater under critical conditions. The most probable location for misalignment is the welded joint: due to misalignment, the welded portion is stretched and cracking begins there. Under such conditions it is very difficult to replace the pipe or to provide an alternative. The dissertation describes a possible solution in the form of a clamp-type structure. The model is designed in Creo software and the finite element analysis is carried out in ANSYS software. The solution is validated through manual calculation as per the standards.
I. INTRODUCTION
Resources must be handled carefully. Not all places have access to all resources naturally, and likewise other materials are not available everywhere. The material can be in any form: solid, liquid, or gas. There are a number of ways of transporting material from one place to another. One of the most common methods of transferring liquids and gases is the pipeline, since liquids and gases are more difficult to transport than solids. Pipelines are the heart of the petroleum (oil and gas) sector.
A. Failure in Pipelines
There are three major aspects of the physical failure mechanisms in pipelines: 1) pipe properties, material type, pipe-soil/water interaction, and quality of installation; 2) internal loads due to operational pressure and external loads due to soil overburden, traffic loads, frost loads, and corrosion; and 3) third-party interference.
C. Misalignment
Pipelines run either above ground or underground. Problems above ground can be diagnosed and resolved in many ways, but maintaining underground pipelines poses many difficulties. Underground pipelines carry many loads, both internal and external. The problem is greater when the pipelines are underwater, as working capacity and the scope for maintenance become limited. As a result, pipelines become misaligned at weak portions or at joints (mostly at welded joints).
D. Different Solutions
1) The repair of the pipe is started when a crack is detected. The most general repair procedure involves providing a sleeve.
2) The sleeve is generally provided with the same thickness as the pipe. The sleeve material must be of the same or a higher grade than the pipe material. In most cases, standard API 5L pipe sizes are preferred for such applications. The sleeve extends the effective life of the operation.
3) Another method is to provide a clamp; the clamp is fitted on the pipe where the crack is detected. The clamp is selected according to the pipe size and the crack area, and is bolted around the pipe. The crack area is an important factor because the leak repair will only be effective if the crack area is properly covered.
Both solutions are used according to the conditions and the requirements. Generally the clamp is preferred because it has a longer life and can be replaced: the studs and nuts/bolts are disassembled in sequence to replace or remove the clamp. The sleeve is used where the crack is minor and does not significantly affect the strength or the operation.
For a thick-walled cylinder under internal pressure, the tangential stress at any radius x is
σ_t = P r_i^2 (r_o^2 + x^2) / ( x^2 (r_o^2 − r_i^2) ),
and the radial stress at any radius x is
σ_r = P r_i^2 (r_o^2 − x^2) / ( x^2 (r_o^2 − r_i^2) ),
where P = internal fluid pressure in the pipe, r_i = inner radius of the pipe, and r_o = outer radius of the pipe. The tangential stress is maximum at the inner surface (x = r_i) and minimum at the outer surface (x = r_o), giving σ_t(max.) = 87.9362 MPa.
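For reference, the tangential and radial stresses quoted above follow the standard Lamé relations for a thick-walled cylinder under internal pressure only; the short sketch below evaluates them. The radii are illustrative placeholders, since the exact clamp dimensions are not listed here.

```python
import numpy as np

def lame_stresses(p, r_i, r_o, x):
    """Lame tangential and radial stresses in a thick-walled cylinder under
    internal pressure p only (consistent units, e.g. MPa and mm)."""
    c = p * r_i**2 / (r_o**2 - r_i**2)
    sigma_t = c * (1.0 + r_o**2 / x**2)   # tangential (hoop) stress
    sigma_r = c * (1.0 - r_o**2 / x**2)   # radial stress (compressive inside)
    return sigma_t, sigma_r

p = 10.2                     # internal pressure, MPa (from the analysis above)
r_i, r_o = 292.0, 328.0      # illustrative inner/outer radii in mm (placeholders)
for x in (r_i, r_o):
    st, sr = lame_stresses(p, r_i, r_o, x)
    print(f"x = {x:6.1f} mm: sigma_t = {st:7.2f} MPa, sigma_r = {sr:7.2f} MPa")
```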
III. DESIGN ANALYSIS
The design of the model was done in Creo software and the static structural analysis was carried out in ANSYS software. The figure shows the model for the 24-inch pipe. The model assembly contains the following parts: 1) clamp, 2) seal, and 3) bolts. For ease of analysis, a clamp with 3 bolts on each side is considered. The unit system is metric. A single path is generated as construction geometry in ANSYS.
Fig. 2 3D CAD Model
The path is used for the linearized equivalent stress results. The coordinate system was taken at the front portion of the clamp.
The first boundary condition is an applied pressure of 10.2 MPa (102 bar), applied on the internal surface of the clamp. The area that bears the most pressure is the area between the seals in the clamp. The other boundary condition is a fixed support.
IV. CONCLUSIONS
The finite element analysis of the clamp component was carried out using ANSYS software to determine stresses and deformations. As per ASME (American Society of Mechanical Engineers) Boiler and Pressure Vessel Code validation, the stresses induced in the clamp body under the applied pressure of 10.2 MPa are within allowable limits. By using clamp components on misaligned or deformed pipe joints or sleeves, pipeline integrity can be ensured by minimizing deformations. By using clamps as a permanent solution, maintenance costs and damage to the pipelines can also be minimized.
V. ACKNOWLEDGMENT
We are thankful to the Department of Mechanical Engineering of KIT&RC-GUJARAT, India for providing the necessary facilities for the successful completion of the work. We are also thankful to Mr. CHETAN VORA, Associate Professor, Mechanical Engineering Department, KIT&RC-GUJARAT, for his cooperation and guidance.
The chromosome-level reference genome assembly for Panax notoginseng and insights into ginsenoside biosynthesis
Panax notoginseng, a perennial herb of the genus Panax in the family Araliaceae, has played an important role in clinical treatment in China for thousands of years because of its extensive pharmacological effects. Here, we report a high-quality reference genome of P. notoginseng, with a genome size up to 2.66 Gb and a contig N50 of 1.12 Mb, produced with third-generation PacBio sequencing technology. This is the first chromosome-level genome assembly for the genus Panax. Through genome evolution analysis, we explored phylogenetic and whole-genome duplication events and examined their impact on saponin biosynthesis. We performed a detailed transcriptional analysis of P. notoginseng and explored gene-level mechanisms that regulate the formation of characteristic tubercles. Next, we studied the biosynthesis and regulation of saponins at temporal and spatial levels. We combined multi-omics data to identify genes that encode key enzymes in the P. notoginseng terpenoid biosynthetic pathway. Finally, we identified five glycosyltransferase genes whose products catalyzed the formation of different ginsenosides in P. notoginseng. The genetic information obtained in this study provides a resource for further exploration of the growth characteristics, cultivation, breeding, and saponin biosynthesis of P. notoginseng.
INTRODUCTION
The Chinese medicine Sanchi is prepared from the dried root and rhizome of Panax notoginseng (Burk.) F. H. Chen, a perennial herb that belongs to the Araliaceae ginseng species (Briskin, 2000; Ng, 2006). Generally, Sanchi is collected and washed before P. notoginseng flowers bloom in autumn and is obtained by separating the main root and rhizome after drying (Wang et al., 2016). P. notoginseng has a long history of use in China for eliminating blood stasis, promoting hemostasis, and reducing swelling and pain. The celebrated work of the Ming Dynasty, the Compendium of Materia Medica (A.D. 1552-1578), already described P. notoginseng. The medicinal value of P. notoginseng arises from the chemical ingredients it contains. To date, the chemical components isolated from P. notoginseng include mainly saponins, flavones, sugars, volatile oils, and amino acids (Jia et al., 2019). Among these, saponin compounds are the main chemical constituents and are also recognized as the main active ingredients (Xiong et al., 2019). Modern medical research has shown that saponins from P. notoginseng improve myocardial ischemia (Zhang et al., 2017b), protect the liver (Zhong et al., 2019), defend against cardiovascular disease (Chan et al., 2002), lower blood pressure (Pan et al., 2012), and improve arteriosclerosis (Min et al., 2008); they also have antithrombotic (Dang et al., 2015) and anticancer activities. As a rare and valuable medicinal material in China, P. notoginseng is also used in various prescriptions, such as capsules, injections, and powders. It is in widespread use, with total annual output values exceeding 70 billion RMB (Gui et al., 2013; Cui et al., 2014).
To date, the principal means of obtaining saponins has been to extract and isolate them from the original plants; however, the plant saponin content is low, and this process has a low extraction efficiency and is not environmentally friendly. Therefore, reconstruction of the saponin biosynthetic pathway for heterologous production is an alternative method for obtaining these valuable resources. At present, over 80 tetracyclic triterpenoid saponins have been identified from the roots, stems, leaves, flowers, and fruits of P. notoginseng, and these saponins can be divided into protopanaxadiol (PPD) and protopanaxatriol (PPT) types based on a hydroxyl substitution at the C-6 position of the molecular structure. The biosynthetic pathway of saponins in P. notoginseng is divided into four main stages. First, the direct precursors isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP) are synthesized by the mevalonate and 2-methyl-D-erythritol-4-phosphate pathways (Deng et al., 2017). Second, isopentenyl transferase and terpene synthases (Niu et al., 2014) catalyze the synthesis of 2,3-oxidosqualene from IPP and DMAPP (Jiang et al., 2017). Third, 2,3-oxidosqualene undergoes cyclization and hydroxylation (Han et al., 2011, 2012) to form the core structures PPD and PPT (Luo et al., 2011; Lu et al., 2018). Finally, the formation of various saponins is catalyzed by a number of glycosyltransferases (GTs). The genetic and functional diversity of GTs gives rise to a variety of structurally diverse saponins.
To explore the biosynthetic pathway of ginsenosides, the genome of P. notoginseng has been explored and information mined (Zhang et al., 2017a;Chen et al., 2017). However, because of sequencing technology limitations, existing genomic information generated from second-generation shortread sequencing is insufficient (Shen et al., 2018;Zhao et al., 2019). Here, we present a high-quality P. notoginseng genome obtained using a combination of Illumina, PacBio, and Hi-C (high-throughput chromosome conformation capture) technologies; this is also the first chromosome-level genome of the genus Panax. Using comparative genomics, we explored the evolution and whole-genome duplication (WGD) events of P. notoginseng. We performed detailed transcriptional analysis and explored gene-level regulatory mechanisms that control the formation of characteristic tubercles, the biosynthesis of saponins at temporal and spatial levels, and the regulation of transcription factors. Combined with genomic analysis, we screened a series of UDP-dependent GT (UGT) candidate genes, five of which were identified as having catalytic functions. Our study provides genetic information for further comprehensive analysis of the saponin biosynthetic pathway and the evolution of the ginseng genus, and also describes useful techniques for the breeding of P. notoginseng.
Genome sequencing, assembly, and annotation
According to the K-mer distribution analysis (K = 31), the estimated size of the P. notoginseng genome (2n = 2x = 24 chromosomes) is 2.38 Gb, and the heterozygosity and repeat contents are 0.58% and 69.05%, respectively (Supplemental Figure 1 and Supplemental Table 1). We combined Illumina, PacBio, and Hi-C technologies to sequence and assemble a high-quality, chromosome-level P. notoginseng reference genome. A total of 240.22 Gb of Illumina reads, 284.07 Gb of PacBio long reads, and 340.83 Gb of Hi-C data were generated, resulting in ~325.23× coverage of the P. notoginseng genome (Supplemental Table 2). The final assembled genome was 2.66 Gb in size and consisted of 219 scaffolds, with a scaffold N50 of 216.47 Mb and a contig N50 of 1.12 Mb (Figure 1 and Table 1). The assembled sequence was then anchored onto 12 pseudochromosomes with lengths of 176.58-295.55 Mb. The total length of the pseudochromosomes accounted for 99.89% of the genome sequences, with a scaffold L50 number of 6 (Supplemental Figure 2; Supplemental Table 3). The genome of P. notoginseng had a GC content of 34.45% (Supplemental Table 4).
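The genome-size estimate above comes from the K-mer (K = 31) frequency distribution; the basic arithmetic divides the total number of usable k-mers by the depth of the homozygous peak. The sketch below illustrates that calculation on a toy histogram whose numbers are made up for illustration and are not the P. notoginseng data.

```python
import numpy as np

def genome_size_from_kmer_hist(depths, counts, min_depth=10):
    """Rough genome-size estimate from a k-mer histogram:
    total usable k-mers / depth of the main (homozygous) peak.
    Very low depths are discarded as sequencing-error k-mers."""
    depths, counts = np.asarray(depths), np.asarray(counts, dtype=float)
    keep = depths >= min_depth
    peak_depth = depths[keep][np.argmax(counts[keep])]
    total_kmers = np.sum(depths[keep] * counts[keep])
    return total_kmers / peak_depth

# Toy histogram (depth, number of distinct k-mers at that depth) -- illustrative only.
depths = np.arange(1, 201)
counts = 5e7 * np.exp(-depths / 2.0)                        # error k-mers at low depth
counts += 8e7 * np.exp(-0.5 * ((depths - 90) / 12.0) ** 2)  # homozygous peak near 90x
print(f"estimated genome size ~ {genome_size_from_kmer_hist(depths, counts) / 1e9:.2f} Gb")
```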
To test the coverage of the P. notoginseng genome, the short reads generated from Illumina sequencing were mapped, and 99.82% of these reads could be mapped to the scaffolds with 97.97% overall coverage (Supplemental Table 5). The completeness of the genome assembly was evaluated using BUSCO (Benchmarking Universal Single-Copy Orthologs) (Simao et al., 2015). Based on BUSCO analysis, 96.6% of plant sets were identified as complete (2049 out of 2121 BUSCOs) (Supplemental Table 6). All analyses suggested a high quality of the P. notoginseng genome assembly.
Based on a combination of homology-based and de novo approaches, 85.85% of the assembled P. notoginseng genome (Supplemental Table 7) consisted of repetitive elements; among them, long terminal repeat (LTR) retrotransposons accounted for the largest proportion and made up 58.88% of the genome (Supplemental Table 8; Supplemental Figure 3A and 3B). Compared with the published reference genome version, more repetitive sequences were predicted, a phenomenon that also occurs in other highly repetitive genomes (Xia et al., 2017, 2020; Wei et al., 2018; Zhang et al., 2020a). We compared the predicted repeat sequences with the RepBase database and calculated the degree of divergence between them, which showed a burst of LTR retrotransposons at approximately 8% divergence and an earlier burst of unknown repeats at approximately 5% (Supplemental Figure 3C).
An integrated strategy of de novo predictions, homology-based searches, and RNA sequencing was used to predict the protein-coding genes of the P. notoginseng genome. A total of 37 606 genes were annotated, with an average length of 5059.63 bp and an average exon number per gene of 5.21 (Supplemental Table 9; Supplemental Figure 4). The number of genes was similar to the numbers reported in two articles about the P. notoginseng genome published in 2017 (34 369 and 36 790), but other values, such as the average gene length and the average number of exons per gene, have been updated (Supplemental Table 10). Compared with another Araliaceae plant, Panax ginseng C.A. Mey (59 352 genes) (Kim et al., 2018), P. notoginseng has a smaller number of genes, which may be related to the additional duplication event in P. ginseng after the divergence of the two plants. Among the annotated P. notoginseng genes, 36 154 (~96.14%) were functionally classified by BLASTing against various functional databases (Supplemental Table 11). We further annotated noncoding RNA genes, obtaining 14 430 microRNA genes, 1513 transfer RNA (tRNA) genes, 314 ribosomal RNA (rRNA) genes, and 272 small nuclear RNA (snRNA) genes (Supplemental Table 12).

Figure 1. Landscape of the P. notoginseng genome: from outside to inside, chromosome number and length, coverage of second-generation data, density of repetitive sequences, gene density, GC content, noncoding RNA density, and genomic synteny.
Genome evolution and expansion and contraction of gene families
We compared our P. notoginseng assembly with sequenced genomes from seven other plants: P. ginseng, Daucus carota from Apiales, four dicot species (Arabidopsis thaliana, Vitis vinifera, Capsicum annuum L., and Glycyrrhiza uralensis), and a monocot, Oryza sativa. Based on gene family clustering analysis, 30 874 P. notoginseng genes (82.28%) clustered into 15 655 gene families (Supplemental Table 13 and Supplemental Figure 5), which included 7264 gene families shared by all 8 species and 1059 families specific to P. notoginseng (Supplemental Figure 6). Gene ontology (GO) and KEGG enrichment analysis of these P. notoginseng-specific gene families showed that they were mainly involved in a series of biological activities, e.g., mature ribosome assembly, cytosolic part, small-molecule binding, and RNA transport (Supplemental Table 14; Supplemental Figure 7).
We selected 458 single-copy gene families among the 8 species to construct phylogenetic trees. As expected, P. notoginseng clustered with another Araliaceae species, P. ginseng, and these two species were most closely related to the Apiales family ( Figure 2A). We estimated that P. notoginseng and P. ginseng diverged from the Apiaceae approximately 62.0 million years ago (mya), and P. notoginseng and P. ginseng diverged around 4.2 mya. These results show that the relationship between P. notoginseng and P. ginseng is very close, consistent with their very similar morphologies and secondary metabolites.
We compared expanded and contracted gene families in the 8 plant species with their most recent common ancestor. In total, 989 gene families were expanded in P. notoginseng, and 1823 gene families were contracted (Supplemental Figure 8). Compared with P. ginseng (6449), the number of expanded gene families in P. notoginseng was significantly smaller, perhaps because P. ginseng has experienced one more WGD event than P. notoginseng. We performed GO and KEGG enrichment analysis on expanded and contracted gene families in the P. notoginseng genome. The functions of the expanded gene families were mainly enriched in GO terms such as transposition, fatty acid biosynthetic process, respiratory chain, and catalytic activity (Supplemental Figure 9; Supplemental Table 15). The functions of the contracted gene families were mainly enriched in GO terms such as protein phosphorylation, protein modification process, β-glucan biosynthetic process, 1,3-β-D-glucan synthase complex, and purine nucleotide binding (Supplemental Table 16). 1,3-β-D-Glucan is reported to be involved in plant defense against fungi (Lee et al., 2006; Schober et al., 2009), and contraction in associated gene families may be related to the susceptibility of P. notoginseng to fungal pathogens and may explain why it readily develops root rot.
Analysis of WGD and its contribution to terpenoid biosynthesis
To study the WGD events that occurred during the evolution of P. notoginseng, we first analyzed the 4-fold synonymous third-codon transversion rate (4DTv) (Figure 2B) of syntenic gene pairs (Jaillon et al., 2007). There were two peaks in the 4DTv distribution, at approximately 0.16 and 0.50, for all syntenic gene pairs in the P. notoginseng genome. The first peak, at approximately 0.50, corresponded to the core eudicot γ triplication event, and the second peak, at approximately 0.16, revealed that P. notoginseng underwent another WGD event after diverging from V. vinifera and D. carota. By comparing the P. notoginseng genome with the V. vinifera genome, we found that 65% of P. notoginseng gene models were located in syntenic blocks that corresponded to single V. vinifera regions. Meanwhile, 42% of the V. vinifera gene models in syntenic blocks had two orthologous regions, and 22% had one orthologous region (Supplemental Figure 10). The results of a genome collinearity analysis between V. vinifera and P. notoginseng indicated that the WGD event occurred in the P. notoginseng genome and that there was a 1:2 syntenic relationship between P. notoginseng and V. vinifera (Figure 2C and Supplemental Figure 11). Based on the distribution of Ks (Supplemental Figure 12) and 4DTv analysis, we calculated that the WGD event occurred approximately 29.6 mya in the ancestor of P. notoginseng. Compared with P. notoginseng, P. ginseng experienced one additional WGD event (Kim et al., 2018), and this recent event occurred approximately 1.85 mya after divergence from P. notoginseng. The timing of the WGD events was similar to the results of an evolutionary analysis of the P. ginseng genome (28 and 2.2 mya), confirming the accuracy of the present results (Kim et al., 2018).
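The dating logic behind estimates of this kind can be illustrated with the standard relation T = Ks/(2r), where r is a synonymous substitution rate per site per year. The sketch below is only illustrative: the excerpt does not state the rate or the exact Ks peak used, so both values here are assumptions and the output is not expected to reproduce the 29.6 mya figure.

```python
# Minimal sketch of dating a duplication event from a Ks peak, T = Ks / (2r).
# Both the Ks peak and the substitution rate below are assumed, illustrative
# values, not the ones used in the paper.

def wgd_age_mya(ks_peak, rate_per_site_per_year):
    """Age of a duplication event in million years from a Ks peak."""
    years = ks_peak / (2.0 * rate_per_site_per_year)
    return years / 1e6

ks_peak = 0.45          # hypothetical Ks peak of WGD-derived paralog pairs
rate = 7.0e-9           # assumed synonymous substitutions / site / year
print(f"Estimated WGD age: {wgd_age_mya(ks_peak, rate):.1f} mya")
```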
Through homologous alignment and a Pfam database search, we identified gene families that were potentially involved in terpenoid biosynthesis in the eight species (Supplemental Table 17). The copy numbers of some gene families in the P. notoginseng genome were significantly greater than those in other plant genomes; these included families such as DXS, MCS, HDS, HDR, and SQE. We also observed that the average copy number of most key enzyme genes in P. ginseng was approximately twice that in P. notoginseng (Supplemental Figures 13 and 14). We next performed Ka/Ks analysis of these pathway genes to calculate the duplication times of their gene pairs in the P. notoginseng genome. The gene pair duplication times were concentrated around the time of the WGD event of P. notoginseng (Supplemental Figure 15; Supplemental Table 18), indicating that they may have arisen from the WGD event.
Transcriptome analysis and transcriptional regulation of saponin biosynthesis
To further explore the genetic information in P. notoginseng, we performed detailed transcriptome sequencing of P. notoginseng plants on the basis of the high-quality genome. Samples for transcriptome sequencing were obtained from 1- to 4-year-old P. notoginseng plants that were subdivided into root, stem, leaf, flower, rhizome, fibril, periderm, phloem, and tubercle (Supplemental Figures 16 and 17). Data processing (Supplemental Figures 18 and 19; Supplemental Table 19) and related transcriptome analyses, such as alternative splicing event analysis, were also performed. We further analyzed the regulation of transcription factors in P. notoginseng. A total of 2150 transcription factors from 57 different families (Supplemental Table 23) were identified; we then used correlation analysis to map the gene regulation network (Figure 2D) between terpenoid biosynthetic pathway genes and transcription factors. The transcription factor families that were highly correlated with pathway genes included mainly bHLH (Deng et al., 2020), ERF (Zhang et al., 2020b; Paul et al., 2020), MYB, WRKY (Villano et al., 2020), NAC (Jin et al., 2020), and C2H2 transcription factors (Han et al., 2020), as well as other families that play an important role in plant growth and development, stress resistance, and secondary metabolism.
Based on the expression levels of pathway genes (Supplemental Table 24), we explored the secondary metabolism of saponins in P. notoginseng plants at the temporal and spatial levels. At the temporal level, we compared the expression patterns of 29 genes from the saponin biosynthesis pathway in the same tissues of 1- to 4-year-old plants. In most tissues, highly expressed genes were concentrated in 3- or 4-year-old tissues, but in the stems, highly expressed genes were mainly concentrated in 1- to 2-year-old tissues (Figure 3). In addition, through comparative transcriptome analysis, we identified 7792 differentially expressed genes (DEGs) that were highly expressed in 3- to 4-year-old plants but poorly expressed in plants of other ages. At the spatial level, we compared gene expression patterns in different tissues of same-aged plants. Except in 1-year-old plants, most of the pathway genes were specifically expressed in flowers, and a few were highly expressed in rhizomes and roots (Supplemental Figure 25).
Analyzing key enzyme genes involved in ginsenoside biosynthesis
The biosynthesis of P. notoginseng saponins is attributed to the activity of a series of key enzyme genes, among which the largest and most diverse gene families are the CYP450s and the UGTs.
We used UGTs involved in terpene biosynthesis as queries (Wei et al., 2015; Yan et al., 2014) to search for homologous UGT candidate genes in the P. notoginseng genome and designed primers for cloning (Supplemental Table 27). We ultimately cloned the full lengths of 32 UGT genes (Supplemental Figure 27) and named them PnUGT1-PnUGT32. Then, by expressing their proteins in Escherichia coli, we determined that five of them (PnUGT1-5) had catalytic functions in the biosynthesis of ginsenosides. We used an E. coli-expressed empty vector as the negative control (Supplemental Figure 28). Using PPT and F1 (Monoglycoside; PPT-C20-glucosyl) as substrates, the crude enzyme of gene PnUGT3 could add a glucosyl group at the C6 position to produce Rh1 (Monoglycoside; PPT-C6-glucosyl) and Rg1 (Diglycoside; PPT-C6-glucosyl, C20-glycosyl), respectively (Figure 4B and Supplemental Figure 29). Its functions are therefore consistent with the functions of UGTPg1 and UGTPg101 from P. ginseng (Yan et al., 2014), but this is a new gene cloned for the first time in P. notoginseng. Using PPD and PPT as substrates, the crude enzyme of gene PnUGT1 could add a glucosyl group at the C20 position to produce CK (Monoglycoside; PPD-C20-glucosyl) and F1 (Monoglycoside; PPT-C20-glucosyl), consistent with the functions of UGTPg100 and UGTPg101 from P. ginseng. In addition, PnUGT1 could catalyze the production of ginsenoside F2 (Diglycoside; PPD-C3-glucosyl, C20-glycosyl) from Rh2 (Monoglycoside; PPD-C3-glucosyl) (Figure 4B and Supplemental Figure 30), which is the first reported new function in P. notoginseng. The crude enzyme of gene PnUGT5 could catalyze the production of Rh2 from PPD, and crude enzymes of genes PnUGT2 and PnUGT4 could then extend the sugar chain and generate Rg3 (Diglycoside; PPD-C3-glucosyl-glucosyl) from Rh2 (Figure 4B and Supplemental Figure 31), consistent with the functions of UGTPg45 and UGTPg29 from P. ginseng. In addition, the last four genes have also been experimentally shown to perform catalytic functions in Saccharomyces cerevisiae.
(Figure residue omitted: heatmap row labels for saponin pathway genes (e.g., AACT, HMGS, HMGR, MVK, PMK, MVD, IPI, GPS, FPS, SS, SQE, DS, CMS, GGPPS, PPDS, PPTS) and PnUGT1-PnUGT5 across tubercle, phloem, root, and fibril samples from 1- to 4-year-old plants.)
Besides the more common ginsenoside compounds mentioned above, there are many unique saponins in P. notoginseng, such as notoginsenoside R1, notoginsenoside R2, notoginsenoside R4, and notoginsenoside Fc, which have better water solubility and good pharmacological activities (Supplemental Figure 32). To screen out more UGT genes, we conducted weighted gene co-expression network analysis (WGCNA) and expression profile consistency analysis. Through WGCNA, we constructed a correlation network between all genes annotated as UGT in the P. notoginseng genome and identified 7 gene modules with strong correlation, including 29 pathway genes and 139 UGT genes (Figure 4C and Supplemental Figure 33). Among them, PnUGT2 was included in the blue module, and PnUGT1 and PnUGT5 were included in the green module. We further analyzed the annotation information and GO enrichment of these candidate UGT genes and found that most were enriched in GO terms such as GO:0008152 (metabolic process) or GO:0071555 (cell wall organization) and had different transferase activities (Supplemental Table 28). We then compared the expression patterns of genes in the terpene biosynthesis pathway and identified UGT genes with similar expression patterns. By comparing the expression levels in each transcriptome sample, the expression patterns of key enzyme genes could be divided into three categories (Figure 5): most were highly expressed in flowers, some were highly expressed in roots (each part), and a small number were most highly expressed in leaves (Supplemental Figure 34). A total of 35 UGT genes that were highly expressed and clustered with the pathway genes were screened from the correlation evolution tree (Supplemental Figure 35; Supplemental Table 29). Combining the results of the two analyses above, we identified candidate UGT genes that may be involved in the notoginsenoside biosynthetic pathway, although the specific functions of the encoded enzymes have yet to be experimentally verified.
DISCUSSION
P. notoginseng, one of the most widely used Chinese medicinal plants from the family Araliaceae, is renowned in China and worldwide for its good efficacy. The main active ingredients in P. notoginseng are saponins, including higher contents of ginsenoside Rg1, ginsenoside Rb1, and notoginsenoside R1 (Su et al., 2016; Duan et al., 2017; Zhang et al., 2018, 2019), and other active compounds, such as ginsenoside Rd, ginsenoside Rg3, ginsenoside Re (Xie et al., 2020), notoginsenoside R2, and notoginsenoside Fc. The biosynthesis of saponins in P. notoginseng has attracted extensive attention from researchers, and some key enzyme genes, such as HMGR, AACT, SS, PMK, MVK, IDI, and CYP450, have been identified. However, the complete biosynthetic pathways of unique notoginsenosides have not yet been resolved, and further research and exploration are needed.
Gene mining of high-quality genomic and transcriptomic data can provide resources for further exploration of plant growth and secondary metabolism mechanisms (Tu et al., 2020). As early as 2017, two P. notoginseng reference genomes were published (Zhang et al., 2017a; Chen et al., 2017); however, the quality of these genomes was insufficient because of the limited sequencing capacity at that time. We therefore performed whole-genome sequencing of P. notoginseng from Genuine Producing Areas based on third-generation PacBio sequencing technology and used Hi-C technology to construct a high-quality, chromosome-level genome. The assembled genome was 2.66 Gb in size, with a scaffold N50 of 216.47 Mb and a contig N50 of 1.12 Mb. In addition to the depth or accuracy of gene sequencing, this reference genome was greatly improved compared with previous genomes and was resolved to the chromosome level, which can more intuitively reveal the gene distribution and overall genomic landscape. P. notoginseng is in a relatively primitive evolutionary position among Panax plants. By comparing genomes, we found that after diverging from carrots, an independent WGD event occurred in P. notoginseng. We then studied the distribution of Ka/Ks values of key enzyme gene pairs in the saponin biosynthesis pathway and found that the WGD event may have contributed to the generation of these gene pairs, directing the metabolic flux toward the production of saponins. Based on the locations of coding genes on the chromosomes, we also found two sets of gene cluster duplication. Notably, upstream HDR, SS, and SE genes and downstream CYP450 and UGT genes that are known to be involved in ginsenoside biosynthesis are close to each other in the P. notoginseng genome (Supplemental Figure 36). The gene cluster also contains some UGT and transcription factor genes identified in this study, which are likely to participate in saponin biosynthesis.
(Figure legend: based on gene expression levels, the pattern of expression change for any one gene can be observed after the data in each column are standardized; the area marked by the red box indicates high gene expression levels; each heatmap has its own color scale, with greener color indicating higher expression.)
In addition, we also established a detailed transcript database of P. notoginseng through sequencing and analysis of different tissues from 1- to 4-year-old plants. Through comparative transcriptome analysis, we explored the molecular regulation mechanism of tubercles, a characteristic phenotype of P. notoginseng. The associated DEGs were mainly involved in the biosynthesis of plant hormones such as strigolactone, cytokinin, and auxin. The synergistic effects of these phytohormones result in the production of a tubercle phenotype, and further study of the functions of related DEGs will more fully reveal the molecular mechanisms of tubercle formation.
We next explored the saponin biosynthesis pathway in P. notoginseng plants at temporal and spatial levels. We compared the expression patterns of saponin biosynthesis genes
in the same tissues of 1- to 4-year-old plants and found that most genes in tissues other than stems were highly expressed in 3- or 4-year-old plants. This indicates that as plant age increases, saponin biosynthesis gene expression levels also increase, as does the content of accumulated saponins. The quality of P. notoginseng harvested after more than 3 years of growth is therefore optimal, but because of diseases, insect pests, and continuous cropping obstacles, most materials circulated in the market are 3-year-old P. notoginseng. At the spatial level, most pathway genes were specifically expressed in flowers, and a few were highly expressed in rhizomes and roots, including the postmodification UGT enzyme genes PnUGT2, PnUGT3, and PnUGT4. These results indicate that saponin compounds or their precursors may be synthesized in the flowers first and then transferred to the roots or further modified in the roots, consistent with a previous report.
The ultimate step in the saponin biosynthesis pathway is glycosylation catalyzed by UGTs. This is the most critical step in determining the structure and pharmacological effects of the compounds, and we therefore focused on identifying candidate UGT genes. First, we conducted a systematic evolutionary analysis of all P. notoginseng genes that contained the conserved GT domain. As expected, we obtained five UGT genes that catalyzed the glycosylation of ginsenosides. Second, we performed WGCNA analysis on all genes annotated as UGTs and key enzyme genes of the P. notoginseng saponin biosynthetic pathway and screened out seven modules of highly correlated genes. Among these seven modules, two (module blue and module green) contained genes with identified functions, indicating that the genes enriched in these modules were likely to participate in the biosynthesis of saponins. Third, we conducted a consistency analysis of expression profiles and identified 20 UGT genes with high expression levels and expression patterns consistent with those of pathway genes.
Combining the results of the two analyses above, we identified candidate UGT genes to lay a foundation for further comprehensive analysis of the complete notoginsenoside biosynthesis pathway.
In summary, we constructed a high-quality, chromosome-level P. notoginseng reference genome as a comprehensive genetic inventory for evolutionary phylogenomic studies of Panax plants.
Using detailed transcriptome data, we explored the molecular mechanism of tubercle formation, investigated the biosynthesis pathway of saponins, and provided many promising candidate genes to fully reveal the biosynthetic pathway of notoginsenosides in P. notoginseng.
Genome annotation
We used homology-based, de novo, and transcriptome-based predictions to predict the protein-coding genes in the P. notoginseng genome. The gene sets predicted by various strategies were integrated into a non-redundant and more complete gene set using EVidenceModeler. Gene functional annotation was performed mainly by searching against various functional databases, such as Swiss-Prot, NT (Nucleotide Sequence Database), NR (Non-Redundant Protein Sequence Database), Pfam, eggNOG (Evolutionary Genealogy of Genes: Non-supervised Orthologous Groups), and GO. Repetitive sequences were annotated using an ab initio prediction method and a homolog-based approach. We detected noncoding RNA by comparison with known noncoding RNA libraries and Rfam, and we also predicted rRNAs, snRNAs, microRNAs, and so on.
Analysis of genomic evolution and WGD events
We used the OrthoMCL package (1.4) to identify and cluster gene families (clusters) from P. notoginseng and seven other plant species: P. ginseng, D. carota, V. vinifera, C. annuum L., G. uralensis, A. thaliana, and O. sativa. After gene family clustering, we aligned all 458 single-copy gene protein sequences using MUSCLE and constructed a phylogenetic tree using PhyML. Based on the gene family cluster analysis and after filtering gene families with abnormal gene numbers in individual species, we used the CAFÉ program to identify the expansion and contraction of gene families in each species. To explore the evolution of the P. notoginseng genome, we calculated the 4DTv of syntenic blocks and the distribution of synonymous substitutions per synonymous site (Ks) to identify WGD events.
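For readers unfamiliar with the 4DTv statistic mentioned above, the following sketch shows the core idea: counting transversions at fourfold-degenerate third codon positions of an aligned paralog pair. It is a simplified illustration under the stated assumptions (gap-free, in-frame alignment), not the pipeline actually used in the paper.

```python
# Minimal sketch of 4DTv: transversion rate at fourfold-degenerate third codon
# positions of two aligned coding sequences. Illustrative only.

FOURFOLD_PREFIXES = {"CT", "GT", "TC", "CC", "AC", "GC", "CG", "GG"}  # codon families whose third base is free
PURINES = {"A", "G"}

def is_transversion(a, b):
    return (a in PURINES) != (b in PURINES)

def four_dtv(cds1, cds2):
    """4DTv for two aligned, same-length coding sequences (no gaps assumed)."""
    sites = transversions = 0
    for i in range(0, min(len(cds1), len(cds2)) - 2, 3):
        c1, c2 = cds1[i:i + 3].upper(), cds2[i:i + 3].upper()
        # Only count positions that are fourfold degenerate in both sequences.
        if c1[:2] in FOURFOLD_PREFIXES and c1[:2] == c2[:2]:
            sites += 1
            if c1[2] != c2[2] and is_transversion(c1[2], c2[2]):
                transversions += 1
    return transversions / sites if sites else float("nan")

print(four_dtv("CTTGGAACCGCT", "CTAGGCACTGCG"))  # toy aligned paralogs
```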
Integrated genomic and transcriptomic analysis
One- to four-year-old P. notoginseng plants were collected from Wenshan County, Yunnan Province, China. There were three biological replicates for each sample, and samples were taken at least five meters apart. After harvesting, we subdivided the plants into different tissue parts, including the root (xylem), stem, leaf, flower, rhizome, fibril, periderm, phloem, and tubercle. All samples were transported on dry ice, washed with ultrapure water three times, immediately frozen in liquid nitrogen, and stored at -80°C before RNA extraction. Total RNA was extracted from each tissue using a modified cetyltrimethylammonium bromide method. The RNA purity was checked using a Kaiao K5500 spectrophotometer (Kaiao, Beijing, China), and the RNA integrity and concentration were assessed using the RNA Nano 6000 Assay Kit for the Bioanalyzer 2100 system (Agilent Technologies, CA, USA). cDNA libraries were constructed using the NEBNext Ultra RNA Library Prep Kit for Illumina (New England Biolabs, USA) following the manufacturer's recommendations. After cluster generation, the libraries were sequenced on an Illumina NovaSeq S2 platform, and 150 bp paired-end reads were generated.
Genes encoding key enzymes thought to be involved in the saponin biosynthetic pathway were annotated by BLAST (2.2.28). Their predicted proteins were aligned with the Pfam database using HMMER (3.1b1), and their expression levels in different tissues were obtained from transcriptome data. We used MeV software (4.9.0) to create a heatmap of gene expression and analyze gene expression patterns. In addition, we identified transcription factor genes in the P. notoginseng genome by comparison with the PlantTFDB database. We used an R script to calculate the Pearson correlation coefficients between transcription factors and genes in batches and Cytoscape software to draft the correlation map.
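A minimal sketch of the batch correlation step described above is given below; it computes Pearson correlations between transcription-factor and pathway-gene expression across samples, using Python rather than the R script mentioned in the methods. The gene names, random expression values, and the 0.8 correlation cutoff are all assumptions introduced purely for illustration.

```python
# Minimal sketch: Pearson correlations between transcription factors and
# pathway genes across samples. Values and the 0.8 cutoff are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
samples = [f"sample_{i}" for i in range(12)]
tf_expr = pd.DataFrame(rng.random((3, 12)), index=["bHLH1", "ERF2", "MYB3"], columns=samples)
pathway_expr = pd.DataFrame(rng.random((2, 12)), index=["PnUGT1", "DS1"], columns=samples)

# Pearson correlation of every TF against every pathway gene.
pairs = []
for tf, tf_row in tf_expr.iterrows():
    for gene, gene_row in pathway_expr.iterrows():
        r = np.corrcoef(tf_row.values, gene_row.values)[0, 1]
        pairs.append((tf, gene, r))

edges = pd.DataFrame(pairs, columns=["TF", "pathway_gene", "pearson_r"])
print(edges[edges["pearson_r"].abs() > 0.8])  # keep strongly correlated pairs for the network
```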
Screening and functional verification of candidate UGT genes
Multiple sequence alignments were generated using DNAMAN to visualize the conserved motifs. For phylogenetic tree analysis, the amino acid sequences of UGTs from other species were downloaded from the National Center for Biotechnology Information (NCBI) database and aligned using ClustalW. Then, a neighbor-joining tree was built using MEGA X software (Kumar et al., 2016) with 1000 bootstrap iterations. P. notoginseng cDNA was prepared using the PrimeScript 1st Strand cDNA Synthesis Kit (Takara, Dalian, China). After designing primers, we cloned a total of 32 UGT genes, and the PCR products were ligated into the N-terminal MBP fusion expression vector HIS-MBP-pET28a (HIS, histidine; MBP, maltose-binding protein) according to the protocol of the Seamless Cloning Kit (Beyotime, Shanghai, China). We transformed the successfully sequenced positive strains into E. coli BL21 (DE3) (Transgen Biotech, Beijing, China) and maintained the cultures in Luria-Bertani liquid medium with kanamycin (50 µg/mL) at 37°C in a shaking incubator until the optical density at 600 nm reached 0.6-0.8. Then, 1 M isopropyl β-D-thiogalactopyranoside was added to a final concentration of 50 mM, and cultures were maintained at 16°C and 120 rpm for 16 h to allow expression of recombinant proteins. pET28a-transformed E. coli BL21 (DE3) cells were treated in parallel as a control. The recombinant cells were harvested by centrifugation at 10 000 g and 4°C, then resuspended in 100 mM phosphate buffer (pH 8.0) that contained 1 mM phenylmethanesulfonyl fluoride and sonicated in an ice-water bath for 10 min (lysed for 5 s, paused for 5 s). The sample lysates were centrifuged for 20 min at 12 000 g and 4°C to separate crude enzymes from cell debris. A UGT activity assay was performed in a total volume of 100 µl that contained 100 mM crude enzyme buffer (pH 8.0), 1 mM UDP-glucose, and 0.1 mM acceptor substrate for 2 h in a 35°C water bath and was terminated by the addition of 200 µl methanol. Precipitated proteins were removed by centrifugation (10 000 g for 10 min) and filtered through 0.22 µm filters before injection. Glycosylated products were detected using ultra-high-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (UPLC/Q-TOF-MS, Waters, Milford, MA) using a Waters ACQUITY UPLC HSS T3 analytical column (2.1 × 100 mm, 1.8 µm). Data analysis was performed using MassLynx software (version 4.1). Standards of saponin compounds and UDP-glucose were purchased from Yuanye Bio-Technology (Shanghai, China). To screen additional candidate UGT genes, we also conducted WGCNA using R and expression profile consistency analysis.
Data availability
The data supporting the findings of this work are available within the paper and its Supplemental Information files.
|
2020-10-19T18:11:25.725Z
|
2020-09-20T00:00:00.000
|
{
"year": 2020,
"sha1": "b099627dc0a6434dd05e2aff103212729a907a98",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.xplc.2020.100113",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7653a6e77d8b65f5405619f42ebabaed004cb6a0",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
14064042
|
pes2o/s2orc
|
v3-fos-license
|
Biogeochemical processes governing exposure and uptake of organic pollutant compounds in aquatic organisms.
This paper reviews current knowledge of biogeochemical cycles of pollutant organic chemicals in aquatic ecosystems with a focus on coastal ecosystems. There is a bias toward discussing chemical and geochemical aspects of biogeochemical cycles and an emphasis on hydrophobic organic compounds such as polynuclear aromatic hydrocarbons, polychlorinated biphenyls, and chlorinated organic compounds used as pesticides. The complexity of mixtures of pollutant organic compounds, their various modes of entering ecosystems, and their physical chemical forms are discussed. Important factors that influence bioavailability and disposition (e.g., organism-water partitioning, uptake via food, food web transfer) are reviewed. These factors include solubilities of chemicals; partitioning of chemicals between solid surfaces, colloids, and soluble phases; variable rates of sorption and desorption; and the physiological status of the organism. It appears that more emphasis on considering food as a source of uptake and bioaccumulation is important in benthic and epibenthic ecosystems when sediment-associated pollutants are a significant source of input to an aquatic ecosystem. Progress with mathematical models for exposure and uptake of contaminant chemicals is discussed briefly.
Introduction
Modern societies derive many benefits from mobilization of natural geochemical resources, such as oil, and from synthetic chemicals; for example, medicine, generation of usable energy, material for shelter, and chemicals for agriculture. The Organization for Economic Cooperation and Development (OECD) estimates that approximately 1000 to 1500 new substances enter daily use each year and add to the approximately 60,000 already in daily use (1), an estimate that is similar to that of the United States Council on Environmental Quality (2). Fortunately, only a small proportion of these chemicals pose threats to human health and living natural resources once they are released to the environment. This small proportion is still a large number. Butler estimated that 11,000 chemicals are manufactured in quantities that require assessment for environmental concern, in that these quantities have the potential to pollute significant sectors of global ecosystems if released. In addition to these manufactured chemicals, there are chemicals of environmental concern generated by processes of treatment of industrial and domestic wastes by chlorination (4-6). The exact numbers are not important considerations in this paper. Rather, it is important that the assessment of toxicity and environmental fates and effects for these chemicals, and most important, assessment of human health risks associated with the presence of these chemicals in the environment, are substantial tasks requiring extensive knowledge of the behavior of these chemicals in the environment and their modes of toxic action. Studies of the past 15 to 20 years have provided evidence of the widespread distribution of recalcitrant synthetic organic chemicals such as the chlorinated pesticides and polychlorinated biphenyls (PCBs) in ecosystems worldwide, even in locations far removed from sources of input. Aquatic ecosystems are not an exception, as has been well documented in numerous studies and reviews (2,7,8).
The presence of a xenobiotic compound in a segment of an aquatic ecosystem does not, by itself, indicate an adverse effect. Connections must be established between levels of exposure or tissue contamination and adverse effects. The combined understanding of the inputs, fates, and effects of chemical contaminants or pollutants in the environment has been termed ecotoxicology by groups of scientists (3,9,11). This paper focuses on a few advances in biogeochemical research concerned with the movement of organic chemicals through aquatic ecosystems and conditions of exposure and routes of uptake by aquatic organisms. The focus is mainly on nonpolar, hydrophobic, medium molecular weight compounds such as PCBs, chlorinated pesticides, and aromatic compounds found in fossil fuels and their combustion products, chemicals often identified with the current concern about human health risks associated with chemical contaminants in fish and other aquatic species.
A stylized version of the biogeochemical cycle of a group of these compounds, polynuclear aromatic compounds (PAHs) is presented in Figure 1. It would be an immense task to quantitatively measure all the pathways and rates of reaction and movement for the many thousands of chemicals of concern through each of their individual biogeochemical cycles. Although individual chemical structure confers some specificity of environmental behavior and toxic action to each compound, there are principles by which we can group chemicals of similar structure and obtain some predictive capability of their biogeochemical behavior. This knowledge can then be used in concert with knowledge derived from effects studies to provide human health risk assessments and ecological risk assessments.
Physical Chemical Research
Fundamental properties related to the biogeochemical behavior of many organic chemicals of environmental concern have not been well characterized, and it is only in recent years that substantial progress has been made. For example, water solubilities are available for only relatively few compounds, especially for the medium to higher molecular weight nonpolar compounds such as PAHs, PCBs, and chlorinated pesticides. This situation is improving rapidly (12-16). The influence of temperature and salinity on solubility has been less extensively investigated (16-19) and requires much further research. Initial assessments of the influence of temperature and salinity on solubilities yield factors of 2 to 6 for medium to higher molecular weight range PAHs (e.g., phenanthrene to benzo[a]pyrene) over the range of temperatures and salinities normally found in coastal waters and the open ocean (19). However, the anomalous behavior of benzanthracene compared to other PAHs tested suggests that important knowledge has yet to be gained for key aspects of factors influencing solubilities in seawater (19).
Calculation or estimation of water solubilities from theoretical and empirical considerations is an active area of research, and some progress has been made using molecular surface area and volume and activities of organic solutes in the organic phase and activity coefficients in the aqueous phase (20-22). The wide range of solubilities of these essentially hydrophobic compounds and the very low solubilities for compounds in the medium to higher molecular weight range, e.g., phenanthrene and 2,2',4,4',5,5'-hexachlorobiphenyl, are illustrated by the examples in Table 1.
Compounds with low solubilities have a tendency to sorb to surfaces, and this is another of the important physical-chemical parameters needed to understand the biogeochemical behavior of organic chemicals of environmental concern. Results from several investigations during the past 10 years have demonstrated that sorption is mainly a simple partitioning of neutral, nonpolar hydrophobic compounds from the aqueous phase to a bulk nonaqueous phase (23-26). Organic constituents of natural sorbents are primarily responsible for sorption of hydrophobic organic compounds if the organic carbon content of the sorbent exceeds 0.1% (27). Thus, a partition coefficient Kp can be expressed in terms of the organic carbon content of the sorbent (foc) and a partition coefficient Koc between water and a hypothetical sorbent of 100% organic carbon of a natural organic material:

Kp = foc Koc (1)

The partition coefficient Koc can in turn be related to the octanol/water partition coefficient (Kow) of a given compound by the following equation:

log Koc = a log Kow + b (2)

where parameters a and b are empirically derived for different groups of compounds. Some of the values reported for these parameters in different studies are summarized in Table 2 to illustrate the type of variation to expect.
The octanol/water partition coefficient (Kow) of a compound has proven to be a very useful parameter in predicting the environmental behavior of hydrophobic chemicals. There are means beyond the scope of this paper to derive the relationship between Kow and solubility (29-31), as reviewed in Brownawell (32) and Lyman et al. (33), among others. Thus, it is possible to determine the Kow of a given compound experimentally and then calculate its solubility and sorption in aqueous systems. Handbooks with some of these properties are now becoming available (35), and on-line computer systems for environmental data of this sort are also becoming available. The caveat that must be kept foremost in the minds of environmental scientists is that data are available for very few of the compounds of environmental concern, and the concepts have been tested in relatively few of the myriad geochemical milieus found in aqueous ecosystems. For example, there have been only a few studies of compounds such as amines, carboxylic acids, and phenols (34,35). These studies demonstrate that the simple partitioning model used to describe sorption of neutral, hydrophobic molecules is of limited applicability to compounds that are partially or fully ionizable at natural water pH values. It is possible to account for the influence of ionization and assemble predictive relationships (35-37), but relatively few types of compounds have been studied. The influence of pH and Eh on the environmental behavior of ionizable organic pollutants needs much further research because many of the present and future problems with environmental fates and effects of compounds involve considerations of polluted sediments and estuarine and coastal waters (38-40) where substantial ranges of pH and Eh are encountered (41,42).
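A short worked example of how Eqs. (1) and (2) are used together is sketched below. The regression constants a and b and the sediment organic carbon fraction are illustrative placeholders rather than the specific values of Table 2, and the phenanthrene log Kow is a commonly cited approximate value.

```python
# Minimal worked example of Eqs. (1) and (2) as reconstructed above:
# log Koc = a * log Kow + b and Kp = foc * Koc. The a, b values and foc are
# illustrative placeholders, not the specific regressions of Table 2.
import math

def koc_from_kow(kow, a=1.0, b=-0.2):
    """Organic-carbon-normalized partition coefficient from Kow (Eq. 2)."""
    return 10 ** (a * math.log10(kow) + b)

def kp_from_koc(koc, f_oc):
    """Sorbent-water partition coefficient for a sorbent with organic carbon fraction f_oc (Eq. 1)."""
    return f_oc * koc

log_kow_phenanthrene = 4.57          # commonly cited value; treat as approximate
koc = koc_from_kow(10 ** log_kow_phenanthrene)
kp = kp_from_koc(koc, f_oc=0.02)     # e.g., a sediment with 2% organic carbon
print(f"Koc ~ {koc:.3g} L/kg OC, Kp ~ {kp:.3g} L/kg sediment")
```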
Natural waters can be partitioned into particulate and dissolved phases by operational means of filtration, and the definitions of dissolved and particulate are operational. The partition parameters discussed in the preceding paragraphs, Kp and Koc, are based on the simplest partitioning process, two phases, and this works reasonably well for many soil-water systems and sediment-water systems. Three lines of evidence led Brownawell (32) and Brownawell and Farrington (43,44) to hypothesize that at least a three-phase partitioning model is needed to model the distributions of many hydrophobic compounds in interstitial waters of sediments: a) experimental evidence showing sorption of hydrophobic organic compounds to colloidal organic matter (45,46); b) high dissolved organic carbon (DOC) concentrations in the interstitial waters of marine sediments (47-51); and c) the fact that 50 to 90% of the interstitial water DOC is colloidal (48,52,53). Field and laboratory evidence from several studies with PCBs has established that partitioning of hydrophobic compounds to colloids or very small particles must be considered in many natural water systems to account for the observed distributions of hydrophobic compounds in the dissolved and particulate fractions of water samples (43,44,54,55). Nomographs similar to those of Figure 2 (32) have been presented (46,55,56) and are useful in providing first-order assessments of the physical chemical state of a given hydrophobic pollutant in coastal, estuarine, riverine, lacustrine, and interstitial waters.
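One simple way to express the three-phase idea is the fraction of the filter-passing compound bound to colloids, assuming the same Koc describes sorption to colloidal organic carbon. The sketch below illustrates the kind of relationship shown in the Figure 2 nomograph but is not taken from it; the functional form, the Koc, and the colloid concentrations are all assumptions for illustration.

```python
# Minimal sketch of the three-phase idea: the fraction of a filter-passing
# ("dissolved") compound that is bound to organic colloids, assuming the same
# Koc describes sorption to colloidal organic carbon. Illustrative only.

def fraction_on_colloids(koc, colloid_oc_kg_per_L):
    """f_c = Koc*C_coll / (1 + Koc*C_coll), with C_coll in kg organic carbon per litre."""
    x = koc * colloid_oc_kg_per_L
    return x / (1.0 + x)

koc = 10 ** 5.5                      # hypothetical hydrophobic compound (log Koc = 5.5)
for doc_mg_per_L in (1, 10, 50):     # colloidal organic carbon, mg C/L
    c = doc_mg_per_L * 1e-6          # convert mg/L to kg/L
    print(doc_mg_per_L, "mg C/L ->", round(fraction_on_colloids(koc, c), 2), "bound to colloids")
```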
The association of a significant proportion of the interstitial water hydrophobic compounds such as the PCBs with a colloid fraction has important implications for the assessment of the flux of these compounds across the sediment-water interface (32). First-principle arguments indicate that hydrophobic compound interactions with colloids will also be important in modeling the environmental behavior of compounds that enter estuarine waters across a salinity and particulate matter gradient as material is carried down river, or in the case of discharge of an industrial or municipal sewage effluent rich in organic matter content. Much of what is known about colloidal organic matter in natural waters is descriptive in nature (51,57,5). The relatively new findings and hypotheses about the importance of hydrophobic compound interactions with colloidal organic matter should add to previous observations and hypotheses concerning the importance of macromolecular organic matter and colloidal organic matter in contemporary biogeochemical cycles and stimulate new research efforts on colloids in natural water systems.
Biological Uptake, Release, Metabolism, and Distribution
Equilibrium Considerations
The concept of treating the relationship between chlorinated pesticides in aquatic organisms and their aqueous habitat as a pseudoequilibrium process was first proposed by Hamelink et al. (59). Stated in simple terms, exchange across membrane surfaces (e.g., gills) controls partitioning of a pollutant between the organism and water. The concept was shown to be consistent with data for DDT family compounds and PCBs in oceanic ecosystems (8,39) and was widely adapted for understanding of the relationship between hydrophobic pollutants such as PCBs, chlorinated pesticides, and PAHs in organisms and in waters of the organisms' habitat (3,8,33,60). Aquatic mammals and birds, being air-breathing organisms, were exceptions to the concept.
Thus, direct uptake and release of organic pollutants from and to water joined food transfer and excretion as modes of input/output for pollutants in aquatic organisms and became the dominant modes in conceptual thinking and data interpretation. A procedure for assessing bioaccumulation potential under equilibrium conditions was needed, and Neeley et al. (61) were among the first to introduce the useful concept that Kow could be used for this purpose.
(Figure 2 caption: Nomograph for relationships between the fraction (fc) of a given hydrophobic compound sorbed to organic colloids, organic colloid concentration, and log Koc of the compound. From Brownawell (32).)
A compilation of data depicting the relationship between Kow and Kb (bioaccumulation = concentration in organism/concentration in water) shows that the relationship
is predictive to a first approximation within a factor of 2 using the following equation:
log Kb = 1.00 log Kow - 1.32 (3)

There are also significant departures from the predictive relationship (10,63,64). These departures could be due to nonequilibrium conditions at the time of the measurements used to calculate Kb. Eq. (3) may not apply for hydrophobic compounds with log Kow > 6, i.e., superlipophilic compounds (62). Connell and Miller (65) submit that the equation must be modified to take into account molecular shape, e.g., planar and nonplanar PCBs. Hawker and Connell (63) have noted that little attention has been paid to the time required to attain equilibrium, with the result that some of the data in the literature may underestimate Kb. There is also the problem of the physical form of the total of the organic chemical concentration present in the water. However, the influence of dissolved organic material in the water and colloidal organic matter cannot be ignored. It may well be that only that portion of the material in the water not associated with the organic matter is available for partitioning with the organism. Experiments with the uptake of nonionic hydrophobic organic compounds such as petroleum compounds and PAHs by organisms have demonstrated reduced uptake when DOC/humic-like material is present in comparison to aqueous systems with low dissolved organic matter (DOM) concentrations (66,67). Another important physical-chemical consideration is sequestering or entrapment of an organic pollutant within a particle. Equilibrium partitioning calculations for sediment-water interactions for the hydrophobic compounds using Koc indicated what appeared to be different sources and physical chemical forms for the PAHs. Two types of PAHs were thought to be present: those found principally sequestered or entrapped in soot and pyrolytic source particles but extractable with normal solvent extraction procedures; and those entering aquatic ecosystems in water accommodated form or sorbed in a relatively easily partitioned form with particulate matter, e.g., petroleum from spilled oil or waste oil in effluents. These observations and hypotheses are consistent with observations concerning physical-chemical speciation of PAHs in an estuarine ecosystem (70) and the recent data on bioavailability of PAHs in a coastal sediment (71). The preceding are but a few examples illustrating that the accurate application of the equilibrium partitioning concept or other approaches to estimating bioaccumulation of a pollutant organic compound from a concentration of the compound measured or estimated for an organism's habitat must take into account the physical-chemical form of the compound.
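A worked example of Eq. (3) is given below as a first approximation only; the log Kow values are commonly cited approximations, and the caveats discussed above (colloids, sequestered forms, superlipophilic compounds, nonequilibrium) all limit its accuracy.

```python
# Worked example of Eq. (3), log Kb = 1.00 log Kow - 1.32, as a first
# approximation of bioaccumulation. The log Kow values are commonly cited
# approximations, not measured values from this paper.
def kb_from_kow(log_kow):
    return 10 ** (1.00 * log_kow - 1.32)

for name, log_kow in [("naphthalene", 3.3), ("phenanthrene", 4.6),
                      ("2,2',4,4',5,5'-hexachlorobiphenyl", 6.9)]:
    print(f"{name}: Kb ~ {kb_from_kow(log_kow):.2e}")
```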
Kinetic Considerations
The need for more accurate, predictive capabilities beyond those of equilibrium considerations seems apparent. Hawker and Connell (63) and Connell (72) have offered an insightful review of the relationship between kinetic considerations and equilibrium considerations. Eqs. (4) to (7) are from their work:

dCb/dt = k1Cw - k2Cb (4)

where Cb is the concentration of pollutant in an organism; Cw is the concentration of pollutant in the water phase; k1 is a first-order uptake rate; k2 is a first-order release rate or clearance rate from the organism; and t is time.
Water in the natural environment is a very large component relative to the organism, and thus Cw is assumed to be constant and not affected by bioaccumulation or release in this treatment of the equations. It is easily shown that at equilibrium

Cb/Cw = k1/k2 (5)

and since Cb/Cw = Kb, then

Kb = k1/k2 (6)

Substituting into Eq. (3), it can be shown that k1/k2 = 0.048 Kow. Thus, k1 and k2 are related to Kow in the equations, and recent assessments of available data show that this is indeed the case (63). There is an inverse correlation between log k2 and log Kow (r = 0.974) and a direct correlation between log k1 and log Kow (r = 0.974) up to log Kow = 7. Equations of the form

log k1 = 0.337 log Kow - 0.373 (7)

can be written and adequately describe most of the data in the literature. The fault with Eq. (7) is that it requires ever increasing k1 with increasing Kow, and it is not valid above log Kow = 6, i.e., for the extremely hydrophobic or superlipophilic compounds. Hawker and Connell (63) submit that there must be an upper limit to k1, in part based on the efficiency with which a compound can be transferred across membranes, which is in turn related to compound shape and size, and in part on the ventilation rate of organisms, which must have an upper limit for tissue such as gills.
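The following sketch ties Eqs. (3), (6), and (7) together numerically: k1 is estimated from Kow with Eq. (7), and k2 then follows from Kb = k1/k2 = 0.048 Kow. The log Kow input is illustrative, and the rate units are whatever units the underlying regressions were fitted in.

```python
# Minimal sketch linking Eqs. (3), (6), and (7): k1 from Kow, then k2 from
# Kb = k1/k2 = 0.048*Kow. The log Kow value is an illustrative input.
def k1_from_kow(log_kow):
    """Eq. (7): log k1 = 0.337 log Kow - 0.373 (valid up to about log Kow = 6)."""
    return 10 ** (0.337 * log_kow - 0.373)

def k2_from_kow(log_kow):
    """From Kb = k1/k2 and Eq. (3): k2 = k1 / (0.048 * Kow)."""
    return k1_from_kow(log_kow) / (0.048 * 10 ** log_kow)

log_kow = 5.0
k1, k2 = k1_from_kow(log_kow), k2_from_kow(log_kow)
print(f"k1 ~ {k1:.3g}, k2 ~ {k2:.3g}, Kb = k1/k2 ~ {k1 / k2:.3g}")
```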
First-order rate kinetics are useful in explaining some of the data concerning uptake and release of pollutants. An example illustrates this point. Two small no. 2 fuel oil spills occurred in the Cape Cod Canal 3 years apart but at the same time of the year, within 1 week of each other [nearly duplicate experiments (73)].
Biological half-lives derived from Eqs. (8) and (9) were determined from data exemplified by Figure 3. A compilation of half-lives is given in Table 4. Review of the data for uptake and release of petroleum compounds, PAHs, and PCBs by bivalves indicates clearly that the relatively simple equations of the form of Eqs. (8) and (9) are applicable during the first 14 to 30 days. There are discernible departures from the semilog plots after 14 to 21 days in the example shown in Figure 3, depending on the compound considered. Concentrations of pollutant chemicals in the habitat and duration of exposure are known to be important considerations influencing the rate of uptake and release of pollutants (60,77). For example, it has been suggested that a combination of high exposure concentrations and longer time of exposure, as would be expected in urban harbor areas, results in much slower overall release of pollutant hydrocarbons when bivalve mollusks are moved to cleaner waters. The Stegeman and Teal (78) multiple-compartment model approach to explaining data from experiments and field observations derives from the fact that there are many tissue types in bivalves. It is not difficult to visualize this model, and it is supported by several studies (60,73,75,79). An example would be a simple three-compartment model bivalve of gills, circulatory fluid, and energy storage reserve lipids. Initial uptake across the gills is rapid, followed by slightly less rapid transfer to circulatory fluid, followed by much slower transfer to, and accumulation in, storage lipid reserves. Long-term exposure would result in accumulation of pollutant in lipid energy reserves until equilibrium or saturation of the storage capacity is attained. Transfer of the bivalve to clean water or removal of the pollutant from the water of its habitat reverses the process. Exchange of the pollutant from gill tissue to water is rapid, mobilization of the pollutant in the energy storage lipid is slower, and this accounts for an initial rapid release followed by much slower release in the longer term. Equations for assessment of multiple-compartment models have been described by Mackay and Hughes (80) and involve steady-state assumptions in all but the target compartment, i.e., the compartment for which the flux of compound is of interest. Further elaboration of those models is beyond the scope of this synopsis.
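For first-order release of the kind used to derive the Table 4 half-lives, the half-life follows directly from the clearance rate, t1/2 = ln 2/k2; the short sketch below uses a hypothetical depuration rate purely for illustration.

```python
# Minimal illustration of how a biological half-life relates to a first-order
# release (depuration) rate: Cb(t) = Cb0 * exp(-k2*t), so t_1/2 = ln(2)/k2.
# The rate constant used here is hypothetical.
import math

def half_life_days(k2_per_day):
    return math.log(2) / k2_per_day

def remaining_fraction(k2_per_day, days):
    return math.exp(-k2_per_day * days)

k2 = 0.05                                  # hypothetical depuration rate, 1/day
print(f"t1/2 ~ {half_life_days(k2):.1f} days")
print(f"fraction remaining after 30 days ~ {remaining_fraction(k2, 30):.2f}")
```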
It is well established that the physiological status of organisms (e.g., prespawning, spawning, postspawning), the temperature of the habitat, and food availability, as well as exposure concentration and duration, influence the uptake, retention, and release of pollutant organic chemicals by bivalve mollusks (60,77,79).
Metabolism of PAHs and xenobiotics such as PCBs is thought to be much less active for bivalves than for fish, crustacea, or polychaetes (40,81). However, there are reports that bivalve tissues contain enzymatic activity related to metabolism of PAHs, and perhaps similar compounds (82,85). This is consistent with data showing isomer-specific changes in relative abundance of C2- and C3-alkylated phenanthrenes in the latter stages of release of no. 2 fuel oil compounds by mussels contaminated by an oil spill (73).
A fallacy of the use of first-order kinetics in this situation is readily apparent if we derive Eq. (10) from Eq. (4):

Cb = (k1/k2) Cw (1 - e^(-k2 t)) (10)

Theoretically, equilibrium can only be attained after infinite time. It is useful to circumvent this problem and work with the concept of a close approach to equilibrium, e.g., 0.99 of the equilibrium value, as has been used by Hawker and Connell (63). They have then derived an equation that relates the time to equilibrium, teq, to Kow:

log teq = 0.663 log Kow - 0.284 (11)

This allowed them to predict that compounds with log Kow < 6 attain equilibrium within 1 year. Compounds for which log Kow > 6 require too much time to approach equilibrium for reasonable measurements to be made. Hawker and Connell (63) also proposed a term ts, the time to significant bioaccumulation, with significant bioaccumulation being 1% of the bioaccumulation at equilibrium. They then used an equation relating ts to log Kow to calculate that compounds with a log Kow of 10 are bioaccumulated significantly only after a minimum of 0.5 years.
Aquatic organisms bioaccumulate a significant amount, i.e., 1% of the equilibrium bioaccumulation concentration, of compounds with log Kow of 13 after a minimum of 50 years. The latter time exceeds the lifetime of most aquatic organisms of concern, and the former time exceeds seasonality in temperate zones. The physiological status and other factors described previously that influence concentrations of organic chemical pollutants in an organism will change conditions of the organism or habitat well within the time required to reach significant bioaccumulation for hydrophobic pollutants with log Kow of approximately 8 or 9 or greater. This limits the predictive capabilities of the kinetic equations described above.
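A worked example of Eq. (11) is sketched below. The time unit of teq is not stated in the excerpt; hours are assumed here because that assumption reproduces the stated conclusion that compounds with log Kow below about 6 approach equilibrium within roughly a year.

```python
# Worked example of Eq. (11), log t_eq = 0.663 log Kow - 0.284. The time unit
# is assumed to be hours (not stated in the excerpt); treat it as an assumption.
def t_eq_hours(log_kow):
    return 10 ** (0.663 * log_kow - 0.284)

for log_kow in (4, 5, 6, 7):
    hours = t_eq_hours(log_kow)
    print(f"log Kow = {log_kow}: t_eq ~ {hours:.0f} h ~ {hours / 8760:.2f} years")
```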
Metabolism
Organisms other than bivalves are capable of reducing concentrations of hydrophobic organic chemical pollutants by exchange with water. In fact, some of the initial equilibrium partitioning work related to fish. In addition, fish, crustacea, and polychaetes have active enzyme systems capable of metabolizing substantial portions of bioaccumulated PAHs, some PCBs, and similar compounds (39,40,71,81). Research concerned with metabolism of organic chemical pollutants has expanded greatly in the past decade, and much detailed information is becoming available. The scope of this paper cannot provide many details. It suffices to state the obvious: concentrations in tissues, distributions among tissues, and excretion of metabolites can be a function of a variety of conditions controlling enzymatic activity. These conditions include spawning, nutritional status, conditions and duration of exposure to organic pollutants, mix of pollutants, and life cycle stage of the animal.
Exposure to xenobiotic organic compounds or PAHs has been shown to induce activity of mixed-function oxidases capable of metabolizing these xenobiotics. Extent of enzyme activity can be species-specific even for related species. Reichert et al. (84) exposed two species of deposit-feeding amphipods to sediment-associated radioactively labeled benzo(a)pyrene (BaP). One species converted a higher proportion to metabolites, even though both species bioaccumulated the compound. The explicit lesson is that there are drawbacks to mathematical modeling of uptake, retention, metabolism, and release by extrapolating data from one species to another. This must be kept in mind when pragmatic approaches to the problems of a complex environment with thousands of species force such extrapolations.
Another important example involves interactive effects of one chemical pollutant on another. Stein et al. (85) have shown that benzo(a)pyrene and PCBs interact and influence the extent of uptake and metabolism of each. Simultaneous exposure of English sole (Parophrys vetulus) to radioactively labeled BaP and PCBs increased concentrations of BaP-derived metabolites in the whole fish and decreased concentrations of PCBs and metabolites in some tissues and bile relative to results from separate exposure to radioactively labeled compounds in sediment-associated form.
Transfer in Food As a Source of Organic Pollutants
The water-organism partition hypothesis of Hamelink et al. (59) contributed significantly to the understanding of aquatic pollution and bioaccumulation and has prevailed for 15 years (39,40,86). This hypothesis evolved from experimental evidence in the laboratory and explained observations in the field obtained during the late 1960s and early 1970s when there were significant discharges of pollutant organic chemicals via effluents and transfer to water in aquatic ecosystems from atmospheric transport and runoff from land.
Several decades of input have resulted in accumulation of some compounds such as PAHs and PCBs and chlorinated pesticides in sediments (9,39,40,87,88). Several researchers have demonstrated that organisms living in or on polluted sediments can bioaccumulate the pollutants (71,88-91). Almost none of the experimental designs or field observations reviewed by these authors allowed the relative importance of water-organism partitioning versus sediment ingestion, or water-organism partitioning versus food ingestion, to be sorted out in terms of contributions to bioaccumulated pollutant. In many cases, release of the pollutant from sediment by desorption was thought to cause elevated concentrations in the water followed by organism-water partitioning to achieve bioaccumulation. Oliver (92) has presented estimates (Table 5) for Lake Ontario that loadings of chlorinated hydrocarbons by desorption from sediments are in the same order of magnitude as are Niagara River inputs. An elegant experiment by Rubinstein et al. (89), involving a multiphase experimental exposure design of a demersal fish (Leiostomus xanthurus) feeding on a polychaete (Nereis virens) with both fish and polychaete exposed to PCB-contaminated sediment, indicated that ingestion of contaminated prey can contribute substantially to bioaccumulation in the fish. These results have very important implications for future efforts at modeling biogeochemical behavior, bioaccumulation, and, indeed, risk assessment for hydrophobic organic chemical pollutants in aquatic ecosystems. If a primary source of input to an aquatic ecosystem is slow release from sediments that have been polluted with inputs from various sources that have been reduced or eliminated in more recent times, then it is plausible that uptake by benthic organisms such as small bivalves, polychaetes, and crustacea followed by predation by larger organisms such as fish may be a significant source of bioaccumulated chemicals for the larger organism. In essence, food web transfer among benthic and epibenthic species may be as important as organism-water partitioning when aquatic ecosystems switch from top down (from effluent and atmospheric inputs) to bottom up (releases or transfers from polluted sediments) as regulatory controls reduce the top down sources of input. Even for aquatic ecosystems where inputs continue from effluents, the atmosphere, and runoff, there may be some ecosystems where a pseudo-steady-state approximation exists or is being approached, and food web transfer is now important for some species.
Mathematical Models of Coastal Ecosystems: From Inputs to Concentrations in Edible Tissues of Marine Organisms
It is now possible to incorporate the knowledge and predictive capabilities reviewed in earlier sections of this paper into dynamic models of entire ecosystems involving geophysical fluid dynamics, turbulence, sediment transport, and life-cycle stages of various species and arrive at some predictive capabilities with respect to contamination of edible tissues from marine organisms. O'Connor et al. (93) and Spaulding et al. (94), among others, have presented such models.
One example suffices for the purpose of this paper. O'Connor et al. (95) have presented a mathematical model for the distribution and movement of the chlorinated pesticide kepone in the James River Estuary. Figure 4 shows the food chain portion of the model. The comparisons between the calculated kepone concentrations and the measured kepone concentrations in white perch, Atlantic croaker, and striped bass are given in Figure 5. The agreement seems reasonable to a first approximation, although reasonable agreement depends on the use of the model and the degree of uncertainty that will be accepted, and this probably will vary depending on the perceived or real importance of relative risks to aquatic biota or human health. The types of mathematical models exemplified by the work of O'Connor et al. (93,95), Thomann (96), Spaulding et al. (94), and references cited therein are becoming an essential part of environmental risk assessment for issues ranging from oil spills to remedial action plans for Superfund sites in coastal estuaries (94,97).
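To make the food-chain portion of such models concrete, the sketch below shows a generic two-route steady-state balance: uptake from water plus uptake from contaminated prey against first-order loss. It is not the O'Connor et al. kepone model, and every parameter value is hypothetical.

```python
# Generic sketch of a food-chain step in such models: body burden driven by
# uptake from water plus uptake from contaminated prey, balanced against
# first-order loss. All parameter values are hypothetical placeholders.

def steady_state_concentration(k1, k2, c_water, feeding_rate, assimilation, c_prey):
    """Cb at steady state from dCb/dt = k1*Cw + a*F*Cprey - k2*Cb = 0."""
    return (k1 * c_water + assimilation * feeding_rate * c_prey) / k2

c_prey = steady_state_concentration(k1=500.0, k2=0.02, c_water=1e-6,
                                    feeding_rate=0.05, assimilation=0.5, c_prey=0.0)
c_fish = steady_state_concentration(k1=200.0, k2=0.01, c_water=1e-6,
                                    feeding_rate=0.03, assimilation=0.6, c_prey=c_prey)
print(f"prey ~ {c_prey:.3g}, fish ~ {c_fish:.3g} (arbitrary, self-consistent units)")
```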
General Discussion
There has been substantial progress during the 1970s and 1980s in all aspects of biogeochemical research related to the issues of bioavailability and disposition of toxic organic chemicals: solubility, sorption, uptake, metabolism, retention, release/excretion. Predictive equations have been derived or have evolved empirically that tie molecular structural characteristics or properties to biogeochemical behavior. Coupling of the portions of biogeochemical models that deal with transfer back to people via consumption of living aquatic resources and potential impacts on human populations, e.g., consumption of carcinogens in edible portions of fish tissues (98), presents an important and powerful tool for realistic regulation of pollution for the protection of human health.
However, caution must be exercised that the elegance and complexity of a series of coupled mathematical equations do not evoke a false sense that accurate predictive capabilities of wide-ranging applicability are a proven reality in either the scientific community or in the policy, regulation, and management communities concerned with aquatic pollution problems. Thus far, the concepts and hypotheses reviewed briefly in this paper have been tested on relatively few chemicals and relatively few biota and ecosystems and, with few exceptions, for relatively short periods of time of days to months. Reuber et al. (99), in the closing statement of a recent paper concerned with chemical equilibria and transport at the sediment-water interface, stated, "Finally, models as described here will remain of limited value until they can be applied and validated in real situations." Nevertheless, an optimistic view is in order because research is gaining on the problems, and the science has evolved from mainly a descriptive endeavor to quantitative approaches involving a mature healthy mix of theory, experimentation, field observation, research, and monitoring.
Pulmonary vasodilators: beyond the bounds of pulmonary arterial hypertension therapy in COVID-19
Pulmonary arterial hypertension (PAH) and the novel coronavirus (SARS-CoV-2) disease COVID-19 are characterized by extensive endothelial dysfunction and inflammation leading to vascular remodeling and severe microthrombi and microvascular obliterative disease. It is hypothesized that patients with underlying lung disease, like PAH, represent a high-risk cohort in this pandemic. However, reports of COVID-19 in this cohort of patients have been scarce, and an observational survey showed that the disease was relatively well tolerated. We postulate that specific PAH vasodilators may offer some protection and/or advantage in the case of concomitant COVID-19. Here we review the literature describing mechanisms of action for each of the broad categories of PAH therapy, and offer potential hypotheses about why these therapies may impact outcomes in COVID-19.
A novel coronavirus (SARS-CoV-2) was identified in Wuhan, China in December 2019 and rapidly spread throughout the world, leading to a global pandemic now impacting more than 2 million individuals. [5][6][7][8][9][10] The SARS-CoV-2 virus leads to clinical disease (COVID-19) which predominantly affects the respiratory system. In its most severe form, COVID-19 produces severe acute respiratory distress syndrome (ARDS), and frequently involves a viral sepsis picture, which may ultimately result in multiorgan dysfunction. 5,6 Widespread multisystem organ failure is hypothesized to result from immune-mediated cytokine release from rapid replication of the virus upon entry into the alveolar epithelial cells. 6 Patients with COVID-19 have demonstrated pulmonary vascular microthrombi, diffuse pulmonary interstitial fibrosis, alveolar hemorrhage, and inflammatory infiltrates at autopsy. 9,10 These findings likely contribute to severe hypoxemia driven, in part, by ventilation-perfusion (VQ) mismatch.
Interestingly, there is overlap of some of the pathologic and physiologic findings in PAH and COVID-19, including microvascular thrombi, endothelial dysfunction and severe inflammatory response. 1,2,11,12 Although patients with PAH may theoretically represent a high-risk category if infected with COVID-19, given early reports that underlying cardiopulmonary disease increases morbidity and mortality, [6][7][8][9][10] case reports of COVID-19 infection in this group are rare. In fact, the only publication evaluating PAH during the pandemic came from the Pulmonary Hypertension Association, where 13 PAH patients infected with COVID-19 were described. Interestingly, despite the aforementioned risks, only three were intubated and one death was reported. 13 In light of this intriguing preliminary evidence, we postulate that medications targeting PAH may offer some protection and/or advantage in the case of concomitant COVID-19. Here we review the literature describing mechanisms of action for each of the broad categories of PAH therapy, and offer potential hypotheses about why these therapies may impact outcomes in COVID-19.
Nitric oxide donors
To best understand the impact of NO donors on those afflicted by COVID-19, we must first understand the pathophysiology of ARDS. Patients with COVID-19 pneumonia may not have the typical ARDS phenotype despite meeting Berlin criteria for the syndrome. Anecdotal observations describe a dissociation between their relatively well preserved lung mechanics/compliance and the severity of hypoxemia, 14 suggesting the source of the acute and rapidly progressive hypoxemia is in the capillaries (profound endothelial dysfunction) and not in the alveoli. 15 Thus, NO may serve as a more suitable treatment in COVID-19-induced respiratory failure compared to "traditional" ARDS.
Classically, ARDS presents clinically with acute respiratory failure with non-cardiogenic pulmonary edema, mild pulmonary hypertension, and progressive systemic hypoxemia (PaO2/FiO2 ratio of <200 mmHg). 16 In ARDS, there is hypoxia-induced pulmonary vasoconstriction, which is perhaps intensified by thromboxane A2 and platelet activating factors. These together limit pulmonary blood flow to poorly oxygenated segments of the lungs and further worsen VQ mismatch, thereby contributing to alveolar dead space and hypoxemia. [17][18][19] The collective result of these changes triggers elevations in PVR, ultimately leading to pulmonary hypertension and right ventricular (RV) dysfunction. In the setting of COVID-19, pathophysiologic changes triggered by ARDS may be intensified by the use of non-selective arterial vasoconstrictors required for hemodynamic support and by the deleterious effects of mechanical ventilation, including ventilator-associated lung injury, bio-trauma, and elevated positive end-expiratory pressure.
There are three potential benefits to NO in the patient with COVID-19 as discussed here: (1) direct treatment of endothelial dysfunction and VQ mismatch, (2) improvement in cardiac reserve, and (3) potential direct anti-viral properties. Treatment strategies to combat and reverse the effects of pulmonary vasoconstriction are supportive efforts directed at managing ARDS in general, but additional direct therapies may be more pertinent to the COVID patient due to the aforementioned unique vascular manifestations in the affected lungs. NO and NO synthetase isoforms play an important supportive role in pulmonary vascular physiology. 20 The beneficial effects of NO involve specific mechanisms which enhance endothelial and smooth muscle relaxation and inhibit platelet aggregation, as well as direct effects on the immune system. [21][22][23] Inhaled NO (iNO) improves oxygenation and VQ mismatch in ARDS patients selectively without systemic vasodilation. 24 Selective vasodilation within the pulmonary vascular bed is due to its binding rapidly to hemoglobin, hence inactivating it before it reaches the systemic circulation. 25 Subsequent studies in ARDS, however, were not uniformly positive. While there is a transient improvement in oxygenation, the effects are not long lasting. [26][27][28] According to a random-effects meta-analysis, iNO did not reduce mortality in patients with severe (risk ratio (RR), 1.01; 95% CI, 0.78-1.32; p = 0.93, n = 329 patients, 6 trials) or mild to moderate hypoxemia (RR, 1.12; 95% CI, 0.89-1.42; p = 0.33, n = 740 patients, 7 trials). 27 From a cardiac standpoint, it has been demonstrated that COVID-19 may cause direct myocardial injury resulting in cardiogenic shock. 8,29 In the patient who may have cardiac toxicity as a result of SARS-CoV-2, iNO may potentially be beneficial, as it has been shown to improve RV systolic function in patients with severe ARDS, independent of change in pulmonary pressure. 30 This improvement in cardiac activity is noted for other disease processes as well, including those with acute RV myocardial infarction and cardiogenic shock. 31 iNO is already recommended for treatment of severe RV failure per the AHA guidelines. 32 iNO's cardiac benefits likely stem from both a reduction in PVR and improved gas exchange and oxygenation, resulting in enhanced right and left ventricular function, respectively. 31,33 Thus, NO donors could mitigate some of the hemodynamic abnormalities in severe COVID-19 infection.
From a direct immune perspective, it is known that NO is a part of the acute immune and inflammatory response system, and is necessary for healing via macrophage-induced NO upregulation. 34 In vitro studies have shown that the NO donor S-nitroso-N-acetylpenicillamine significantly inhibits the replication cycle of SARS-CoV in a concentration-dependent manner. 35,36 In a clinical study of SARS patients in 2004, iNO resulted in improved arterial oxygenation and a reduction in supplemental oxygen and the need for ventilatory support. 37 The same patients experienced a shorter duration of hospitalization and improvements in radiographic evidence of the disease. Based on the genomic similarities between the two coronaviruses, the data in SARS-CoV suggest there may also be a potential benefit in using iNO in the setting of COVID-19.
Endothelin receptor antagonists (ERAs)
To appreciate the potential theoretical role for ERA therapy in the patient with PAH and COVID-19, one must first appreciate the evolving landscape of pathophysiologic mechanisms presumed at play in COVID-19. The disease itself is characterized by respiratory failure secondary to acute lung injury evolving to ARDS. 5,6,38 However, in addition to the direct pathologic effects at the level of the pulmonary system, SARS-CoV-2 has been shown to induce a system-wide coagulopathy. 39 Pathologic specimens have shown microthrombi in the pulmonary vascular system and throughout the body, consistent with the hypercoagulable state SARS-CoV-2 appears to induce. 11,40 It is generally felt that SARS-CoV-2 produces system-wide inflammation and activation of deleterious cytokines, proposed to be responsible for both direct viral and immune-mediated pathologic changes, particularly evident in the pulmonary system and vasculature.
Endothelin-1 (ET-1) is a vasoconstricting polypeptide. In PAH, there is an imbalance between NO and ET-1, thereby leading to increased vascular tone and vascular remodeling. 41 ET-1 mRNA expression is increased in the presence of growth factors, cytokines, and other vasoactive substances. 42,43 It has been proposed that the ET-1-induced necroptotic pathways may be exacerbated by SARS-CoV-2 via RIP-3 activation and ORF-3a. 44 The ET-1 pathway also regulates other factors that may be associated with lung injury and ARDS in SARS, such as urokinase pathway-mediated defective extracellular matrix remodeling, 45 enhanced epidermal growth factor signaling, 46 and complement activation. 47 It is therefore suggested that ERAs may be a reasonable therapeutic consideration in COVID-19.
In addition, ET-1 causes significant downregulation of angiotensin-converting enzyme 2 (ACE2), similar to the effects seen with angiotensin II, and reduces myocyte ACE2 mRNA. 48 ACE2 is a critical enzyme of the renin-angiotensin-aldosterone system (RAAS) that is present in various tissues important in the regulation of cardiovascular function. This association may be crucial as SARS-CoV-2 enters lung cells by binding to ACE2 receptors. 6,49 Yet, there is much uncertainty regarding whether ACE2 expression may potentiate SARS-CoV-2 infectivity or be protective against severe infection. SARS-CoV-2 appears not only to gain initial entry through ACE2 but also to subsequently down-regulate ACE2 expression such that the enzyme is unable to exert protective effects in organs. 49 It has been postulated but unproven that unabated angiotensin II activity may be in part responsible for organ injury in COVID-19. After the initial engagement of the SARS-CoV-2 spike protein, there is subsequent downregulation of ACE2 abundance on cell surfaces. Downregulation of ACE2 activity in the lungs facilitates the initial neutrophil infiltration in response to endotoxins. In experimental mouse models, exposure to the SARS-CoV-1 spike protein induced acute lung injury, which is limited by RAAS blockade. 50 These hypotheses have prompted trials to test whether RAAS blockers can be used as treatment for COVID-19. Treatment with ERAs could have a direct effect on ACE2 expression and could theoretically be beneficial in this disease as well.
Patients with severe ARDS can develop pulmonary fibrosis, leading to extensive and irreversible deterioration of pulmonary function that requires lung transplantation. 51 ET-1 signaling has emerged as a potential target for pharmacological intervention in the treatment of fibrosis. 52,53 ET-1 promotes fibroblast differentiation to a myofibroblastic cell type, inducing the expression of proteins that contribute to a contractile phenotype, including alpha smooth muscle actin. Moreover, ET-1 has been shown to initiate alveolar epithelial cell transition into fibroblast-like cells, a process termed epithelial-mesenchymal transition, and thereby contribute to pulmonary fibrosis. Nevertheless, the results of a number of clinical trials examining the efficacy of ERAs in the context of these diseases did not achieve the primary objectives, a reduction in morbidity/mortality, although they showed a tendency towards those goals. The ARTEMIS-IPF trial 54 was designed to evaluate whether ambrisentan reduces the rate of idiopathic pulmonary fibrosis progression; however, it was terminated early because an interim analysis indicated a low likelihood of showing efficacy for the primary endpoint. Ambrisentan-treated patients had more disease progression (p = 0.01) and respiratory hospitalizations (p = 0.007). The complex nature of the fibrotic disease, involving multiple factors, or alternative plausible explanations, such as the establishment of points of no return in the onset of fibrosis beyond which pharmacological intervention offers little or no beneficial effect, might also contribute to the lack of positive results. ERAs, if initiated early in the process of acute lung injury, may have a potential preventive effect against advanced disease and hypoxemia.
Prostacyclin analogs
Prostacyclins resemble endogenous prostacyclin (PGI2) and are most commonly used for treatment of PAH. 2 Therapy is available in oral, inhaled, and intravenous forms. Prostacyclin is produced predominantly by endothelial cells and induces potent vasodilation of all vascular beds. 55 Prostacyclin binds to its receptor (a G-protein coupled receptor) found on the surface of vascular smooth muscle and platelets, activates cyclic adenosine monophosphate (cAMP), and produces potent endogenous inhibition of platelet aggregation, vascular smooth muscle relaxation, and vasodilation of the pulmonary arteries. It also appears to have both cytoprotective and antiproliferative properties.
The rationale for prostacyclin use in COVID-19-associated ARDS is three-fold. First, inhaled prostacyclin therapy has been used in the treatment of ARDS and has been shown to improve oxygenation and VQ mismatch. 56,57 While it has not been associated with improved patient outcomes and is not routinely recommended, it may be used in severe, life-threatening hypoxemia refractory to conventional ARDS management, such as has been seen in COVID-19. Much regarding the pathophysiology of severe COVID-19 respiratory failure remains unknown, but a profound hypoxemia, out of proportion to lung mechanics, is observed, 14 which is believed to be related to vascular injury and microthrombi. Similarly to NO, prostacyclins' potent endothelial effects, such as prevention of vasoconstriction and platelet aggregation, may be particularly important in this viral-induced ARDS. They have the additional advantage over iNO that they do not require special equipment and may be directly administered through a standard ventilator (closed circuit).
The second potential benefit of prostacyclin therapy in the management of COVID-19 is to mitigate direct SARS-CoV-2-associated coagulopathy. Microvascular thrombosis and large vessel thromboembolism have been described anecdotally and in case reports, 38,40,58 and abnormal coagulation parameters are associated with increased mortality. 39,59 Prostacyclin exerts an overall control of platelet aggregability. It inhibits platelet adhesion to exposed vascular subendothelium, but at much higher concentrations than those required to prevent platelet aggregation. Therefore, prostacyclins allow platelets to adhere to damaged vascular tissue and participate in the repair process, while at the same time preventing or limiting thrombus formation. Prostacyclin therapy combats the prothrombotic effects of endothelin and may mitigate the thrombosis in situ seen in PAH itself, and potentially in patients with COVID-19-associated respiratory illness. 60,61 Finally, prostacyclins enhance NO production, whose beneficial effects are described earlier in the manuscript. The downstream effects of prostacyclins and NO lead to the respective activation of cAMP and cyclic guanosine monophosphate (cGMP) in endothelial cells. Interestingly, these mediators intertwine the activities of prostacyclins and NO in a synergistic manner. 55 The bond between prostacyclins and NO is not expected to be different for COVID-19 compared to other pulmonary infections, but their interaction is important in inflammatory processes and endotoxemic stress conditions, where there is a significant increase in IFN-gamma and/or LPS concentrations in activated cells, especially monocytes/macrophages. 62 The NO antithrombotic effect on platelet aggregation is potentiated by prostacyclins, and their vasodilatory effect on vessels is additive. 55,60 This concerted relationship between NO and prostacyclin is also evident clinically, 63 as treatment with epoprostenol restores vasodilatory responses to NO in the pulmonary vasculature. In seven PAH patients who were primary non-responders to iNO, long-term intravenous administration of epoprostenol for a mean of 18 months abolished the lack of responsiveness to NO, causing significant improvement in pulmonary hemodynamics and improvement in oxygenation. Although the underlying mechanism for desensitization of pulmonary arterial tissue to NO is yet unknown, it is conceivable that changes in the contractility mechanisms, such as intracellular calcium concentration, play a role. The synergistic relationship between prostacyclins and NO described above on vascular function would likely benefit those afflicted by COVID-19, given the known endothelial dysfunction seen in this disease, which results both from direct viral endotheliitis and the indirect effect of circulating cytokines. Lastly and speculatively, prostacyclins and NO have important antiinflammatory effects, especially on monocyte/macrophage function, that may be beneficial in the COVID-infected patient 64 (Fig. 1).
New class of PAH therapy under evaluation: vasoactive intestinal peptide
Vasoactive intestinal peptide (VIP) is emerging as a critical regulator of tone and structural remodeling in the pulmonary circulation. 3,65 VIP is being developed as adjunctive therapy for PAH given its vasodilatory, inotropic, and lusitropic properties. Mice lacking the gene for VIP exhibit moderate to severe PAH characteristics, and administration of VIP to these animals attenuated vascular remodeling and RV hypertrophy. VIP also inhibits proliferation of pulmonary vascular smooth muscle cells in patients with idiopathic PAH. VIP's immunomodulatory effects on TNF-alpha, IL-1 beta, IL-10, and IL-6, which are believed to contribute to cytokine-mediated acute lung injury and ARDS, 65 could provide significant benefit for COVID-19 patients.
Future COVID-19 research
The medical emergency created by this viral pandemic has unified global actions to produce therapies that improve survival. Current and ongoing treatment paradigms are aimed at mitigating severe COVID-19 disease utilizing several key targets: (1) ARDS prevention and treatment; (2) mitigating post-ARDS lung fibrosis; (3) reversing or preventing coagulopathy and microthrombi; (4) enhancing anti-viral effects. PAH-specific therapeutics offer potential solutions to many of these key targets. In fact, there are now more than 10 clinical trials utilizing various PAH-specific therapeutics (iNO, inhaled epoprostenol, and VIP) in patients with severe respiratory failure associated with COVID-19 (clinicaltrials.gov), as well as a proposed scientific registry looking at the effects of COVID-19 on PAH patients. The designs of some of these innovative trials are highlighted below.
A double-blind, placebo-controlled study is being launched to assess the efficacy and safety of inhaled epoprostenol delivered via a dedicated delivery system (VentaProst from Aerogen Pharma) in subjects with COVID-19 requiring mechanical ventilation. Patients will receive continuous inhaled epoprostenol via mechanical ventilator at 3.5-30.5 ng/kg/min for 10 days. The primary endpoint will be efficacy and safety, with particular attention to reduction in the need for ECMO, time on ventilator, time in the intensive care unit, and prevention of hemodynamic collapse.
An open label study to assess the efficacy and safety of pulsed iNO was designed for subjects with COVID-19 requiring supplemental oxygen. It will evaluate whether iNO 20 ppm, utilized for 5-14 days, prevents progression of respiratory disease and the requirement of mechanical ventilation (Bellerophon). The proprietary INOpulse® technology utilizes high concentration pulses to ensure a precise and constant dose regardless of a patient's respiratory rate or inspiratory volume. The pulsatile technology enables dose titration and allows much higher doses/concentrations than currently available in hospital-based systems, as well as reduces the overall size of the therapeutic device. The potential benefits of prostacyclins and iNO in COVID-19 have been extensively discussed in this manuscript, and they are promising therapeutics in this disease.
Lastly, the efficacy of weekly VIP subcutaneous injections in the prevention of severe respiratory failure/ARDS will be evaluated in a randomized, double-blind, parallel group, phase 2 study in hospitalized subjects with COVID-19 requiring oxygen supplementation (PB1046 VANGARD COVID-19 study). The medication will be given for a maximum of four weeks and could be administered to outpatients as well.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by NIH K08HL148701 (EAB).
Bag of Color Features For Color Constancy
In this paper, we propose a novel color constancy approach, called Bag of Color Features (BoCF), building upon Bag-of-Features pooling. The proposed method substantially reduces the number of parameters needed for illumination estimation. At the same time, the proposed method is consistent with the color constancy assumption stating that global spatial information is not relevant for illumination estimation and local information (edges, etc.) is sufficient. Furthermore, BoCF is consistent with color constancy statistical approaches and can be interpreted as a learning-based generalization of many statistical approaches. To further improve the illumination estimation accuracy, we propose a novel attention mechanism for the BoCF model with two variants based on self-attention. The BoCF approach and its variants achieve results competitive with the state of the art, while requiring far fewer parameters, on three benchmark datasets: ColorChecker RECommended, INTEL-TUT version 2, and NUS8.
INTRODUCTION
Color constancy in general is the ability of an imaging system to discount the effects of illumination on the observed colors in a scene [1], [2]. When a person stands in a room lit by a colorful light, the Human Visual System (HVS) unconsciously removes the lighting effects and the colors are perceived as if they were illuminated by a neutral, white light. While this ability is very natural for the HVS, mimicking the same ability in a computer vision system is a challenging and under-constrained problem. Given a green pixel, one cannot assert whether it is a green pixel under a white illumination or a white pixel lit with a greenish illumination. Nonetheless, illumination estimation is considered an important component of many higher level computer vision tasks such as object recognition and tracking. Thus, it has been extensively studied in order to develop reliable color constancy systems which can achieve illumination invariance to some extent [1], [3].
The RGB image value ρ(x, y) at position (x, y) of an image can be expressed as a function of three key factors [3]: the illuminant distribution I(x, y, λ), the surface reflectance R(x, y, λ), and the camera sensitivity S(λ), where λ is the wavelength. This dependency is expressed as

ρ(x, y) = ∫_λ I(x, y, λ) R(x, y, λ) S(λ) dλ.    (1)

Color constancy methods [3], [4] aim to estimate a uniform projection of I(x, y, λ) on the sensor spectral sensitivities, i.e., the global illumination I of the scene. Recently, deep learning approaches, and Convolutional Neural Networks (CNNs) in particular, have become dominant in almost all computer vision tasks, including color constancy [5], [6], [7], [8], due to their ability to take raw images directly as input and incorporate feature extraction in the learning process [9]. Despite their accuracy in estimating illumination across multiple datasets [6], [10], [11], deploying CNN-based approaches on low computational power devices, e.g., mobile devices, is still limited, since most of the high-accuracy deep models are computationally expensive [6], [7], [8], which makes them inefficient in terms of time and energy consumption. Additionally, most of the available datasets for illumination estimation are rather small-scale [10], [12], [13] and hence not suitable for training large models. For this purpose, many state-of-the-art approaches [5], [6] rely on pre-trained networks to overcome this limitation. On the other hand, these pre-trained networks [9], [14] are originally trained for a classification task. Thus, they are usually agnostic to the illumination color. This makes their usage in color constancy counterintuitive, as the illumination information is distorted in the early pre-trained layers. An alternative approach is of course to reduce the number of model parameters in order to use existing datasets, as shallower models, in general, need fewer examples to learn. Furthermore, in [13], [15] it is argued that global spatial information is not an important feature in color constancy. The local information, i.e., the color distribution and the color gradient distribution (i.e., edges), can be sufficient to extract the illumination information [13]. Thus, using regular neural network configurations to extract deep features is counter-intuitive in this particular problem. To address these drawbacks and challenges, we propose in this paper a novel color constancy deep learning approach called Bag of Color Features (BoCF). BoCF uses Bag-of-Features Pooling [16], which takes advantage of the ability of CNNs to learn relevant shallow features while keeping the model suitable for low-power hardware. Furthermore, the proposed approach is consistent with the assumption that global spatial information is not relevant [13], [15] for color illumination estimation.
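To make the role of the estimated global illuminant concrete, the sketch below shows how such an estimate is typically used to correct an image with a diagonal (von Kries-style) transform. The function name and the unit-sum normalization convention are illustrative assumptions, not part of the proposed method.

```python
import numpy as np

def correct_white_balance(image, illuminant):
    # image: HxWx3 linear-RGB array; illuminant: length-3 global illuminant estimate.
    illuminant = np.asarray(illuminant, dtype=np.float64)
    illuminant = illuminant / illuminant.sum()       # unit-sum chromaticity
    corrected = image / (3.0 * illuminant)           # per-channel gain; neutral light -> gain of 1
    return np.clip(corrected, 0.0, 1.0)
```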
Bag-of-Features Pooling is a neural extension [16], [17] of the famous Bag-of-Features model (BoF), also known as Bag-of-Visual-Words (BoVW) [18], [19]. BoF models are widely used in computer vision tasks, such as action recognition [20], object detection/recognition, sequence classification [21], and information retrieval [22]. A BoF layer can be combined with convolutional layers to form a powerful convolutional architecture that is end-to-end trainable using the regular back-propagation algorithm [17].
The block diagram of the proposed BoCF model is illustrated in Figure 1. It consists of three main blocks: feature extraction block, Bag of Features block, and an estimation block. In the first block, regular convolutional layers are used to extract relevant features. Inspired by the assumption that second order gradient information is sufficient to extract the illumination information [13], we use only two convolutional layers to extract the features. In our experiments, we also study and validate this hypothesis empirically. In the second block, i.e., the Bag of Features block, the network learns the dictionary using back-propagation [17] over the non-linear transformation provided by the first block. This block outputs a histogram representation, which is fed to the last component, i.e., the estimation block, to regress to the scene illumination.
In most CNN-based approaches used to solve the color constancy problem [5], [6], [7], [8], fully connected layers are connected directly to a flattened version of the last convolutional layer output. This increases the number of parameters dramatically, as convolutional layer outputs usually have a high dimensionality. In the proposed method, we address this problem by introducing an intermediate pooling block, i.e., the Bag of Features block, between the last convolutional layer and the fully connected layers. The proposed model achieves comparable results to previous state-of-the-art illumination estimation methods while substantially reducing the number of needed parameters, by up to 95%. Additionally, the pooling process natively discards all global spatial information, which is, as discussed earlier, irrelevant for color constancy. Using only two convolutional layers in the first block limits the model to shallow features. These two properties make the proposed approach both consistent with and in full corroboration of statistical approaches [13].
To further improve the performance of the proposed model, we also propose two variants of a self-attention mechanism for the BoCF model. In the first variant, we add an attention mechanism between the feature extraction block and the Bag of Features block. This mechanism allows the network to dynamically select parts of the image to use for estimating the illumination, while discarding the remaining parts. Thus, the network becomes robust to noise and irrelevant features. In the second variant, we add an attention mechanism on top of the histogram representation, i.e., between the Bag of Features block and the estimation block. In this way, we allow the network to learn to adaptively select the elements of the histogram which best encode the illuminant information. The model looks over the whole histogram after the spatial information has been discarded and generates a proper representation according to the current context (histogram). The introduced dynamics will be shown in the experiments to enhance the model performance with respect to all evaluation metrics and across all the datasets.
The main contributions of the paper are as follows:
• We propose a novel CNN-based color constancy algorithm, called BoCF, based on Bag-of-Features Pooling. The proposed model is both shallow and able to achieve competitive results across multiple datasets compared to the state of the art.
• We establish explicit links between BoCF and prior statistical methods for illumination estimation and show that the proposed method can be framed as a learning-based generalization of many statistical approaches. This fills the gap and provides the missing links between CNN-based approaches and static approaches.
• We propose two novel attention mechanisms for BoCF that can further improve the results. To the best of our knowledge, this is the first work which combines an attention mechanism with Bag-of-Features Pooling.
• The proposed method is extensively evaluated on three datasets, leading to competitive performance with respect to the existing state of the art while substantially reducing the number of parameters.
The rest of this paper is organized as follows. Section 2 provides the background of color constancy approaches and a brief review of the Bag-of-Features Pooling technique and the attention mechanism used in this work. Section 3 details the proposed approach along with the two attention-mechanism-based variants. Section 4 introduces the datasets and the evaluation metrics used in this work along with the evaluation procedure. Section 5 presents the experimental results on three datasets: ColorChecker RECommended [12], NUS8-Dataset [13], and INTEL-TUT version2 [10]. In Section 6, we highlight the links between our approach and many existing methods and we show how our approach can be considered as a generic framework for expressing existing approaches. Section 7 concludes the paper.
Color constancy
Typically, two types of color constancy approaches are distinguished, namely static methods and supervised methods. The former comprises methods with static parameter settings that do not need any labeled image data for learning the model, while the latter comprises data-driven approaches that learn to estimate the illuminant in a supervised manner using labeled data.
Static methods
Static methods exploit the statistical or physical properties of a scene by making assumptions about the nature of colors. They can be classified into two categories: methods based on low-level statistics [23], [24], [25], [26] and methods based on the physics-based dichromatic reflection model [4], [15], [27], [28]. A number of approaches belonging to the first category were unified by Van de Weijer et al. [25] into a single framework expressed as follows:

( ∫ | ∂^n ρ_σ(x, y) / ∂x^n ∂y^n |^p dx dy )^{1/p} = k ρ_gt^{n,p,σ},

where n denotes the derivative order, p the Minkowski norm, and k the normalization constant for ρ_gt. Also, ρ_σ(x, y) = ρ(x, y) * g_σ(x, y) denotes the image convolved with a Gaussian filter with scale parameter σ. This framework allows for deriving different algorithms simply by setting appropriate values for n, p, and σ.
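As a hedged illustration of this framework, the sketch below instantiates a few of its special cases (Gray-World, White-Patch, first-order Gray-Edge) in NumPy/SciPy. The exact smoothing and derivative operators (Gaussian filter, Sobel gradient magnitude) are assumptions chosen for simplicity, not the implementation used in the cited works.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def estimate_illuminant(image, n=0, p=1.0, sigma=0.0):
    """Unified-framework estimator: Minkowski norm over the n-th derivative
    of a Gaussian-smoothed image.
    n=0, p=1, sigma=0 -> Gray-World; n=0, p=inf -> White-Patch; n=1 -> Gray-Edge."""
    img = image.astype(np.float64)
    if sigma > 0:
        img = np.stack([gaussian_filter(img[..., c], sigma) for c in range(3)], axis=-1)
    if n == 1:
        # gradient magnitude per channel as the "edge image"
        img = np.stack(
            [np.hypot(sobel(img[..., c], axis=0), sobel(img[..., c], axis=1)) for c in range(3)],
            axis=-1,
        )
    flat = np.abs(img.reshape(-1, 3))
    e = flat.max(axis=0) if np.isinf(p) else (flat ** p).mean(axis=0) ** (1.0 / p)
    return e / np.linalg.norm(e)   # unit-norm illuminant estimate
```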
The well-known Gray-World method [24], corresponding to (n = 0, p = 1, σ = 0), assumes that under a neutral illumination the average reflectance in a scene is achromatic, and the illumination is estimated as the shift of the image average color from gray. White-Patch [23] (n = 0, p = ∞, σ = 0) assumes that the maximum values of the RGB color channels are caused by a perfectly reflecting surface in the scene. Therefore, the illumination components correspond to these maximum values. Besides the Gray-World and White-Patch methods, which make use of the color distribution in the scene to build their estimates, the Gray-Edge method [25] utilizes image derivatives. Instead of the global average color, Gray-Edge methods (n = 1, p = p, σ = σ) assume that the average color of edges or the gradient of edges is gray. The illuminant's color is then estimated as the shift of the average edge color from gray. Physics-based dichromatic reflection models estimate the illumination by analyzing the scene and exploiting the physical interactions between the objects and the illumination. The main assumption of most methods in this category is that all pixels of a surface form a plane in RGB color space. As the scene contains multiple surfaces, this results in multiple planes. The intersection between these planes is used to compute the color of the light source [27]. Lee et al. [15] exploited the bright areas in the captured scene to obtain an estimate of the illuminant color. In this work, we establish links between our proposed approach, BoCF, and several static methods. We show that BoCF can be interpreted as a learning-based extension of several of these approaches.
Supervised methods
Supervised methods can be further divided into two main categories: characterization-based methods [29], [30] and training-based methods [5], [6], [31], [32]. The former involves light training processes in order to learn the characterization of the camera response in some way, while the latter involves methods that try to learn the illumination directly from the scene.
Gamut Mapping [29], [30] is one of the most famous characterization-based approaches. It assumes that for a given illumination condition, only a limited number of colors can be observed. Thus any unexpected variation in the observed colors is caused by the light source illuminant. The set of colors that can occur under a given illumination, called canonical gamut, is first learned in a supervised manner. In the evaluation, an input gamut which represents the set of colors used to acquire the scene is constructed. The illumination is then estimated by mapping this input gamut to the canonical gamut.
Another group of training-based methods combines different illumination estimation approaches and learns a model that uses the best performing method or a combination of methods to estimate the illuminant of each input based on the scene characteristics [31]. Bianco et al. used indoor/outdoor classification to select the optimal color constancy algorithm given an input image [32]. Lu et al. proposed an approach which exploits 3D scene information for estimating the color of a light source [33]. However, these methods tend to overfit and fail to generalize to all scene types.
The first attempt to use Convolutional Neural Networks (CNNs) for solving the illuminant estimation problem was established by Bianco et al. [5], where they adopted a CNN architecture operating on small local patches to overcome the data shortage. In the testing phase, a map of local estimates is pooled to obtain one global illuminant estimate using median or mean pooling. Hu et al. [6] introduced a pooling layer, namely confidence-weighted pooling. In their fully convolutional network, they incorporate learning the confidence of each patch of the image in an end-to-end learning process. Patches in an image can carry different confidence weights according to their estimated accuracy in predicting the illumination. Shi et al. [7] proposed a network with two interacting sub-networks to estimate the illumination. One sub-network, called the hypothesis network, is used to generate multiple plausible illuminant estimations depending on the patches in the scene. The second subnetwork, called the selection network, is trained to select the best estimate generated by the first sub-network. Inspired by the success of Generative Adversarial Networks (GANs) in image to image translation [34], Das et al. formulated the illumination estimation task as an image-to-image translation task [35] and used a GAN to solve it. However, these CNN-based methods suffer from certain weaknesses: computational complexity and disconnection with both the illumination assumption [13] and the prior static methods, e.g., Grey-World [24] and White-Patch [23]. This paper attempts to cure these drawbacks by proposing a novel CNN approach, BoCF, which discards the global spatial information in agreement with [13] and [25], and is competitive with the training-based methods while using only 5% of the parameters.
Bag-of-Features Pooling
Passalis and Tefas proposed a Bag-of-Features Pooling (BoFP) layer [16], [17], which is a neural extension of the Bag-of-Features model (BoF). The BoFP layer can be combined with convolutional layers to form a powerful architecture which can be trained end-to-end using the regular back-propagation algorithm [17], [36]. In this work, we use this pooling technique to learn a codebook of color features, hence the name Bag of Color Features (BoCF). This pooling discards all the global spatial information and outputs a fixed-length histogram representation. This allows us to reduce the large number of parameters usually needed when linking convolutional layers to fully connected layers. Furthermore, discarding global spatial information forces the network to learn to extract the illumination without global spatial inference, thus improving model robustness and adhering to the illumination assumption [13]. As an additional novel feature compared to prior works using Bag-of-Features Pooling [17], [36], we propose introducing an attention mechanism to enable the model to discard noise and focus only on relevant parts of the input representation. To the best of our knowledge, this is the first work which combines attention mechanisms with Bag-of-Features Pooling.
Attention mechanisms
Attention mechanisms were introduced in Natural Language Processing (NLP) [37] for sequence-to-sequence (seq2seq) models in order to tackle the problem of short-term memory faced by machine translators. They allow a machine translator to see the full information contained in the original input and then generate the proper translation for the current word. More specifically, they allow the model to focus on local or global features, as needed. Self-attention [38], also known as intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the same sequence. In other words, the attention mask is computed directly from the original sequence. This idea has been exported to many other problems in NLP and computer vision such as machine reading [39], text summarization [40], [41], and image description generation [42]. In [42], self-attention is applied to an image to enable the network to generate an attention mask and focus on the region of interest in the original image.
Attention in deep learning can be broadly interpreted as a mask of importance weights. In order to evaluate the importance of a single element, such as a pixel or a feature in general, for the final inference, one can form an attention vector by estimating how strongly the element is correlated with the other elements and use this attention vector as a mask when evaluating the final output [42]. Let x = [x_1, ..., x_n] ∈ R^n be a vector. The goal of a self-attention mechanism is to learn to generate a mask vector v ∈ R^n, depending only on x, which encodes the importance weights of the elements of x. Let f be a mapping function between x and v. The dependency can be expressed as follows:

v = f(x),    (4)

under the constraint

Σ_{i=1}^{n} v_i = 1.    (5)

After computing the mask vector v, the final output y of the attention layer is computed as follows:

y = v ⊙ x,

where ⊙ denotes the element-wise product. The concept of attention, i.e., focusing on particular regions to extract the illumination information in color constancy, can be rooted back to many statistical approaches. For example, White-Patch reduces this region to the pixel with the highest RGB values. Other methods, such as [15], focus on the bright areas in the captured scene, called specular highlights. Instead of making such a strong assumption on the relevant regions, in BoCF we allow the model to learn to extract these regions dynamically. To the best of our knowledge, this is the first work which uses attention mechanisms in the color constancy problem.
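A minimal NumPy sketch of this masking idea is given below; the vector shapes and the choice of a linear mapping f followed by softmax are illustrative assumptions.

```python
import numpy as np

def self_attention(x, W, b):
    # x: (n,) input vector; W: (n, n) weights and b: (n,) bias of the mapping f.
    logits = W @ x + b
    v = np.exp(logits - logits.max())
    v /= v.sum()                 # softmax mask, sums to one (the constraint above)
    return v * x                 # element-wise re-weighting, y = v ⊙ x
```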
PROPOSED APPROACH
In order to reduce the number of parameters needed to learn the illumination [6], [7], we propose a novel color constancy approach based on the Bag-of-Features Pooling [17], called herein the BoCF approach. The proposed approach along with the novel attention variants is illustrated in Figure 2. The proposed model has three main blocks, namely the feature extraction, the Bag of Features, and the illumination estimation blocks. In the first block, a nonlinear transformation of a raw image is obtained. In the second block, a histogram representation of this transform is compiled. This histogram is used in the third block to estimate the illumination.
Feature extraction
The feature extraction algorithm takes a raw image as input and outputs a nonlinear transformation representing the image features. A CNN is used in this block. CNNs are known for their ability to extract relevant features directly from raw images. Technically, any CNN architecture can be used in this block. However, we observed in our experiments that only two convolutional layers followed by down-sampling layers, e.g., max-pooling, yield satisfactory results. This is in accordance with the assumption of statistical methods that second order information is enough to estimate the illumination [13], [25]. After a raw image is fed to the feature extraction block, the output of the last convolutional layer is used to extract feature vectors that are subsequently fed to the next block. The number of extracted feature vectors depends on the size of the feature map and the used filter size, as described in [17].
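A possible Keras realization of this two-layer feature extraction block is sketched below, using the filter count (30), kernel size (4 × 4), and max-pooling window (2) reported later in the network-architecture section; the ReLU activation and "same" padding are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_feature_extractor(input_shape=(227, 227, 3)):
    # Two conv + max-pool stages; kept shallow on purpose.
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(30, 4, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D(pool_size=2)(x)
    x = layers.Conv2D(30, 4, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(pool_size=2)(x)
    return tf.keras.Model(inputs, x, name="feature_extraction")
```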
Bag of Features
The Bag of Features block is essentially the codebook (dictionary) learning component. The output features of the previous block are pooled using the Bag-of-Features Pooling and mapped to a final histogram representation. During training, the network optimizes the codebook using traditional back-propagation. The output of this block is a histogram of a fixed size, i.e., the size of the codebook, which is a hyper-parameter that needs to be carefully tuned to avoid over-fitting. This approach discards all global spatial information. As described in [17], the Bag-of-Features Pooling is composed of two sub-layers: an RBF layer that measures the similarity of the input features to the RBF centers, and an accumulation layer that builds the histogram of the quantized feature vectors. The normalized output of the k-th RBF neuron can be expressed as

[φ(x)]_k = exp(−||x − v_k|| / ρ_k) / Σ_j exp(−||x − v_j|| / ρ_j),

where x is a feature vector, v_k is the center of the k-th RBF neuron, exp is the exponential function, and ρ_k is a scaling factor. The output of the RBF neurons is accumulated in the next layer, compiling the final representation of each image:

s = (1/N) Σ_{i=1}^{N} φ(x_i),

where N is the number of feature vectors extracted from the last convolutional layer for the image.
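A NumPy sketch of this soft quantization and accumulation step is shown below. In the actual model the centers and scaling factors are trainable, whereas here they are plain arrays passed in by the caller for illustration.

```python
import numpy as np

def bocf_histogram(features, centers, scales):
    # features: (N, D) vectors from the last conv layer; centers: (K, D); scales: (K,).
    dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
    memberships = np.exp(-dists / scales[None, :])          # RBF responses per codeword
    memberships /= memberships.sum(axis=1, keepdims=True)   # normalize over the K codewords
    return memberships.mean(axis=0)                         # average into a K-bin histogram
```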
Illumination Estimation
The Bag of Features layer receives a transformation of the raw image and compiles its histogram representation. This histogram is then fed to a regressor that estimates the illumination. In this work, a multi-layer perceptron with one hidden layer is used for this purpose, although any other estimator with a differentiable loss function could be used. Let x ∈ R^n be the histogram compiled by the second block. The intermediate layer output h ∈ R^m is computed as

h = ϕ(W_1^T x + b_1),

where W_1 ∈ R^{n×m} is the weight matrix, b_1 ∈ R^m is the bias vector, and ϕ is the Rectified Linear Unit (ReLU) activation function [43]. The final estimate I ∈ R^3 is computed as

I = φ(W_2^T h + b_2),

where W_2 ∈ R^{m×3} is the weight matrix, b_2 ∈ R^3 is the bias vector, and φ is the softmax activation function defined by

φ(z)_i = exp(z_i) / Σ_j exp(z_j).
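For completeness, a direct NumPy transcription of this two-layer regressor is given below; the weight shapes follow the notation above, and the parameters are assumed to be provided by the caller.

```python
import numpy as np

def estimate_illumination(hist, W1, b1, W2, b2):
    # hist: (n,) histogram; W1: (n, m); b1: (m,); W2: (m, 3); b2: (3,).
    h = np.maximum(W1.T @ hist + b1, 0.0)   # ReLU hidden layer
    z = W2.T @ h + b2
    z = np.exp(z - z.max())
    return z / z.sum()                      # softmax -> illuminant estimate in R^3
```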
Attention mechanism for BoCF
We introduce a novel attention mechanism in the BoCF model to enable the algorithm to dynamically learn to focus on a specific region of interest in order to yield a confident output. We combine self-attention, described in Section 2.3, with the Bag-of-Features Pooling for the color constancy problem. We propose two variants of this mechanism which can be applied in our model. For the mapping function f in (Eq. 4), we use a fully connected layer with softmax activation.
In the first variant, we propose to apply attention on the nonlinear transformation of the image after the feature extraction block. This enables the model to learn to attend to the regions of interest in the feature maps and to reduce noise before pooling. By applying attention at this stage, however, the number of parameters rises sharply, as the attention layer needs as many outputs as there are features and the feature maps are high-dimensional.
In the second variant, we propose to apply the attention mechanism on the histogram representation of BoCF, i.e., after the global spatial information has been discarded. This enables the model to dynamically learn to attend to the relevant parts of the histogram which encode the illuminant information. In this variant, the attention mask size is equal to the size of the histogram. Thus, the number of additional parameters is relatively small. Following the notation of (4) and (5), x ∈ R^n is the histogram representation and the attention mask v ∈ R^n is obtained via a fully connected layer as follows:

v = φ(W x + b),

where W ∈ R^{n×n} is a weight matrix, b ∈ R^n is the bias vector, and φ is the softmax activation.
Using softmax as φ ensures that the masking constraint defined in (Eq. 5) is not violated. Finally, the output y of the attention mechanism is computed as

y = λ (v ⊙ x) + (1 − λ) x,

where ⊙ is the element-wise product operator and λ ∈ R is a weighting parameter between the masked histogram v ⊙ x and the original histogram x. λ is a learnable parameter in our model. Not using λ and outputting only the masked histogram is also an option. However, we determined experimentally that outputting the weighted sum of both the original and the masked versions is more robust and stable for gradient-based optimizers, since it is less susceptible to the random initialization of the attention weights. The parameter λ can be optimized by gradient descent in the back-propagation process along with the rest of the parameters. Its gradient with respect to the output of the attention block is

∂y/∂λ = v ⊙ x − x.
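The second variant can be sketched as follows; the mask weights W, b and the scalar λ are trainable in the actual model and are passed in explicitly here only for illustration.

```python
import numpy as np

def attended_histogram(hist, W, b, lam):
    # hist: (n,) BoCF histogram; W: (n, n); b: (n,); lam: scalar blending weight.
    logits = W @ hist + b
    v = np.exp(logits - logits.max())
    v /= v.sum()                                   # softmax mask over histogram bins
    return lam * (v * hist) + (1.0 - lam) * hist   # y = lambda (v ⊙ x) + (1 - lambda) x
```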
EXPERIMENTAL SETUP
In this section, we present the experimental setup used in this work. In Subsection 4.1, we introduce the datasets used to test our models. In Subsection 4.2, we report the network architectures of the three blocks used in BoCF. In Subsection 4.3, we detail the evaluation process followed in our experiments. Finally, the evaluation metrics used are briefly described in Subsection 4.4.
ColorChecker RECommended dataset
ColorChecker RECommended dataset [12] is a publicly available updated version of the Gehler-Shi dataset [11] with a proposed (recommended) ground truth to use for evaluation. This dataset contains 568 high-quality indoor and outdoor images acquired by two cameras: Canon 1D and Canon 5D. Similar to the works in [5], [6], [7], [8], for the ColorChecker RECommended dataset we used three-fold cross-validation to evaluate our algorithms.
NUS-8 Camera Dataset
NUS-8 is a publicly available dataset containing 1736 raw images from eight different camera models. Each camera has about 210 images. Following previous works [6], [13], we perform tests on each camera separately and report the mean of all the results for each evaluation metric. As a result, although the total number of images in the NUS-8 dataset is large, each experiment involves using only about 210 images for both training and testing.
INTEL-TUT2
INTEL-TUT2 is the second version of the publicly available INTEL-TUT dataset [10]. The main particularity of this dataset is that it contains a large number of images taken by several cameras from different scenes. We use this dataset for an extreme testing protocol, the third protocol described in [10]. The models are trained with images acquired by one camera and containing one type of scene and tested on the other cameras and the other scenes. This extreme test is useful to show the robustness of a given model and its ability to generalize across different cameras and scenes.
INTEL-TUT2 contains images acquired with three different cameras, namely Canon, Nikon, and, Mobile. For each camera, the images are divided into four sets: field (144 images per camera), lab printouts (300 images per camera), lab real scenes (4 images per camera), and field2. The last set field2 concerns only Canon and it has a total of 692 images. Figure 3 shows some samples from the field, lab printouts, and lab real scenes sets of the three cameras, while Figure 4 displays samples from field2 related to Canon camera.
We used only the Canon field2 set for training and validation (80% for training and 20% for validation). We constructed two test sets. The first one, called field in this work, contains all the field images taken by the other camera models, i.e., Nikon and Mobile. The second set, called non-field in this work, contains all the non-field images acquired by Nikon and Mobile. Comparing the performance on these two sets allows us to test both the scene and camera invariance of the model. As we are using different camera models in the same experiments, the variation of camera spectral sensitivity needs to be discounted. For this purpose, we use Color Conversion Matrix (CCM) based preprocessing [44] to learn a 3 × 3 conversion matrix for each camera pair.
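Assuming a learned 3 × 3 conversion matrix, applying it to an image reduces to a per-pixel matrix multiplication, as in the sketch below; how the matrix itself is fitted follows the cited CCM preprocessing and is not reproduced here.

```python
import numpy as np

def apply_ccm(image, ccm):
    # image: HxWx3 RGB array from the source camera; ccm: (3, 3) conversion matrix.
    h, w, _ = image.shape
    return (image.reshape(-1, 3) @ ccm.T).reshape(h, w, 3)
```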
Network architectures
The BoCF network is composed of three blocks: the feature extraction, the Bag of Features (where the Bag-of-Features Pooling is applied), and the illumination estimation blocks, as described in Section 3. The feature extraction block consists of convolutional layers followed by max-pooling operators. We experiment with different numbers of layers: two and three. Thirty convolution filters of size 4 × 4 are used in both layers. Max-pooling with a window size of 2 is applied in both layers. For the codebook size, i.e., the number of RBF neurons in the Bag of Features block, we experiment with three different values: 50, 150, and 200. The illumination estimation block consists of two fully connected layers; the first (hidden) layer has a size of 40 and takes the histogram representation as input, and the second (output) layer has a size of 3 to output the illumination.
Evaluation procedure
To evaluate the proposed approach, we used two sets of experiments. In the first set, we evaluate different variants of the model to study the effect of the hyper-parameters and validate the effectiveness of each component in our model by conducting ablation studies. For this purpose, we used the ColorChecker RECommended dataset. In the second set of experiments, we compared our approach with current state-of-the-art approaches on the three datasets.
For all testing scenarios, we augmented the datasets using the following process: as the size of the original raw images is high, we first randomly cropped 512 × 512 patches of each image. This ensured getting meaningful patches. The crops were then rotated by a random angle between -30° and +30°. Finally, we rescaled the RGB values of each patch and its corresponding ground truth by a random factor in the range [0.8, 1.2]. Before feeding the sample to the network, we down-sampled it to 227 × 227. In testing, the images are resized to 227 × 227 to fit the network model.
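The augmentation procedure can be sketched as below; the interpolation modes and random number generator are assumptions, and raw-specific handling (black level, saturated pixels) is omitted.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def augment_patch(image, ground_truth, rng=None):
    # image: HxWx3 raw-RGB array with H, W >= 512; ground_truth: length-3 illuminant.
    rng = rng or np.random.default_rng()
    h, w, _ = image.shape
    top = rng.integers(0, max(h - 512, 1))
    left = rng.integers(0, max(w - 512, 1))
    patch = image[top:top + 512, left:left + 512]                 # random 512x512 crop
    patch = rotate(patch, rng.uniform(-30, 30), reshape=False, mode="reflect")
    scale = rng.uniform(0.8, 1.2)                                 # rescale patch and label
    patch, gt = patch * scale, np.asarray(ground_truth) * scale
    patch = zoom(patch, (227 / 512, 227 / 512, 1), order=1)       # down-sample to 227x227
    return patch, gt
```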
Our network was implemented in Keras [45] with the Tensorflow backend [46]. We trained our network end-to-end by back-propagation. For optimization, Adam [47] was employed with a batch size of 15 and a learning rate of 3 × 10^-4. The model was trained on image patches of size 227 × 227 for 3000 epochs. The centers of the dictionary were initialized using the k-means algorithm, as described in [17]. The parameter λ, discussed in Section 3.4, was initialized as 0.5.
Evaluation metrics
We report the mean of the top 25%, the mean, the median, Tukey's trimean, and the mean of the worst 25% of the recovery angular error (RAE) [48] between the ground-truth illuminant and the estimated illuminant, defined as

err = cos^{-1} ( (ρ_gt · ρ_Est) / (||ρ_gt|| ||ρ_Est||) ),

where ρ_gt is the ground-truth illumination for a given image and ρ_Est is the estimated illumination.
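For reference, the recovery angular error can be computed as in the following short sketch.

```python
import numpy as np

def recovery_angular_error(gt, est):
    # Angle (in degrees) between the ground-truth and estimated illuminant vectors.
    gt, est = np.asarray(gt, dtype=float), np.asarray(est, dtype=float)
    cos = gt @ est / (np.linalg.norm(gt) * np.linalg.norm(est))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```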
EXPERIMENTAL RESULTS
In this section, we provide the experimental evaluation of the proposed method and its variants. In Subsection 5.1, different topologies for the three blocks of BoCF are evaluated on the ColorChecker RECommended dataset and the effect of each block in our model is examined by reporting the results of the ablation studies. In Subsection 5.2, we compare the performance of the proposed models with different state-of-the-art algorithms over the three datasets.
BoCF performance evaluation
We first evaluated the accuracy of the different variants of BoCF on the ColorChecker RECommended dataset. Table 1 presents the comparative results for BoCF using different topologies in the three blocks. We evaluate the model using different dictionary sizes in the second block (codewords), different numbers of convolution layers in the first block, and with/without attention. Table 1 shows that the dictionary size in the Bag-of-Features Pooling block significantly affects the overall performance of the model. Using a larger codebook results in a higher risk of overfitting to the training data, while using a smaller codebook restricts the model to only a few codebook centers, which can decrease the overall performance of the model. Thus, the choice of this hyperparameter is critical for our model. The findings in Table 1 confirm this effect and highlight the importance of this hyperparameter. By comparing the model performance using different dictionary sizes, we can see that a dictionary of size 150 yields the best compromise between the number of parameters and the overall performance.
Using three convolutional layers instead of two in the first block yields slightly better median errors and worse trimean errors. However, to keep the model as shallow as possible, we opt for the two convolution layers. Table 1 shows that models equipped with an attention mechanism perform better than models without attention almost consistently across all error metrics. This is expected, as attention mechanisms allow the model to focus on relevant parts only; as a result, the model becomes more robust to noise and to inadequate features. The performance boost obtained by both attention variants is most pronounced in terms of the median and trimean errors compared to the non-attention variant.
By comparing the performance achieved by the two attention variants, we note that the first attention variant yields better performance in terms of the worst 25% error rate, while the second variant yields better median and trimean error rates. It should also be remembered that the first variant applies attention over the feature map output of the first convolutional block. Thus, it dramatically increases the number of model parameters (over 20 times) compared to the second variant (which doubles the number of parameters) by applying the attention over the histogram. Figure 5 presents a visualization of the attention weights [49] for both attention variants. The heat maps demonstrate which regions of the image each model pays attention to in order to output a certain illumination. We note a large difference between the two attention variants. The first attention variant tends to focus on regions with dense edges and sharp shapes, while the second model focuses on uniform regions to estimate the illumination.
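The second attention variant can be sketched as a learned gate over the histogram bins; the exact form used in the paper is defined in Section 3, so the snippet below is only an assumed approximation.

from tensorflow.keras import layers

def attention_over_histogram(hist, n_codewords=150):
    # learned attention weights over the K histogram bins, then element-wise gating
    att = layers.Dense(n_codewords, activation="softmax", name="hist_attention")(hist)
    return layers.Multiply()([hist, att])

A Dense layer of this size adds roughly n_codewords² weights, i.e., on the order of 23k extra parameters for 150 codewords, which is consistent with the jump from about 20k to 43k parameters between the no-attention and attention2 rows of Table 3.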
Ablation studies
To examine the effect of each block in our proposed approach, we conduct ablation studies on the ColorChecker RECommended dataset. Table 2 reports the results of the basic BoCF approach, the results achieved by removing the feature extraction block, and the results obtained by removing the estimation block, i.e., replacing the fully connected layer in the estimation block with a simple regression. We note that removing any block significantly decreases the overall performance of our models.
Comparing the models with and without the feature extraction block, we note a large drop in performance, especially in terms of the worst 25% error rates, i.e., a 1.8° drop compared to a 0.6° drop when the estimation block is removed.
Comparisons against state-of-the-art
We compare our BoCF approach with the state-of-the-art methods on the ColorChecker RECommended, NUS-8, and INTEL-TUT2 datasets, which have been widely adopted as benchmark datasets in the literature. Tables 4, 5, and 6 provide quantitative results for the ColorChecker RECommended, NUS-8, and INTEL-TUT2 datasets, respectively. We provide results for the static methods Grey-World, White-Patch, Shades-of-Grey, and General Grey-World. The parameter values n, p, ρ are set as described in [25]. In addition, we compare against Pixel-based Gamut, Bright Pixels, Spatial Correlations, Bayesian Color Constancy [11], and six convolutional approaches: Deep Specialized Network for Illuminant Estimation (DS-Net) [7], Bianco CNN [5], Fast Fourier Color Constancy [50], Convolutional Color Constancy [51], Fully Convolutional Color Constancy With Confidence-Weighted Pooling (FC4) [6], and Color Constancy GANs (CC-GANs) [35]. The results for the ColorChecker RECommended and NUS-8 datasets were taken from the related papers [6], [35]. From the ColorChecker RECommended and NUS-8 results in Tables 4 and 5, we note that learning-based methods usually outperform statistical methods across all error metrics. This can be explained by the fact that statistical approaches rely on certain assumptions in their models. These assumptions can be violated in some testing samples, which results in high error rates, especially in terms of the worst 25% errors. Table 4 shows that the proposed method BoCF and its variants achieve competitive results on the ColorChecker RECommended dataset. The only models performing slightly better than BoCF are FC4 (SqueezeNet) and DS-Net. By comparing the number of parameters required by each model, given in Table 3, we see that BoCF achieves very competitive results while using less than 1% of the parameters of FC4 (SqueezeNet) and less than 0.1% of the parameters of
DS-Net.
Compared to Bianco's CNN, we note that our model performs better across all error metrics except for the worst 25% error metric. Bianco's CNN operates on patches instead of the full image directly, and this makes it more robust; at the same time, however, it increases its time complexity, as the network has to produce many local estimates before outputting the global one.
TABLE 3
Number of parameters of different CNN-based approaches

Method                                        # parameters
Bianco [5]                                    154k
FC4 (SqueezeNet) [6]                          1.9M
FC4 (AlexNet) [6]                             3.8M
DS-Net [7]                                    17.3M
BoCF (2 conv + 150 words + no attention)      20k
BoCF (2 conv + 150 words + attention1)        376k
BoCF (2 conv + 150 words + attention2)        43k

Results for the NUS-8 dataset are similar to their counterparts on ColorChecker RECommended, as illustrated in Table 5. Our models achieve comparable results with FC4 and overall better results compared to DS-Net across all error metrics. Bianco's CNN outperforms all the other CNN-based methods. As discussed earlier, this can likely be explained by the fact that Bianco operates on patches while BoCF and FC4 produce global estimates directly. Table 6 reports the comparative results achieved on the INTEL-TUT2 dataset. We note that all the error rates are high, as this is an extreme testing scenario. The models are trained and validated using only one type of scene (the field2 set) acquired by one camera model (Canon) and then evaluated over different scene types and different camera models not seen during training, as described in Section 4.3. The proposed BoCF model achieves better overall performance compared to Bianco's CNN and Color Constancy Convolutional AutoEncoder (C3AE) methods and competitive results compared to FC4.
By comparing the performance achieved by BoCF with and without attention, we note that both attention mechanisms proposed in this paper significantly boost the performance of our model on all datasets. It should also be mentioned that, despite requiring far fewer parameters, the second variant of our attention model, where the attention is applied over the histogram representation, performs slightly better than the first variant, where the attention is applied over the feature extraction block.
DISCUSSION
When comparing our approach to the competing methods, it must be pointed out that our approach can be linked to many previous static approaches. In Grey-World [24], one takes the average of the RGB channels of the image. In the proposed method, this corresponds to using the identity as a feature extractor and using equal weights in the estimation block. This way all the histogram bins contribute equally to the estimation. White-Patch [23] takes the max across the color channels, which corresponds to giving a high weight to the histogram bin with the highest intensity and giving zero weights to the rest. Grey-Edge and its variants [25] correspond to using the first- and second-order derivatives as a feature extractor. Thus, the BoCF approach can be interpreted as a learning-based generalization of these statistical approaches. Instead of using the image directly, we allow the model to learn a suitable non-linear transformation of the original image through the feature extraction block, and instead of imposing a prior assumption on the contribution of each feature to the estimation, we allow the model to learn the mapping dynamically from the training data.
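To make this correspondence concrete, the classical estimators mentioned above can be written in a few lines of numpy; the Minkowski-norm formulation below is the usual textbook one, with normalization of the resulting illuminant vector omitted for brevity.

import numpy as np

def grey_world(img):
    # equal weight to every pixel: per-channel mean
    return img.reshape(-1, 3).mean(axis=0)

def white_patch(img):
    # all weight on the brightest response per channel
    return img.reshape(-1, 3).max(axis=0)

def shades_of_grey(img, p=6):
    # Minkowski p-norm; p=1 recovers Grey-World, p -> inf approaches White-Patch
    return (np.abs(img.reshape(-1, 3)) ** p).mean(axis=0) ** (1.0 / p)

def grey_edge(img, p=1):
    # first-order derivatives act as the "feature extractor" before pooling
    dx = np.diff(img, axis=0)[:, :-1, :]
    dy = np.diff(img, axis=1)[:-1, :, :]
    grad = np.sqrt(dx ** 2 + dy ** 2).reshape(-1, 3)
    return (grad ** p).mean(axis=0) ** (1.0 / p)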
It is interesting to note that the attention variants in our approach can be tightly linked to the confidence maps in FC4 [6]. In FC4, confidence scores are assigned to each patch of the image, and a final estimate is generated by a weighted sum of the scores and their corresponding local estimates. This way the network learns to select which features contribute to the estimation and which parts should be discarded. Similarly, the attention mechanisms learn to dynamically pay attention to the parts encoding the illumination information and to discard the rest.
CONCLUSION
In this paper, we proposed a novel color constancy method called BoCF, which is composed of three blocks. In the first block, called the feature extraction block, we employ convolutional layers to extract relevant features from the input image. In the second block, we apply Bag-of-Features Pooling to learn a codebook and output a histogram. The latter is fed into the last block, the estimation block, where the final illumination is estimated. This end-to-end model is evaluated and compared with prior works over three datasets: ColorChecker RECommended, NUS-8, and INTEL-TUT2. BoCF was able to achieve competitive results compared to state-of-the-art methods while reducing the number of parameters by up to 95%. In this paper, we also discussed links between the proposed method and statistical methods, and we showed how the proposed approach can be interpreted as a supervised extension of these approaches and can act as a generic framework for expressing existing approaches as well as for developing new powerful methods.
In addition, we proposed combining the Bag-of-Features Pooling with two novel attention mechanisms. In the first variant, we apply attention over the nonlinear transform of the image after the feature extraction block. In the second extension, we apply attention over the histogram representation of the Bag-of-Features Pooling. These extensions are shown to improve the overall performance of our model.
In future work, extensions of the proposed approach could include exploring regularization techniques to ensure diversity in the learned dictionary and improve the generalization capability of the model.
Researches on the State of Medical Science in the Early Period of the History of the Hindoos
the relation between it and Greek medicine; but it is a complement to the essay previously noticed on the medical knowledge of Homer; and he follows up, as far back as he can, the thoughts of man on disease and on healing, in that primitive dawn of civilisation which preceded and prepared the way for the forms of social existence of which Greece, as painted by Homer, presented one type, and India another. The most ancient period of the history of Greek medicine he looks for in the oldest literature of India, the Vedic hymns. There is, of course, much less definite information in lyric poems than in highly picturesque and detailed narrative. "When a people only sings the gods, it is that men have none but the gods to look to for aid in all the things of this life;" and in the earlier hymns disease is only spoken of in the most general terms, and the only healers and preservers thought of are the divine powers invoked. There are special deities of health; the winds and the sacred Soma are addressed as its sources and guardians; but Dr. Daremberg observes that it is impossible to distinguish whether the health and healing prayed for mean general welfare or special immunity from bodily disease. The curing of wounds makes scarcely any appearance in the Rig-Veda; Dr. Daremberg has detected only one surgical allusion, and that a purely mythological one. The "physician" is named in the Vedas, but only in the later ones. Dr. Daremberg traces a change of feeling from the earlier hymns, where simple prayer is the only remedy thought of, to the more definite formulas and charms, approximating to magical spells, which appear in the more recent ones. Of definite diseases, he finds traces of leprosy and consumption; the external parts of the body are named; the physiological notions on life and reproduction are expressed very generally, and appear to be those common to all the early races with which we are acquainted.
The 'Rig-Veda' represents the earliest ideas, but its different portions represent the progress of the human mind. "At the first glimmers of civilisation, nature astonishes, charms, or terrifies, but man has not even the idea of mastering it, and he deifies all its manifestations; a little later he begins to perceive that he commands forces which can often counterbalance, with advantage, the forces of the external world; but almost at once, and almost at the same time, man lets himself in turn be mastered by chiefs, above all by the ministers of the gods; he has not enough knowledge to observe with assurance and to direct his instincts toward the natural employment of his power; he then finds more grounds for terror than for admiration and confidence; spontaneous, naive theology becomes a calculated, regulated theology, into which superstition penetrates on every side through the influence of the priestly castes. The action of these castes, at first salutary, arises directly and spontaneously from primitive religious feeling; but, little by little, they acquire a tyrannical supremacy by fostering the pusillanimity of the mind and stifling the natural efforts of thought. This progress of the human mind may be followed step by step in the Vedas; and even from one part of the 'Rig-Veda' to another, very perceptible shades of difference may be observed, most curious to study. In the hymns held to be the most ancient, the Aryas appear to have had, in what concerned their diseases, no intermediary between themselves and the helpful gods; whereas in the hymns that pass for the most recent we find, along with express mention of physicians, a more strongly organised cult, a thousand details of public and private life, essays in cosmogony and philosophical doctrine betraying a second stage of civilisation, more elaborate and sometimes less pure literary forms, and finally passions more ardent and often worse." Dr. Daremberg finds in a still later collection, the 'Atharva-Veda,' the representative, in chronology and civilisation, of the times of the Odyssey, the epoch of magic and theurgic rites; but he observes that, while among the Greeks magical ideas vainly tried to supplant natural medicine, they conquered in India and perpetuated themselves there for ages. At length, in the third period of the history of Indian medicine, represented in the 'Ayur-Veda' of Susruta, while medicine is viewed as a matter of divine revelation, science regains some portion of its rights over the purely theurgic idea.
Dr. Daremberg attributes this to foreign influences, for nothing short of such influences could have forced Brahmins to admit the scientific spirit into even a supplementary Veda, after having so
long maintained a monopoly of exorcisms and miraculous remedies. The work of Susruta, the most interesting document on Indian medicine, is reserved by Dr. Daremberg for future examination.
UDP-N-acetyl-α-D-galactosamine:polypeptide N-Acetylgalactosaminyltransferase IDENTIFICATION AND SEPARATION OF TWO DISTINCT TRANSFERASE ACTIVITIES
Abstract Using a defined acceptor substrate peptide as an affinity chromatography ligand, we have developed a purification scheme for a unique human polypeptide, UDP-GalNAc:polypeptide N-acetylgalactosaminyltransferase (GalNAc-transferase) (White, T., Bennett, E. P., Takio, K., Sorensen, T., Bonding, N., and Clausen, H. (1995) J. Biol. Chem. 270, 24156-24165). Here we report detailed studies of the acceptor substrate specificity of GalNAc-transferase purified by this scheme as well as of the GalNAc-transferase activity which, upon repeated affinity chromatography, evaded purification by this affinity ligand. Using a panel of acceptor peptides, a qualitative difference in specificity between these separated transferase preparations was identified. Analysis of GalNAc-transferase activities in four rat organs and two human organs also revealed qualitative differences in specificity. The results support the existence of multiple GalNAc-transferase activities and suggest that these are differentially expressed in different organs. As the number of GalNAc-transferases existing is unknown, as is the specificity of the GalNAc-transferases cloned and expressed until now (T1 and T2), it is as yet impossible to relate the results obtained to specific enzyme proteins. The identification of acceptor peptides that can be used to discriminate GalNAc-transferase activities is an important step toward understanding the molecular basis of GalNAc O-linked glycosylation in cells and organs and in pathological conditions.
Glycosylation of proteins in eukaryotes is fundamental for the integrity of the individual cell and the organism as a whole (Varki, 1993). A number of different types of protein glycosylations have been identified (for a recent review see Lis and Sharon, 1993). Biosynthesis of the initial glycosylation of the protein backbone has been established in most cases and the involved glycosyltransferases partly characterized. In several cases characterization of glycosylation sites has identified peptide motifs that suggest the nature of the acceptor substrate specificities of transferases initiating protein glycosylation.
Thus N-linked asparagine glycosylation is restricted to the sequence -Asn-Xaa-Ser/Thr- (where Xaa may be any amino acid except proline). Proteoglycan-type glycosylation of serine is restricted to -Ser-Gly-Xaa-Gly- (Bourdon et al., 1987). The GlcNAc-type glycosylation of serine or threonine appears to be adjacent to an acidic amino acid and within two residues of a proline (Haltiwanger et al., 1992). The fucose-type glycosylation of serine/threonine seems to be restricted to the peptide sequence -Gly-Gly-Thr/Ser-Cys-, although the enzyme has yet to be characterized (Harris and Spellman, 1993).
In contrast, a defined peptide motif for GalNAc O-glycosylation (mucin type) and the equivalent yeast Man-type glycosylation of serine/threonine has not emerged. A number of studies have attempted to identify a consensus sequence for mammalian GalNAc O-glycosylation by studying sequences around identified glycosylation sites (Gooley et al., 1991; O'Connell et al., 1991; Wilson et al., 1991; Elhammer et al., 1993) as well as by testing the peptide substrate specificity of the GalNAc-transferase activity in crude and pure form (O'Connell et al., 1992; Wang et al., 1992, 1993; Elhammer and Kornfeld, 1986; Hagen et al., 1993; O'Connell and Tabak, 1993; Gooley and Williams, 1994; Nishimori et al., 1994a, 1994b). It is clear from these studies that the GalNAc-transferase must have broad acceptor substrate specificity, but it is likely that our understanding of this broad motif is shadowed by the involvement of several GalNAc-transferases. As described in the accompanying paper (White et al., 1995) a novel GalNAc-transferase has been isolated and cDNA cloned, which, together with the previously cloned bovine GalNAc-transferase (Homa et al., 1993), clearly establishes the existence of at least two distinct enzymes. By analogy, Strahl-Bolsinger et al. (1993) provided evidence that more than one polypeptide O-mannosyltransferase exists in yeast.
The total number of existing GalNAc-transferases is unknown, but it is very likely that our knowledge of this family of transferases will expand rapidly. Assigning detailed acceptor substrate specificity to the cloned and expressed GalNAc-T1 and -T2 awaits comparative studies of recombinant transferases, and data obtained so far using purified enzyme preparations are likely to be biased by copurified mixtures of enzymes (Homa et al., 1993; White et al., 1995; Wang et al., 1993). To begin to understand the potential differential specificity of multiple GalNAc-transferases, we have begun searching for acceptor substrates capable of discriminating them.

* This work was supported by the Danish Medical Research Council, the Danish Natural Science Research Council, the Lundbeck Foundation, Ingeborg Roikjer's Foundation, and the Danish Cancer Society. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
Here we present evidence that affinity chromatography using a defined synthetic acceptor substrate peptide resulted in separation of two different GalNAc-transferase activities and that these appear to be differentially expressed in organs. The results thus provide the first evidence for the involvement of at least two GalNAc-transferase specificities in GalNAc O-linked glycosylation initiation. The study identified acceptor peptides capable of discriminating different GalNAc-transferase activities, which should prove valuable for detailed studies of the specificity of identified and cloned enzymes in this family.
Materials
Human placentas were collected (6 -24 h postdelivery) after informed consent at the Herlev Hospital, Copenhagen; human liver tissue was obtained at routine autopsy. Porcine and ovine submaxillary glands were bought from Pel-Freez. Synthetic peptides were custom synthesized by Carlbiotech (Copenhagen, Denmark) or Neosystems (Strassburg, France) with amino acid and mass spectrometry analysis for sequence confirmation. The peptide sequences studied are listed in Table II.
Polypeptide GalNAc-transferase Assay
The standard enzyme reaction mixture consisted of 25 mM Tris-HCl (pH 7.4), 0.25% Triton X-100, 5 mM MnCl2, 5 mM CDP-choline, 5 mM 2-mercaptoethanol, 0.05 mM UDP-[14C]GalNAc (4,000 cpm/nmol), 250 μM peptide, and enzyme in a final volume of 100 μl. Unless otherwise stated, assays were incubated for 10 min at 37°C followed by Dowex 1 ion exchange (formic acid form) chromatography and scintillation counting. Combinations of substrate and enzyme source were evaluated at least once by C-18 chromatography (C2C18 3.2 Smart System, Pharmacia Biotech Inc.) to ensure stability of the peptide and that incorporated [14C]GalNAc was associated with the peptide. Furthermore, to exclude fully the possibility that crude enzyme preparations from organs degraded or by other means blocked the acceptor substrate peptides, the human liver enzyme assay was studied in two ways: 1) time course preincubation of peptides with human liver enzyme for 0-24 h at 37°C without sugar nucleotide, followed by evaluation of acceptor substrate accessibility using a human placenta enzyme preparation with sugar nucleotide; 2) mass spectrometry of the time course preincubation study.
When measuring Km and Vmax for different peptide substrates, the UDP-[14C]GalNAc concentration was increased to 0.2 mM (4,000 cpm/nmol), and the enzyme concentration was 0.5 milliunits/ml. Preparative glycosylation of peptides was performed with 0.5 μmol of peptide, 5 milliunits of enzyme, and 5-100 μmol of UDP-[14C]GalNAc in a final volume of 1 ml. The glycopeptide was purified by C-18 reverse phase chromatography, and the glycopeptide-containing fractions were detected by scintillation counting.
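The quantification implied by this assay reduces to simple arithmetic: counts are converted into nmol of GalNAc transferred through the specific radioactivity of the sugar nucleotide, and then into enzyme units, assuming the conventional definition of 1 unit = 1 μmol transferred per min (cf. the Table I footnote). The short Python sketch below illustrates that conversion; the example numbers are ours.

def galnac_transferred_nmol(cpm, specific_activity_cpm_per_nmol=4000.0, background_cpm=0.0):
    """Convert scintillation counts into nmol of GalNAc incorporated."""
    return max(cpm - background_cpm, 0.0) / specific_activity_cpm_per_nmol

def enzyme_milliunits(cpm, incubation_min=10.0, **kwargs):
    """If 1 unit = 1 umol GalNAc transferred per min, then 1 milliunit = 1 nmol/min."""
    return galnac_transferred_nmol(cpm, **kwargs) / incubation_min

# e.g., 8,000 cpm above background after the standard 10-min incubation:
# 8000 / 4000 = 2 nmol transferred, i.e., 0.2 milliunits of enzyme in the assay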
Structure Determination
Matrix-assisted Laser Desorption/Ionization Mass Spectrometry-Samples were dissolved in 0.1% trifluoroacetic acid to a concentration of approximately 0.05 μg/μl. One μl of sample solution was applied to a stainless steel probe tip precoated with 1 μl of matrix solution (α-cyano-4-hydroxycinnamic acid dissolved in acetone, 15 μg/μl) and washed thoroughly before introduction into the mass spectrometer (Vorm et al., 1994).
All mass spectra were obtained on a Bruker reflex Time of Flight mass spectrometer (Bruker-Franzen Analytik, Bremen, Germany). Data were acquired by a LeCroy 9450A 400 megasamples/s digital storage oscilloscope (LeCroy Corporation, Chestnut Ridge, NY) from which single shot spectra were transferred to a MacIntosh Quadra 950 computer (Apple Computer Inc., Cupertino, CA) via a National Instruments NI DAQ GPIB controller board (National Instruments, Austin, TX).
Control of data acquisition parameters, the transfer and subsequent averaging of spectra, as well as further data processing were carried out using the computer program LaserOne, which was written in ThinkC (Symantec Corporation, Cupertino, CA) by M. Mann and P. Mortensen, EMBL, Heidelberg, Germany.
All mass spectra were obtained in the linear mode and calibrated using a singly charged matrix ion, which provided a mass accuracy of approximately 0.1%.
Amino Acid Sequencing-Peptides and glycopeptides (GalNAc-glycosylated) were sequenced automatically (Applied Biosystems 470), and the phenylthiohydantoin derivatives were analyzed by on-line high performance liquid chromatography. GalNAc-glycosylated sites were identified by the loss of the Ser/Thr-phenylthiohydantoin signal and the appearance of pseudo-peaks.
Protein Determination
Protein concentrations were determined by the method of Bradford (Bio-Rad protein assay) using bovine serum albumin as standard.
Purification of GalNAc-transferase from Different Tissues
The human placenta and the ovine and porcine submaxillary GalNAc-transferases were purified as described in the accompanying paper (White et al., 1995). Briefly, Triton X-100 extracts of organs were applied to Cibacron blue 3GA-agarose and the transferase eluted with 1.5 M KCl (step 1, see Table I). The eluted and dialyzed preparation was passed through a DEAE-Sephacel column (step 2). The pass-through was applied to an S-Sepharose column after pH adjustment to 6.5, and the transferase activity was eluted by a NaCl gradient (step 3). The pooled enzyme fractions were then applied to a Muc2 affinity column (Muc2 acceptor substrate peptide coupled to cyanogen bromide-activated Sepharose) in the presence of UDP and Mn2+, and bound enzyme was eluted with EDTA in the absence of UDP (step 4). The eluted enzyme was dialyzed and concentrated (dialysis concentrator, Spektor) and is hereafter referred to as the purified transferase preparation. Triton X-100 (0.1%) was included throughout the entire purification procedure.
In some experiments Triton X-100 was exchanged at the S-Sepharose step (step 3) by slow overnight washing with n-octyl glucoside and maltoside, and further purification was performed in n-octyl glucoside-containing buffers that were otherwise as described.
Analysis of GalNAc-transferase Specificity in Rat and Human Organs
Male rat or human organs were homogenized in water. After centrifugation at 10,000 × g the pellet was resuspended in water and centrifuged. The final pellet was resuspended in extraction buffer containing 1.5% Triton X-100, 2 mM EDTA, 100 mM NaCl, and 25 mM Tris-HCl (pH 6.5) and extracted for 2 h at 4°C. Because of the limited stability of the organ homogenate, detailed substrate analysis was performed on the Cibacron-purified preparations. Cibacron-purified transferase refers to the pooled 1.5 M KCl eluate dialyzed and concentrated by a dialysis concentrator. No significant differences in the substrate specificity between the crude homogenate and the Cibacron-purified transferase preparations could be demonstrated; however, the crude transferase homogenate showed quite variable activities and was difficult to characterize. Table I summarizes the purification of ovine and porcine submaxillary GalNAc-transferase activity from 500 g of tissue using Muc2 affinity chromatography. Purification of the human placenta GalNAc-transferase was similar, as reported in the accompanying paper (White et al., 1995).
Purification of Porcine and Ovine GalNAc-transferase
The purification scheme used gave a quantitatively different result for the submaxillary gland activity compared with human placenta. The initial Cibacron chromatography of ovine and porcine gland extracts yielded the same results as for the human placenta transferase. However, the yields of the ovine and porcine transferases were considerably lower at the peptide affinity chromatography step compared with human placenta. Since the human transferase purified by the same procedure using Triton X-100 as detergent in the affinity chromatography was found to be the soluble fragment without the hydrophobic transmembrane segment, this difference in yield could be related to a relatively lower ratio of soluble versus membrane-bound transferase. Human placenta tissue was obtained 6 -24 h after delivery (stored at room temperature) and after freezing was subsequently thawed at 4°C for 1-3 days before extraction. In contrast the animal glands were quick frozen by the supplier (Pel-Freez) and thawed at 4°C overnight.
In separate experiments (not shown) the unbound fraction of the Muc2 affinity chromatography of porcine gland enzyme was reapplied to the Muc2 affinity column after detergent exchange from Triton X-100 to n-octyl glucoside and maltoside. Detergent exchange of the nonretained material from the Muc2 peptide column using n-octyl glycoside and maltoside followed by repeated Muc2 peptide affinity chromatography run in these detergents resulted in a considerably higher purification yield, although even repeated application on the column failed to absorb more than 50% of the transferase activity measured with the Muc2 peptide substrate. The human placenta preparation behaved similarly except for the initially higher yield of the first chromatography run in Triton X-100. Ion exchange Mono S chromatography performed without detergent on the porcine and ovine preparations obtained after detergent exchange and Muc2 affinity purification resulted in total loss of activity. Gel filtration of the same preparations gave a spread of enzyme activity from the void volume to a molecular weight of approximately 100,000 (not shown), all suggestive that the transferases were bound tightly to the detergent and therefore likely to include a strongly hydrophobic transmembrane segment.
Separation of two GalNAc-transferase Activities by Affinity Chromatography on Muc2 Peptide
As shown in Fig. 1, the apparent Km for the Muc2 peptide of the Muc2 affinity chromatography-purified transferase was significantly lower than that of the crude transferase before chromatography and of the unbound transferase activities. Combined with the finding that the Muc2 affinity chromatography appeared to bind only a fraction of the GalNAc-transferase activity even after repeated chromatography, this led us to study the substrate specificity of the GalNAc-transferase activities during this purification step. Table II shows the peptides, their sequences, and the apparent Km with human placenta transferase as a dialyzed S-Sepharose eluate (step 3) before the Muc2 affinity step.
Analysis of the substrate specificity of enzyme preparations before the Muc2 affinity purification, the unbound flow-through fraction, and the bound and eluted enzyme preparations from the column using various synthetic peptide substrates is presented in Table III and Fig. 2. Human placenta and ovine and porcine submaxillary transferases in the Muc2 affinity-purified form all contained both threonine and serine transferase activity as measured by the human chorionic gonadotropin-β peptide, which has only serine acceptor sites, although with a very low activity. Strikingly, all of the Muc2 affinity-purified transferase preparations (step 4) failed to glycosylate the HIV-V3 peptide sequence, whereas the enzyme preparations before affinity chromatography as well as the nonretarded materials from the affinity chromatography column readily utilized this substrate. The product of the HIV-V3 peptide glycosylated by pre-Muc2 affinity-purified enzyme preparations (step 1 or 3) was confirmed as containing a single GalNAc residue attached to the single Thr in an undegraded peptide by amino acid sequencing (Fig. 3) and mass spectrometry (not shown). Although the HIV-V3 peptide is an acceptor of in vitro enzymatic glycosylation, it is not known if this site indeed serves as an in vivo O-glycosylation site. HIV gp120 is, however, O-glycosylated, and GalNAc-Ser/Thr epitopes have been identified (Hansen et al., 1992; Merkle et al., 1991).
To exclude that this difference could be ascribed to soluble versus membrane forms of the enzymes tested, the apparently membrane-bound form of the GalNAc-transferase was purified and analyzed after detergent exchange. As shown in Table III, the difference in substrate specificity was consistent also for the porcine transferase purified after detergent exchange. Several peptides with sequences overlapping the HIV-V3 peptide were analyzed to exclude a specific problem with the peptide design of HIV-V3, and all peptides showed the same reaction pattern with the transferase preparations tested. In Table III only the peptide HXB2, with the same internal sequence as HIV-V3 but with extended sequence at both the NH2 and COOH termini, is shown.
a One unit of enzyme is defined as the amount of enzyme that will transfer 1 μmol of GalNAc from UDP-GalNAc in 1 min using the standard reaction mixture as described under "Experimental Procedures" with 25 μg of Muc2 peptide as acceptor substrate.
FIG. 1. Km determination of ovine GalNAc-transferase at different stages of purification. Ovine GalNAc-transferase was purified on the Muc2 peptide affinity column (step 5). Km values were measured as described under "Experimental Procedures" and calculated from a Lineweaver-Burk double reciprocal plot to 231, 254, and 50 μM for the precolumn preparation (S-Sepharose eluate, step 4), the nonretarded material, and the eluate, respectively.
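The Km estimation from such a double reciprocal plot amounts to a straight-line fit of 1/v against 1/[S]; a minimal numpy sketch is given below (the function name and the unweighted least-squares fit are ours; weighting schemes sometimes applied to Lineweaver-Burk data are omitted).

import numpy as np

def km_vmax_from_lineweaver_burk(substrate_conc, velocity):
    """Fit 1/v = (Km/Vmax)*(1/[S]) + 1/Vmax by least squares; returns (Km, Vmax)
    in the units of substrate_conc and velocity, respectively."""
    inv_s = 1.0 / np.asarray(substrate_conc, dtype=float)
    inv_v = 1.0 / np.asarray(velocity, dtype=float)
    slope, intercept = np.polyfit(inv_s, inv_v, 1)
    vmax = 1.0 / intercept
    km = slope * vmax
    return km, vmax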
To characterize the different transferase activities further, competitive glycosylation experiments were performed. Since the GalNAc-transferase preparation before the Muc2 affinity chromatography was capable of glycosylating the HIV-V3 peptide as well as the Muc2 peptide, it was pertinent to analyze if one enzyme utilized both of these substrates. As shown in Fig. 2A the Muc2 peptide was a competitive inhibitor of HIV-V3 peptide glycosylation suggestive of one enzyme utilizing both substrates. Control experiments using Muc1 and Muc2 peptide with Muc2 affinity-purified transferase also showed competitive inhibition of Muc2 glycosylation (Fig. 2B). Surprisingly, the HIV-V3 peptide, which was not glycosylated by the Muc2 affinity-purified transferase, was found to be an inhibitor of Muc2 glycosylation using this enzyme preparation. Adding increasing amounts of HIV-V3 peptide to Muc2 affinity-purified transferase showed competitive inhibition of Muc2 glycosylation. These results suggest that Muc2 peptide affinity chromatography separates two distinct transferase activities with overlapping specificity concerning the Muc2 acceptor substrate but which are distinguishable with respect to the HIV-V3 peptide.
Organ Differences in GalNAc-transferase Activities
The findings that different GalNAc-transferase activities could be separated and that acceptor peptides that could differentiate these were identified made it possible to analyze potentially differential organ expression of these activities.
As described under "Experimental Procedures," the GalNAc-transferase preparations used were partially purified either to step 1 or to step 3 (inclusive). As such transferase preparations may include interfering proteolytic activity, the following steps were taken to exclude this. 1) Routine assays included identification of the product by Dowex 1 chromatography, but in all combinations of peptide and enzyme preparations C-18 chromatography of the reaction mixture revealed the same UV profile. Potentially incorporated [14C]GalNAc was always found associated with the peptide peak (only when peptides containing multiple acceptor sites were used were products clearly separable from the unglycosylated peptide peak under the conditions used). 2) Mass spectrometric analysis of substrate peptides after prolonged incubation with one of the organ enzyme preparations (human liver, purification step 1) showed no evidence of degradation (Fig. 4). 3) Sequential incubation of acceptor substrate peptides first with an organ enzyme preparation not capable or only poorly capable of incorporating GalNAc (human liver, purification step 1) and second with an organ enzyme preparation capable of incorporating GalNAc (human placenta, purification step 3) showed little or no loss/decreased accessibility of acceptor substrates (Fig. 5).

As shown in Table IV, the four rat organs tested showed both quantitative and qualitative differences in GalNAc-transferase specificity of Cibacron-purified extracts. The HIV-V3 peptide, found to distinguish the two transferase preparations separated by the Muc2 affinity chromatography, was glycosylated by three of the four organs, the exception being kidney extracts. The two peptides Muc1a and Muc1b showed different acceptor substrate capacity in the different rat organs, with kidney preferring the Muc1b peptide over the Muc1a peptide and the opposite being the case for the other organs assayed. The relative ratio of activity toward Muc1a and Muc1b appeared to correlate with the missing activity toward the HIV-V3 peptide. Thus the enzyme activity that preferred the Muc1b peptide may correspond to the enzyme incapable of glycosylating the HIV-V3 peptide. This is in agreement with the specificity of the Muc2 affinity-purified transferase. (Peptide sequences were derived from Birken et al., 1981; Myers et al., 1991; Overbaugh et al., 1992; Gendler et al., 1990; and Gum et al., 1989. ND, not determined. The HIV-V3 peptide was N-acetylated, and in the studies in Figs. 4 and 5 an analog without the most C-terminal R residue was used.)

Human Organs-Human placenta and liver were chosen as human counterparts for organ substrate analysis because human liver was found to express significantly more GalNAc-T2 than GalNAc-T1 by Northern analysis (Homa et al., 1993; White et al., 1995) (Table IV). The two organs clearly differed in GalNAc-transferase activity, as liver was unable to glycosylate the HIV-V3 peptide and preferred the Muc1b peptide over the Muc1a peptide, as seen for rat kidney, whereas the placenta preparation was able to glycosylate the HIV-V3 peptide and preferred Muc1a to Muc1b.
Species Differences in GalNAc-transferase Activity
Comparison of porcine and ovine submaxillary glands shows no significant variation, since they both contain HIV-V3 activity, but the relative activities with the HIV-V3 and Muc1b substrates were significantly lower.
DISCUSSION
In the accompanying paper (White et al., 1995) we describe the purification of a human GalNAc-transferase using a defined acceptor substrate peptide as a major affinity ligand.

FIG. 2. Panel A, HIV-V3 peptide versus Muc2 peptide using S-Sepharose-purified enzyme (step 4). A competitive transferase assay was performed with a constant amount of HIV-V3 peptide (1 mg/ml) and an increasing amount of Muc2 peptide. Incorporation of GalNAc into the HIV-V3 peptide decreased with an increasing amount of Muc2, indicating that the peptides were substrates for the same enzyme. Panel B, Muc1 peptide versus Muc2 peptide using Muc2 affinity-purified enzyme (step 5). A competitive transferase assay was performed using a constant amount of Muc1 (150 μg/ml) and an increasing amount of Muc2. Incorporation of GalNAc into the Muc1 peptide decreased with an increasing amount of Muc2, indicating that the peptides were substrates for the same enzyme. Panel C, Muc2 peptide versus HIV-V3 peptide using Muc2 affinity-purified enzyme (step 5). A competitive transferase assay was performed using a constant amount of Muc2 (150 μg/ml) and an increasing amount of HIV-V3. Incorporation of GalNAc into Muc2 decreased with increasing amounts of HIV-V3 without incorporation of GalNAc into HIV-V3. For comparison, Muc2 peptide versus HIV-V3 peptide using S-Sepharose-purified enzyme (step 4) showed no significant effect on the incorporation into Muc2.
FIG. 3. Amino acid sequencing of the glycosylated HIV-V3 peptide.
The HIV-V3 peptide was terminally glycosylated as described under "Experimental Procedures." The GalNAc glycosylated threonine is "seen" as a markedly reduced threonine peak in the cycle corresponding to this amino acid compared with the amount of amino acid in the adjacent amino acid cycles. The upper, middle, and lower panels show the cycles corresponding to valine, threonine, and isoleucine, respectively. Note the pseudopeaks with a retention time of 9.67 close to the Gln-phenylthiohydantoin derivative.
During the purification work we found that only a small fraction of enzyme activity was bound to the column even upon repeated chromatography, suggesting that possibly different transferase activities were present. Further support for this hypothesis was found in the observed lower Km value toward the peptide used for the affinity chromatography (Muc2) in the Muc2 affinity-purified preparation compared with the enzyme preparation immediately before this step and the enzyme preparation that passed through the column. Clearer evidence for the existence of two GalNAc-transferase activities was found by analyzing the acceptor substrate specificity of these enzyme preparations at different stages of the purification. This resulted in the identification of a qualitative difference in glycosylation of an HIV gp120 peptide. This peptide was an excellent substrate during the early steps of purification of the human placenta enzyme as well as for the enzyme preparation that passed through the Muc2 affinity column (step 4), whereas the Muc2 affinity-purified enzyme lacked such activity. This phenomenon was found to be species-independent, as both ovine and porcine submaxillary gland Muc2 affinity-purified transferases showed the same pattern of activity.
In the course of this work two independent groups have reported the isolation and cloning of a bovine GalNAc-transferase (GalNAc-T1) (Homa et al., 1993; Hagen et al., 1993) that is different from the human GalNAc-transferase reported in the accompanying paper (GalNAc-T2) (White et al., 1995), thus establishing that at least two members of this transferase family exist. The fine specificity of these two GalNAc-transferases has yet to be established in a comparative study of recombinant expressed enzymes. The present results show independently that two distinct transferase activities may be recognized, and substrates capable of distinguishing these have now been identified. Preliminary data suggest that neither human GalNAc-T1 nor GalNAc-T2 utilizes the HIV-V3 peptide (soluble constructs expressed in a baculovirus system).2 The presented data clearly establish that at least two distinct GalNAc-transferase activities can be identified and separated. The specificity of these activities appears to be overlapping to a large extent, as evidenced by competitive substrate analysis (Fig. 2). The competitive inhibitory effect of the HIV-V3 peptide on the Muc2 affinity-purified GalNAc-transferase indicates that the purified transferase recognizes and binds the HIV-V3 peptide but cannot transfer GalNAc to it. Inhibition of glycosylation by nonglycosylating peptides has been noted previously by O'Connell et al. (1992). In addition, an observed de novo appearance of GalNAc-transferase activity during the purification of UDP-Gal:GalNAc-Ser/Thr β1-3-galactosyltransferase may be relevant to this (Brockhausen et al., 1992).

2 E. P. Bennett and H. Clausen, unpublished observation.
FIG. 4. Mass spectrometry analysis of acceptor-substrate peptides during incubation with crude GalNAc-transferase preparations.
Human liver enzyme (purification step 1) and the reaction mixture including acceptor substrate peptides were incubated at 37°C; aliquots were taken at 0, 60, and 120 min, and 6 and 24 h. These aliquots were passed through a Dowex 1 column, and the pass-through was used for measuring the incorporation of [14C]GalNAc as well as for mass spectrometry. Spectra of the HIV-V3 and Muc1a peptides are shown for 0 and 120 min. There is no significant fragmentation of the two peptides during this time interval. Both peptides contain a terminal cysteine amino acid and therefore easily form dimers, which are seen in the spectra. Reduction in the amount of dimer is due to the presence of 2-mercaptoethanol in the reaction mixture. The human liver enzyme was eluted from the Cibacron column with 1.5 M KCl (purification step 1) and used directly in the assays. This may account for the strong cationization of the peptide with two K+ ions and the peak corresponding to (M-H+2K+) being dominant. Surprisingly, there is no cationization of the dimer, and at present we have no explanation for this finding.
FIG. 5. Sequential enzyme assays to monitor the stability of acceptor substrate peptides. Peptides Muc1a and HIV-V3 were incubated at 37°C with human liver enzyme (purification step 1) in the standard reaction mixture for various times. Immediately after each time interval, human placenta enzyme (purification step 4) and UDP-[14C]GalNAc were added and incubated as described for the standard polypeptide GalNAc-transferase assays under "Experimental Procedures." The ability of the placenta enzyme to glycosylate the peptides after incubation with the liver enzyme, as a percentage of the initial ability at t = 0, is presented.
Analysis of the substrate specificity of impure GalNAc-transferase preparations may thus be biased by inhibiting factors. The observed difference in specificity of the separated GalNAc-transferase activities may be associated with a cofactor/modulator, as found for UDP-Gal:GlcNAc β1-4-galactosyltransferase (McGuire et al., 1965); however, several lines of evidence indicate this is not the case. First is the finding that the GalNAc-transferase activities can be physically separated, yielding two activities with the purified activity having an apparent lower Km and otherwise broad specificity. Second, transferase activity with a specificity similar to that of the purified transferase is found in crude extracts of certain organs (Table IV). Finally, to date two distinct GalNAc-transferase proteins have been isolated and cDNA cloned, and these show differential organ distribution by Northern analysis (White et al., 1995; Homa et al., 1993).
Detailed understanding of the acceptor-substrate specificity of different GalNAc-transferases clearly has to await cloning and expression of all members of this family of enzymes. It was previously expected that a difference in substrate specificity of multiple enzymes could be related to Ser and Thr acceptor sites (Wang et al., 1992; O'Connell et al., 1992; Harada et al., 1985); however, purified GalNAc-transferase from bovine colostrum has recently been shown to exhibit specificity for both (Homa et al., 1993). The present data corroborate this for GalNAc-T2, showing a near proportional purification of both Thr and Ser activities (Table III), and recombinant expressed GalNAc-T2 also showed Ser activity (White et al., 1995). Recently, Wang et al. (1992) showed that purified porcine submaxillary gland GalNAc-transferase, which is reported to be identical to GalNAc-T1 (Roth et al., 1994), exhibits very high substrate specificity for the human erythropoietin sequence -Ala-Ala-Ser-Ala-Ala-. Our preliminary data indicate that recombinant GalNAc-T1 is devoid of such activity, and recombinant GalNAc-T2 is very poor in utilizing this substrate.3 The reasons for these discrepancies are presently unknown but are under study.
In the present study we have worked primarily with acceptor peptides derived from proteins with unknown in vivo O-glycosylation patterns. The peptides derived from human mucin tandem repeats are likely to be glycosylated in vivo at least partly, but for the HIV and SIV peptides it is only known that a few O-glycosylation sites are utilized in vivo on these large glycoproteins (Hansen et al., 1992; Merkle et al., 1991). Recently, Elhammer et al. (1993) proposed a prediction model for O-glycosylation based on the occurrence of amino acid residues positioned ±4 to identified O-glycosylation sites. The prediction model does not identify the HIV-V3 sequence. However, a comparison of the prediction model with an in vitro GalNAc-transferase assay using 32 15-mer peptides covering the entire HIV gp120 protein allowed identification of three out of four sites by both methods; additionally, three sites were identified only by the in vitro enzyme assay (Clausen et al., 1994). Thus, some correlation was found between the statistically predicted sites and in vitro glycosylation, but the in vitro glycosylation assay using crude GalNAc-transferase (Cibacron eluates corresponding to step 1 in Table I) identified additional sites. Whether these sites indeed are glycosylated in vivo is under study, but difficulties in obtaining pure viral envelope proteins in sufficient quantity for structural analysis have hampered this effort. Importantly, the in vitro GalNAc-transferase assay identified sites in the hypervariable V3 loop of different HIV and SIV isolates, and O-glycosylation could therefore mask this principal neutralizing epitope. In fact, the HIV-V3 sequence has been shown to contain a T-cell class I epitope, and the predicted glycosylation sites are positioned in the middle, thus presumably being able to mask the site (Ishioka et al., 1992; Mouritsen et al., 1994).
Our analysis of the acceptor-substrate specificity of GalNActransferase preparations from different organs demonstrated both quantitative and qualitative differences (Table IV). A number of control experiments ruled out that these differences in specificity were a result of degradation and/or unknown blocking of acceptor substrate peptides. The most striking finding was that the HIV-V3 peptide indicated a qualitative distinction between enzyme extracts, thus agreeing with our interpretation that the Muc2 affinity chromatography results in the separation of two distinct GalNAc-transferase activities.
The differential organ expression of GalNAc-transferase activities using the HIV-V3 peptide was further corroborated by analysis of two partial sequences of the Muc1 20-mer tandem repeat (Table IV). Interestingly, the ability to glycosylate HIV-V3 correlated with Muc1a glycosylation, and lack of HIV-V3 enzyme activity with Muc1b glycosylation. Northern analysis of GalNAc-T1 (Homa et al., 1993) and GalNAc-T2 (White et al., 1995) expression indicated that human kidney and liver preferentially express GalNAc-T2, and these organs (rat kidney, human liver) appear to express a substrate specificity in agreement with that found for GalNAc-T2. The finding of apparent differential organ glycosylation of the in vitro identified glycosylation sites in the Muc1 tandem repeat (Muc1a: -Thr-Ser-; Muc1b: -Ser-Thr-) may be important for understanding the molecular basis of cancer-associated epitopes mapped to the Muc1 tandem repeat (Gendler et al., 1990). A number of antibodies to Muc1 have been generated, and most of these map to the knob-like structure defined by -Ala-Pro-Asp-Thr-Arg-Pro- (Taylor-Papadimitriou et al., 1993). Flanking these repeated knobs are the -Thr-Ser- and -Ser-Thr- motifs, which the present study indicates are differentially glycosylated by independent GalNAc-transferases. Structural analysis of in vitro glycosylated Muc1 tandem repeats using breast and pancreatic cell line extracts as well as semipurified human placenta GalNAc-transferase (purification steps 1 and 3) indicates that only the flanking sites are glycosylated and that the single Thr in the knob tip (-Pro-Asp-Thr-Arg-) is left unglycosylated (Nishimori et al., 1994a, 1994b).4 If the observed difference in in vitro glycosylation reflects the in vivo processing, this finding may have implications for the structure of Muc1 expressed in different organs and in cancer cells.

a GalNAc-transferase preparations were dialyzed, concentrated Cibacron eluates (step 1) were used for rat organs and S-Sepharose eluates (step 3) for porcine and ovine salivary glands and human placenta. Peptide substrate concentrations were 50 μg in a standard reaction assay. S.D. is given for triplicate assays.
In conclusion, the present study provides evidence that different GalNAc-transferase activities are involved in the initiation of GalNAc O-glycosylation and that these are differentially expressed in cells and organs. Identification of suitable acceptor substrates capable of distinguishing such transferase activities is believed to be a significant step forward in the understanding of GalNAc O-glycosylation processing and will be valuable for characterization of the substrate specificity of different GalNAc-transferase genes as these are cloned and expressed.
Cutaneous Pili Migrans: A rare case from North India
Cutaneous Pili Migrans (CPM) is a rare creeping eruption with only 40 cases reported so far. CPM is defined as a condition in which a hair shaft or fragment becomes embedded in the superficial skin. It is known to affect both adults and children. We report a 23-year-old male patient who presented with sudden sharp pain in the foot on walking. Cutaneous examination revealed a black linear thread-like lesion on the medial plantar aspect of the right great toe associated with mild to moderate tenderness. The particle was removed by superficial paring along with gentle forceps extraction. Microscopic examination confirmed it to be a hair shaft fragment. While the exact etiology of CPM is unclear, it is proposed to be an acquired condition. Since it can mimic other creeping eruptions like larva migrans, it is important to be aware of this condition.
Introduction
Cutaneous Pili Migrans (CPM) is a creeping eruption due to a hair shaft/fragment embedded in the superficial skin or dermis, which manifests as an active, linear or serpiginous cutaneous track with slightly elevated erythema. 1 Asians are predisposed, as Asian hair has higher tensile strength and can be bent to great degrees without sustaining fractures. 2 This, coupled with the fact that cutaneous larva migrans (CLM) is a differential, makes it important to know about this condition, specifically in the Indian context.
Case Report
A 23-year-old male presented with sudden sharp pain in the right great toe on walking for 3 days. Recently he noticed a black, curved, thread-like mark on his toe, which appeared to be under the skin (Figure 1). There was no history of trauma.
On examination, there was a black, semi-circular thread-like lesion on the medial border of the plantar aspect of the right great toe, associated with mild to moderate tenderness. An erythematous zone surrounded the advancing edge of the dark line.
The patient was subjected to superficial paring, and gentle extraction with forceps revealed a straight, linear, black strand, around 1.5 cm in length. Microscopic examination of the extracted foreign body strand using Dinolite AMZT73915 showed a hair shaft with a sharp end lacking the hair follicle. The hair shaft diameter was 0.086 mm (Figure 2). On follow-up after 1 month, the patient was asymptomatic and cutaneous examination was unremarkable.
Till date, fewer than 40 cases have been reported worldwide, possibly due to the rare occurrence and asymptomatic nature of the condition. 8 Predisposing factors for CPM include friction, wet feet or walking in waterlogged areas, contact with pets, and walking over recently clipped hair (such as after haircuts). Hairdressers, barbers, and dog groomers or handlers are more at risk. 1,5,9 CPM can affect people of all ages. Common sites include the ankle, sole, toe, breast, cheek or neck, and abdomen. 1,9 Histopathology is usually not required, but shows a cross-sectioned hair fragment or compact keratin in a tiny empty space in the superficial dermis. 9,10 A close differential is cutaneous larva migrans (CLM), and the differences have been tabulated (Table 1). As noted by Luo et al., the absence of a hair follicle implies an acquired etiology. 1 The lack of inflammation is also attributable to the relatively short time for which the hair has been embedded and the lack of Langerhans cells in the superficial layers of the epidermis. 10 Embedded hair can trigger a foreign body reaction, leading to the formation of epithelioid tracts around the hair shaft, further leading to secondary infection, inflammation, and formation of branching sinuses. 5 Interdigital trichogranulomas or sinuses, thus formed, were initially considered an occupational disorder more common among hairdressers. 10
Discussion
The consequences of the embedded hair shaft, in the form of reactive hyperkeratosis and interdigital pilonidal sinuses seen in hairdressers, were first reported as far back as 1954. 3,4 In 1957, Howard Yaffee reported the first case of CPM and likened the clinical presentation to larva migrans. 5 In 2001, Thai and Sinclair reported the case of a 37-year-old man of Indian origin with a 7-cm-long submerged, migrating hair and formally named the entity 'Cutaneous Pili Migrans'. 6 As noted by Luo et al., the absence of hair follicles implies an acquired etiology. 1 Disorders like pseudofolliculitis barbae are known to be highly inflammatory owing to the presence of a foreign body as well as secondary infection. When associated with infection, embedded hair is also known to trigger a foreign body reaction and inflammation, consequently leading to sinus formation. 5 Interdigital trichogranulomas or sinuses seen in hairdressers are one such form. Although CPM is also due to a foreign body, it demonstrates a relative lack of inflammation, which is postulated to be due to multiple factors such as the absence of secondary infection and the lack of Langerhans cells in the superficial layers of the epidermis. 6,7
Conclusion
In conclusion, CPM is a rare creeping eruption that occurs more commonly in people of Asian ethnic background. It can be easily diagnosed based on a thorough history and examination. Extraction by forceps provides immediate relief. Differentiation from cutaneous larva migrans is important, especially in the Indian setting.
BULGARIAN MUNICIPALITIES: KEY PLAYERS IN THE PROCESS OF THE IMPLEMENTATION OF EU FUNDS AT THE NATIONAL LEVEL
The role of Bulgarian municipalities in the processes of programs that implement EU funds is absolutely undeniable. They are direct or potential beneficiaries of the major part of the operational programs – both in the previous and current programming period. More so, they are the beneficiaries of so-called “big projects” (according to the European legislation, projects with budgets over €50 million), which are key infrastructure projects in priority sectors such as transport and water infrastructure. This paper is devoted to these municipalities and their attitudes towards European funds in the context of the overall development of the municipalities. The study is based on empirical research among representatives (n = 73) of the Bulgarian municipalities, and their perceptions on the importance of EU funds and programs for the municipalities’ development.
Introduction
Bulgarian municipalities are key beneficiaries at the national level of a major part of the programs financed by the European Structural and Investment Funds. It could be said that their role and importance for the overall process of the implementation of these programs was underestimated during the previous programming period (2007-2013), which was Bulgaria's first programming period as a regular member-state of the European Union. Currently we are almost at the end of the present programming period, and lessons have to be learnt, including through the provision of a special centralized governmental policy and measures that support the administrative capacity of the Bulgarian municipalities as key beneficiaries. Therefore, the progress of project implementation by municipalities, especially of projects concerning infrastructure, could be interpreted as an indicator of their readiness and project capacities in this important field. For a national economy such as the Bulgarian one, European funds are absolutely crucial for the development of society in almost all essential aspects, at both the national and regional levels.
On the other hand, the municipal authority is the public structure closest to the everyday life of its citizens. The importance and role of the municipality is indisputable in modern democracy, which proclaims local government independent from the centralized authority. Local authorities are responsible for the overall development of the region, ensuring the prosperity of citizens through different policies and measures. For this major purpose local governments possess, as a general rule, two basic sources of finance: on the one hand, revenues from local taxes and fees; and on the other, European funds. The present paper primarily explores European funds and their role in the development of Bulgarian municipalities in priority sectors, but also presents the bidirectionality of these relations (municipalities are just as important for the overall implementation of the operational programs as the resources provided by the programs are important for the local development of the concrete municipalities).
Literature overview
The role of European Funds and Programs and their impact on the European, regional, and national economy is a subject that is explored in depth, and involves many different authors' points of view. One of the major scientific foci in the sphere of European funds, logically, is the impact that EU funds have on the economy, and more specifically on the SMEs which are usually described as a backbone of the national and the European economy. For instance, Bostan et al. [6] present the impact of these funds on the competitiveness of SMEs in the specific geographic area of the Danube Delta. They conclude that the Structural Funds have made a major contribution to achieving this goal, being aimed mainly at meeting European standards on the environmental protection and economic development of the area, while respecting its biodiversity and the inhabitants' general interest in improving the quality of life in the Danube Delta [6]. The interest of another study is again focused on the role of EU funds in SMEs at the national level, this time in Hungary [4]. After a profound analysis, the scholars concluded: "According to our results, economic development funds had a significant positive effect on the number of employees, sales revenue, gross value added and, in some cases, operating profit. However, the labor productivity of enterprises was not significantly affected by any of the support schemes. Furthermore, by explicitly comparing non-refundable subsidies (grants) and refundable assistance (financial instruments), we find that there is no significant difference in the effectiveness of the two types of subsidy" [4]. The survival rates of enterprises in four relatively new EU member states (from the 2004 EU enlargement wave: the Czech Republic, Poland, Slovakia, and Hungary) are explored in detail in another survey. Using and analyzing data from an impressive sample of almost 42,000 companies, Baumöhl, Iwasaki, and Kočenda [5] claim that large shareholders, solvency, and more board directors are preventive factors; foreign ownership and higher ROA also increase survival rates; and larger firms and those hiring international auditors have lower survival chances. Further research is dedicated to the field of SMEs and specific instruments supporting their activities, this time in the specific context of open innovation (OI). After a comprehensive analysis especially focused on the companies operating in digital sectors, the authors found that the SMEs awarded the grants are less engaged in the challenging dimensions of Open Innovation than companies that did not receive any funding. This is contrary to the intended goals of the grants. They also provided policy and methodological implications relevant for the design of better OI-informed policy and the more effective evaluation of companies participating in the SME Instrument [9].
Kalfova has provided a multifactor analysis on regional policy in Bulgaria with a focus on the implementation of EU funds. The author claims that structural funds are the main tool of EU Regional policy, and the level of absorption is accepted as a substantial indicator of the successful implementation of Regional policy [14]. An interesting approach is proposed by Kiryluk-Dryjska and Beba [15] in the process of identifying the budget and its allocation for rural areas within the Common agriculture policy of the EU. The authors propose a method for the region-specific budgeting of European Union rural development funds, based on objectively measured indexes of rural development. The indexes are calculated based on statistical data with the use of factor analysis, and the results demonstrate that the proposed approach allocates the funds according to an assumed logic that supports the weaker and underdeveloped regions and features of agriculture. In the field of ensuring sustainable development through the landscape in the context of the European Union, Mann et al. [19] have provided a special comprehensive study. The authors identify three major conflict zones: "(1) agricultural production versus nature conservation, (2) urban sprawl and rural land abandonment versus landscape integrity, and (3) renewable energy generation versus landscape aesthetics." On this basis, they have proposed measures to improve European landscape policy through Integrated Landscape Management that combines and fosters collaboration between all stakeholders. Again in the field of agriculture is another study, this time dedicated to special policy on agroforestry, which is considered by the authors to be one of the active tools for achieving sustainability of land management. The scholars explore European policy in the field of agroforestry and conclude that agroforestry was poorly adopted in the CAP 2007-2013, having better success in the CAP 2014-2020 due to the recognition of woody vegetation and the compensation of 5 years given for maintenance once agroforestry is established. However, policy rules ensuring Pillar I payment when agroforestry measures are adopted, such as a management plans ensuring that maximum tree density (100 trees per hectare) is not reached, should be pursued. [22] The subject of the public system responsible for EU management and its possible improvements as well as proposals in terms of policy and procedures for beneficiaries are the focus of many scholarly works. In this regard, a plethora of scientific analysis can be outlined -for instance Anguelov [1,2] and Dobrovolskienė, and Tamošiūnienė [10].
The work of public authorities (including municipalities and centralized institutions) as beneficiaries under different EU funds and programmes is not so well explored, especially in comparison with the interaction between SMEs and EU funds. On the other hand, a large majority of recent research is focused on very specific aspects of the work of public authorities. For instance, Olanubi, Osode, and Adegboye [20] have explored the efficiency of the public sector in a very specific time period. On the basis of their analysis, the scholars concluded that their results reveal large-scale inefficiencies in the use of funds allocated to the scheme during the great recession and the euro area sovereign debt crisis that followed, with member states wasting on average 34.6% of funds allocated to it.
Naterer, Žižek, and Lavrič have explored the urban strategies prepared by the municipalities and their accordance with the general strategy at the EU level, Europe 2020, a strategy for smart, sustainable, and inclusive growth. The scholars explored a number of new integrated urban strategies (IUSs) prepared by the Slovenian municipalities, and considered that their results show that the IUSs of Slovenian cities are generally of low quality and that they conform poorly to the Europe 2020 strategy, but rather more to national guidelines defined by the Slovenian government [19]. New research on urban development sheds more light on the modern practices of cities (and of their government in local municipalities) in the context of entrepreneurship. After a profound comparative analysis covering 60 EU cities, it was noted that in the contemporary global economy, cities are essentially competing with each other in terms of attracting investments, businesses, inhabitants, and tourists, as well as improving citizen satisfaction. Cities use different tools to compete: strategic planning, marketing strategies, or city branding, for example… "Our results confirm that the top cities are located in Northern Countries" [21].
The requirements of sustainability that are characteristic of all EU funded projects are extremely important in terms of community development due to the fact that through sustainability there is a guarantee that public money is spent towards a visible purpose. In this regard, interesting analysis has been conducted by Dobrovolskienė, Tvaronavičienė, and Tamošiūnienė [11]. On the other hand, the role of EU funds both in the public and private sectors in Lithuania is described elsewhere [23].
The role of municipalities in the field of waste management and related services and their implementation by different stakeholders are the subject of vivid scientific interest by different authors within the countries of the European Union. For instance, Chamizo-González, Cano-Montero, and Muñoz-Colomina [7] have explored the type of management and taxes in this field. After their comprehensive analysis they concluded that their results reveal, first, that the most widely-used solution at local government level is the easiest to apply, namely, a flat rate per household or a step-variable flat rate covering on average 59.03% of the cost (in 2012); and, second, that Madrid's waste step-flat rate cannot be considered a PAYT system, despite the fact that it covers up to 70% of the cost (in 2012) [7]. Another piece of research dedicated to the subject of the role of municipalities in the field of solid-waste recycling and the correspondence of practice to the guidelines of Europe 2020 has been developed by two Spanish authors. Expósito and Vlasko have explored, in depth, the experience of Spanish municipalities, and have provided on this basis a comprehensive regional efficiency analysis at the national level.
In conclusion, the scholars claim that their results confirm that Catalonia, Navarre, and Madrid function as benchmark regions to be emulated by the remaining inefficient regions. The necessary regional investments and output projections to reach an efficient development of the recycling sector are also estimated. Additionally, it is found that per capita income and population density significantly explain differences in regional efficiencies [13].
On the other hand, the type of public institution predetermines its role in the process of the implementation of EU funds. For instance, Higher Education Institutions, which are also potential beneficiaries under EU programs managed at the national level, are placed under essentially very different conditions than municipalities in their role as potential beneficiaries. Unlike municipalities, Higher Education Institutions are not direct beneficiaries of any operational programmes. More information on the role of Higher Education Institutions in the process of the implementation of EU funds has been developed by other scholars [3].
In summarizing the literature overview, it is clear that the role of modern municipalities is complex, dynamic, and difficult, and covers different aspects of modern life, including economic, social, culture, urban etc. Therefore, scientific interest is absolutely logical when taking into account the fact that local authorities are the closest public institutions to their citizens. In the present paper we will explore the role of Bulgarian municipalities as key stakeholders in the process of the implementation of EU funds at the national level, and their assessment of the central management of EU funds.
Methodology
For the purpose of our current research a special questionnaire was developed, devoted to the different aspects of the overall process of the implementation of EU funds and focused on the municipalities and their role as potential and direct beneficiaries of operational programs. The questionnaire included 35 questions, aiming to understand self-assessment from two sides. Firstly, from the perspective of the activities of concrete municipalities in the field of the preparation and implementation of EU projects; and secondly from the perspective of the assessment of representatives of the local authorities on the overall work of central administration in managing EU funds in Bulgaria. These two types of assessment are desperately needed, especially taking into consideration the final timing of project implementation during the current programming period (2014-2020).
The questionnaire developed consisted of 3 types of questions. The first type of question involved closed questions, where the respondents had to choose among different options of predefined answers. The second type of question was open, and respondents were asked to give their own original answers. The third and final type of question was designed using the rating scale, where the respondents were asked to evaluate, using the scale presented, different key elements of the overall project cycle -from the project preparation phase to the process of submission, evaluation, implementation, reporting and monitoring, final evaluation, and sustainability.
The questionnaire was sent to all 265 Bulgarian municipalities via e-mail in two major phases: the first period saw it distributed among big municipalities, which are also district centers and of which there are 27; and then in the second period it was distributed to the remainder of all Bulgarian municipalities. In order to facilitate access to the questionnaire for the different representatives of the Bulgarian municipalities, as well as to simplify the process of its fulfillment, we used the online platform Google Forms.
Due to the specifics of the information provided by the questionnaire, as well as in order to ensure reliable and quality primary information, our respondents are anonymous. We collected information only on the name of the municipality and the role of the representative in terms of EU implementation projects. We did not impose a limitation of one answer per municipality; therefore, there are several municipalities where different experts have completed the questionnaire. For the purposes of this research, this fact only brings more clarity and gives more reliable information on the situation of the respective municipality in its specific role as a potential or direct beneficiary of EU funds through operational programs. On the other hand, despite targeting every Bulgarian municipality, we received 112 responses from 73 municipalities.
Results and Discussions
The profile of our respondents covers essential information such as sex, age, level of education, and their position within the municipality. In terms of sex, the demographics are clear: there is a dominance of female respondents, with almost 77% (76.7%) of respondents being female and 22.3% male (Figure 1). This finding in fact reflects the reality of Bulgarian administration, where the predominant number of employees are women.
Figure 1. Sex of respondents
The second question concerning the profile of respondents is regarding their age. These results are presented in Figure 2. As can be seen from the figure, the majority of respondents are in the age group of between 41 and 50 years old. This fact could be considered as positive in terms of the level of experience of employees of municipalities, including in the field of project preparation and implementation. The next group according to their number consists of people aged 31-40, followed by the group of respondents aged 51-60. This finding again could be considered to be a strength of the municipalities -it could ensure the succession between different generations of employees and the transfer of knowledge, specifically knowledge achieved on the basis of experience and sufficient practice in team work between different age groups. Young people in the local authorities among our respondents form a share of 12%. Therefore, our sample has representatives from all age groups -a prerequisite for the quality of the primary information collected from the survey.
Figure 2. Respondents' ages
The next question on the profile of respondents is regarding their educational level. Our findings here categorically identify that all respondents have tertiary education, and one among the 112 respondents has a PhD degree. It is interesting to note that this holder of a PhD is a representative of a big municipality administration, in a district center with many universities.
The next question collected information on the positions of the respondents. These results are presented in Figure 3. For the specific purposes of our survey, we predefined four answers and the respondents were asked to choose which best represented their position in implementing EU funded projects from four options: Manager of the administration (i.e., the mayor and deputy mayors); EU Project Team Manager; EU Project Team Member; or Final beneficiary of EU funded project. As can be seen from Figure 3, the largest share of respondents was formed by EU Project Team members -almost 50%, or one half of respondents -followed by the group of EU Project Team managers (22.2%). Our third most represented group of respondents are managers of the overall local authority, which means that 18.6% of the respondents are mayors or deputy mayors in the different types of municipalities. The smallest group includes respondents who are the final beneficiaries of the EU funded projects, and work in local administration. This group's size suggests the overall quality of the collected information and ensures its reliability. This structure to our respondents is absolutely sufficient in terms of their competence, experience, and position within municipalities.
The respondents were asked to compare the changes made by the managing authorities and the National Coordination Unit within the Council of Ministers Administration between the two programming periods of Bulgaria as a regular member state of the European Union. The results are presented in Figure 4. The question through which we collected information on this topic was as follows: "According to your personal opinion, compared to the previous programming period, in general, the procedures related to preparation and monitoring processes are…." The respondents were again given predefined answers and the option to choose only one response among five different assessments (two positive, two negative, and one neutral). According to a significant majority of our respondents, the changes between the two programming periods initiated by the central authorities are considered in a positive light. 38.6% of our respondents declared that the changes made by the managing authorities significantly improved upon the initial situation, and at the same time another 42.8% of respondents were positive but more moderate, claiming that there was an improvement but it could have been better. These findings reveal that, in total, 81.4% of respondents positively assessed the changes made by the managing authorities in the application and monitoring phase. For 14% of respondents there were no significant changes, and the remaining 4.5% of the total evaluated the changes negatively.
Figure 4. Assessments of the respondents on the changes made by Managing Authorities in the preparation and monitoring phases, %
This impressive support by the representatives of local authorities for the decisions of the central administration responsible for EU funds in Bulgaria is, in fact, mainly due to the electronic procedures for project application submission, as well as monitoring reports introduced with the beginning of this programming period by the Central Coordination Unit within the Council of Ministers' administration. The Unified Managing Information System (UMIS) operates at the national level, and this programming period was developed through new functionalities so that we now have electronic procedures for the submission of project application forms and the monitoring of the funded project. These huge changes, especially in comparison to the previous programming period, were accepted with enthusiasm by all types of beneficiaries, including enterprises, non-governmental organizations, and different types of public institutions which are potential beneficiaries. On the other hand, the municipalities, which are some of the biggest beneficiaries including of big (over €50 million) infrastructure projects, for instance in the field of the environment (for different types of waste infrastructure etc.), have enormous documentation for reporting that has to be included in one interim request for payment sent to the Managing Authority of the responding programmes. If we imagine the very real situation that one municipality can be a beneficiary of three or four projects at the same time under different operational programmes, then the volume of documentation that has to be sent to the Managing Authorities accumulates drastically. Here we do not even consider the situation that each Managing Authority could ask for the same type of document. All of these problems have, in fact, been overcome by the usage of the new functionalities of the UMIS 2020. They are undoubtedly in favor of the beneficiaries, but are also in favor of the Managing Authorities and audit institutions as well.
These conclusions are supported by the answers received to the special questions dedicated to the new functionalities of UMIS 2020. All of the respondents were asked to evaluate the new functionalities of UMIS with the following question: "Do you think that the electronic submission of project proposal launched, as well as the electronic monitoring of an implemented project, support the preparation and project implementation processes?" Evaluation was executed through a ranking system from 1 to 7, where 1 indicated a "very slight benefit" and 7 a "very strong benefit." The results are visualized in Figure 5, and form a very clear evaluation of the municipalities as beneficiaries of EU funded projects. As we can see from the data, approximately half of the respondents (almost 51%) indisputably evaluated the new functionalities of the system with the highest score. This result could be considered, with great confidence, to indicate that these changes are broadly accepted by the experts of the municipalities responsible for the preparation and implementation of EU funded projects. The next assessment asked the representatives of municipalities the following question: "According to your personal opinion, what would be the effect of shortening the deadlines for the evaluation and approval of project proposals?" Again, the respondents were asked to evaluate this effect through the 7-degree scale, where 7 indicated a "very strong effect." Our results ( Figure 6) again indicate positive assessments -41% of our respondents evaluated the potential effect from the shortening of deadlines in the procedures of evaluation and approval of project proposals from the Managing Authorities as potentially having a very strongly positive effect for beneficiaries. In fact, the relatively long deadlines for approval are one of the most common criticisms from enterprises aimed at the work of the Managing Authorities. This is logical, having in mind the strong competition and the speed of business, for instance in an open call for innovations in a project proposal. As far as the municipalities are concerned, we can again see their opinion on the potential effect of shortening the timing for project proposal approval. The next question for which evaluation was requested through the same system is the following: "According to your personal opinion, what would be the effect on beneficiaries if the requirements of all operational programmes were standardized?" The results in Figure 7 show the most categorically expressed opinion on the potential for eventual change thus far. One frequent criticism during the previous programming period was connected to the fact that each Managing Authority has its own procedures, rules, and requirements of the beneficiaries that can differ drastically from one to the other. In practice, this leads to confusion among beneficiaries that have many projects under different operational programmes (and all municipalities are guilty of this), resulting in the making of frequent mistakes due primarily to these different rules.
The next possible change put to our respondents for evaluation was the following question: "In your personal opinion, what would be the effect if the amount of advance payment to municipalities was further increased?" The support expressed in the assessment scores is again very clear -the representatives of municipalities that responded to the questionnaire found this eventual change to be very positive (Figure 8). In fact, in the previous programming period this was one of the most common recommendations to the Managing Authorities. The problem usually arises for big infrastructure projects, where the necessity of operational financial resource is strongest. Now, however, the municipalities already have good experiences of and collaboration with the FLAG fund, which is designed especially for the needs of local governments and local authorities.
The next evaluation was again connected to payments, but this time it concerned the final payment. The respondents were asked to evaluate the effect of reducing the time for final payment. The question was phrased: "According to your personal opinion, what would be the effect if the deadline for the final payment was reduced to 30 calendar days?" This potential measure in the vein of supporting beneficiaries is commented on and proposed by all types of beneficiaries - they share the same opinion on the deadlines needed by the Managing Authorities to make final decisions on the concrete project and to proceed to the final payment (Figure 9). It is common practice by all Managing Authorities, in order to ensure and to secure public resources, to unnecessarily complicate the procedures that lead to the final payment. Therefore, a reasonable solution that is accepted by both sides has to be developed. For instance, for problematic projects the overall final procedure should be absolutely obligatory, while for the rest of the projects another principle should be developed that keeps risk at the required level.
The next evaluation is on the very sensitive subject of the implementation of Public Procurement legislation as the major tool for spending public money by different types of public authorities and institutions. One of the major burdens related to the delays, and often to the impossibility of executing some of the initially planned project activities, is the difficulty of the procedures of the Public Procurement Law. Over the years there have been different changes to the Bulgarian Public Procurement Law, but in fact these changes have not led to better procedures and implementations. The question that was posed for evaluation by our respondents was: "According to your personal opinion, what would be the effect of improving the procedures under Public Procurement Law?" The results of this evaluation are presented in Figure 10. During the previous programming period, and indeed during the current one, the municipalities have gained rich experience in the different procedures under Public Procurement Law. Some of the Managing Authorities execute ex-ante control of the overall documentation of concrete procedures prepared by beneficiaries, but there are two absolutely opposite opinions on this practice. From one side, ex-ante control is perceived as some kind of initial insurance on the public procurement procedure. From the other side, ex-ante control usually takes more time than the beneficiary has planned. Perhaps the most common criticism of ex-ante control is the fact that there is no shared responsibility: once a procedure has approval from the Managing Authority's ex-ante control, there is no guarantee that the responsible audit institutions will not impose financial corrections due to imperfections in that same procedure, even though it has passed the ex-ante control of the Managing Authority. Therefore, this impressive level of approval of the potential improvements to the Public Procurement Law is no surprise.
The final evaluation of potential change is connected to the major subject of sustainability, which is another field in the process of implementing EU funds on which Managing Authorities have differing interpretations. The question used to collect the distributions of opinions is the following: "In your personal opinion, what would be the effect if the institutions responsible for the control of the sustainability of the projects unified their requirements?" The results achieved from this question are presented in Figure 11, and reveal the most categorically clear picture made across all of the evaluations. In comparing all of the evaluations, the results categorically indicate that, according to the representatives of the municipalities, the most desirable change is the unification of sustainability requirements. This finding in fact corresponds to the recent practice and financial corrections of the projects of the municipalities that have already been implemented which, however, fail on the issue of sustainability. One of the possible solutions here is connected to the centralized guidelines approved by the deputy-ministers of the EU funds in Bulgaria, which are compulsory for all institutions at the national level. These guidelines have to be in accordance with the European and national legislation in the field, and approved by the majority of stakeholders in a broad public discussion.
Conclusion
Local municipalities and local governments have key roles in the overall process of programs for the implementation of EU funds at the national level. Their opinions are very important as they already have rich experiences, and lessons have to be learnt in order to improve the environment, applicable legislation, and procedures.
The findings from our research indicate that the representatives of municipalities have very clear understandings of the specific requirements that have to be achieved in preparing, implementing, and reporting a project financed by the European Structural and Investment funds. However, there must be an intersection between the requirements of the Managing Authorities in terms of securing the legitimacy of every public euro spent on a project, and the proposals of municipalities as one of the major players in the field of EU funded projects at the national level. Representatives of the different municipalities declare their clear appreciation for the changes that have already been made to procedures, especially those on e-project submission and e-monitoring. On the other hand, they also point out the need for significant improvement in terms of clarifying unified practice on the sustainability of projects, and the specific requirements therein for beneficiaries.
Flowering traits in tetraploid Brachiaria ruziziensis breeding. Crop Breeding and Applied Biotechnology 16: 95-101, 2016
Tetraploid Brachiaria ruziziensis genotypes which reproduce sexually are essential for the breeding of other species of the Brachiaria genus which reproduce by apomixis. Aiming at studying the available phenotypic and genetic variability in the breeding population of B. ruziziensis, the parameters heritability and the genetic and phenotypic correlations between the traits associated with flowering and the traits responsible for forage yield and nutritional quality were estimated. Seventeen traits in 1180 individuals from 59 open pollinated families were studied, and the data were analyzed by mixed model methods. Individuals with sparse flowering presented higher breeding values for total dry matter yield and total number of panicles per plant than individuals with early or late flowering. Breeding population differences in flowering behavior, individual narrow sense heritability, and genetic correlations between flowering, agronomic, and nutritional quality traits have to be considered in intrapopulation breeding and in intrapopulational recurrent selection.
INTRODUCTION
sexual individuals, independently of being the initial sexual population, such as purely tetraploid B. ruziziensis, or the interspecific sexual individuals generated in exploratory crosses. Aiming at achieving more efficient breeding methods, high level knowledge on the inheritance and genetic variability for economically important traits associated with yield is necessary, as well as on traits associated to cross ability, such as distribution of flowering in the species, and if there are genetic factors which cause differences.
In breeding programs of tropical forage grasses, the flowering period and the time of the year in which it occurs are not generally target traits for selection. Selection criteria are more associated with traits of forage yield and nutritional quality than with flowering time. This is because under cultivation and with grazing pressure, flowering is practically null. According to Santos et al. (2004), with correct management of cultivated tropical grasses, flowering is controlled (or prevented) because the length of stems is constantly reduced by herbivory. Such management results in higher nutritional value and quality of the forage offered to animals, and also reduces the losses caused by dead matter accumulation (Marcelino et al. 2006).
However, the phenology of reproduction in forage grasses is an important component to be considered in breeding, which involves carrying out controlled crosses to obtain full-sib or open pollinated progeny. The importance is also extended to the production of seeds, both in the quantity and quality needed for commercialization, which will determine success in adopting the improved cultivar.
In this context, the objectives in this work were to determine the flowering period in open pollinated B. ruziziensis progenies, in order to estimate genetic and phenotypic parameters of traits associated with flowering, and to estimate the correlation between these traits and those associated with the yield and nutritional quality of forage.
Progenies and phenotypic evaluation
The 59 open pollination progenies of sexual and tetraploid Brachiaria ruziziensis evaluated in this experiment were obtained as described by Simeão et al. (2012). In the progeny test, 1180 individuals germinated from seeds were experimentally evaluated. The experiment was carried out in a randomized block design, with 20 replications, with one plant per plot, spaced at 1.5 m x 1.5 m, and planted in November 2012. The experiment was carried out at Embrapa Gado de Corte, in Campo Grande, state of Mato Grosso do Sul (lat 20° 28' S, long 55° 39' W, alt 530m asl). Field soil is classified as Haplic Ferralsol (Rhodic) (FAO 2006). According to Köppen, the climate type is Aw, humid tropical, with rainy summer and dry winter.
Nine cuts were carried out to evaluate forage yield, at a height of 15 cm, in individual plants, in the period from January 2013 to January 2014. After the ninth yield evaluation cut, which was carried out on 1/22/14, phenological evaluation of flowering started for the 1180 individuals. This evaluation took place weekly over a period of 84 days, on the following dates: 02/12/2014, 02/19/2014, 02/26/2014, 03/05/2014, 03/12/2014, 03/19/2014, 03/26/2014, 04/02/2014, 04/09/2014, and 04/16/2014. The following morphological traits associated with flowering were evaluated: number of panicles per plant (NPP), evaluated per week, and total number of panicles in the period; number of ears per raceme (NER), in a sample of five racemes per plant; mode of flower insertion in spikelet (IES) (0 - uni-serial; 1 - bi-serial; 2 - mixed; 3 - complex); stigma color (CEST) (0 - white; 1 - pink; 2 - purple; 3 - dark purple; 4 - black; 5 - other); anther color (CANT) (0 - white; 1 - pale yellow/gray; 2 - yellow; 3 - brown; 4 - bluish/other). The number of days to the first flowering (DTF) was counted from the date of the last cut, and only after the presence of at least three racemes per plant. Individuals which started flowering from the 21st to the 42nd day were classified as early; those that flowered from the 49th to the 70th day, as intermediate; those which started after the 77th day were considered late; in addition, individuals which flowered over more than one period were classified as having sparse flowering.
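As a minimal illustration of the grouping rule just described (a sketch, not code from the original study; the function name, the n_periods argument, and the handling of days outside the stated ranges are assumptions), the classification could be encoded as follows:

def flowering_class(dtf, n_periods=1):
    """Classify an individual by days to first flowering (DTF), counted from the
    date of the last cut; individuals flowering in more than one period are 'sparse'."""
    if n_periods > 1:
        return "sparse"
    if 21 <= dtf <= 42:
        return "early"
    if 49 <= dtf <= 70:
        return "intermediate"
    if dtf >= 77:
        return "late"
    return "unclassified"  # weekly visits make days outside these ranges unlikely

print(flowering_class(35))               # early
print(flowering_class(84))               # late
print(flowering_class(56, n_periods=3))  # sparse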
Samples of leaves obtained from cut 2, which was carried out on 03/04/13 and 03/05/13, were analyzed using Near Infrared Spectroscopy (NIRS) (Marten et al. 1985). The Van Soest (1994) sequential method was used to estimate the fiber components, which are associated with the nutritional quality of the forage grass. The following total fiber components were obtained: neutral detergent fiber (NDF), lignocelluloses (acid detergent fiber, ADF), cellulose (Cel), lignin content via sulfuric acid (Lig S), lignin via permanganate (Lig P), crude protein content (CP), and silica (Sil). In this study, the variables ADF, Cel, Lig S and Lig P were expressed as a proportion of the neutral detergent fiber (NDF), in order to characterize the fiber quality, rather than only estimating the absolute values of these components. The in vitro dry matter digestibility (IVDMD) was also determined, which represents the potential for digesting fiber in ruminants, and therefore is useful as an index of biomass quality (Gouy et al. 2013).
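A small numerical sketch of this re-expression (illustrative only; the function name and the example component values are hypothetical, not data from this study):

def fiber_proportions(ndf, adf, cel, lig_s, lig_p):
    """Express fiber components as proportions of neutral detergent fiber (NDF)."""
    return {"ADF/NDF": adf / ndf, "Cel/NDF": cel / ndf,
            "LigS/NDF": lig_s / ndf, "LigP/NDF": lig_p / ndf}

# Hypothetical component values (% of dry matter) for one leaf sample:
print(fiber_proportions(ndf=65.0, adf=33.0, cel=28.0, lig_s=4.0, lig_p=3.5))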
The agronomic traits green matter (GM, in g plant⁻¹), total dry matter yield (TDMY, in g plant⁻¹) and regrowth capacity (Reg), as described by Figueiredo et al. (2012), were analyzed for cut 9, carried out on 01/22/14, that is, 21 days before the beginning of the phenological evaluation of flowering.
Statistical methods
All the univariate analyses were carried out using mixed linear models. The following statistical model was used in the analysis of the nutritional quality traits (NDF and proportions of ADF, Lig S, Lig P, Cel, CP, Sil), of the flowering traits (NPP, NER, IES, CEST and CANT), and of the agronomic traits (GM, TDMY and Reg): y = Xr + Za + e, in which y is the vector of data, r is the vector of the effects of replication (fixed) added to the general mean, a is the vector of the (random) individual additive effects, and e is the vector of random residuals. The upper-case letters represent the incidence matrices for the effects. The individual narrow sense heritability (h²a) was estimated considering a correction by Wright's kinship coefficient (Resende 2002a), due to the 1/7 proportion of crosses among related individuals considering the seven initial genitors that gave rise to the 59 progenies. The selective accuracy was estimated according to Resende (2002a), based on the prediction error variance (PEV), via elements of the inverse of the coefficient matrix in the mixed model equations. The PEV statistic is related to the accuracy by means of the equation râa = (1 - PEV/σ²a)^(1/2), in which σ²a is the genetic variance among the progenies under evaluation. Genetic correlation among these traits was estimated according to Falconer and Mackay (1996): ra = cova(x,y) / (σax σay), in which cova(x,y) is the additive genetic covariance between traits x and y; σax and σay are the additive genetic standard deviations for traits x and y, respectively. Student's t-test was used to test whether or not the correlation differs significantly from zero (Steel and Torrie 1980). Deviance statistics were used for hypothesis tests on the genetic effects. The fit of different statistical models to the data was tested using the Wilks Likelihood Ratio Test (LRT) (Dobson 1990, Resende 2007). All the statistical analyses were carried out using the Selegen REML/BLUP software (Resende 2002b).
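As a minimal numerical sketch of the two expressions above (illustrative only; the variance components shown are made-up values, not estimates from this experiment):

import math

def selective_accuracy(pev, sigma2_a):
    """Accuracy r = (1 - PEV / sigma^2_a)^(1/2), from the prediction error
    variance (PEV) and the additive genetic variance among progenies."""
    return math.sqrt(1.0 - pev / sigma2_a)

def genetic_correlation(cov_a_xy, sigma_a_x, sigma_a_y):
    """Additive genetic correlation r_a = cov_a(x,y) / (sigma_a(x) * sigma_a(y))."""
    return cov_a_xy / (sigma_a_x * sigma_a_y)

# Hypothetical variance components for illustration:
print(round(selective_accuracy(pev=0.12, sigma2_a=0.45), 3))                         # 0.856
print(round(genetic_correlation(cov_a_xy=0.30, sigma_a_x=0.70, sigma_a_y=0.60), 3))  # 0.714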
RESULTS AND DISCUSSION
Individuals of B. ruziziensis progenies presented high amplitude in the number of days to flowering, with an interval that reached 63 days between the earliest and latest flowering (Figure 1). The highest percentage of individuals flowering at the same time occurred at 84 days, and those individuals also presented the highest number of panicles per plant. Of the 1180 individuals evaluated, 22% did not flower during the period under study, and 1% presented early flowering; that is, they flowered from day 21 to day 35. Late flowering, from 70 to 84 days, was observed in 43% of the individuals. Sparse flowering was observed in 34% of the individuals. This information is important for intraspecific breeding in tetraploid B. ruziziensis, especially if it is considered that this species is an essential component in crosses with apomictic species of commercial importance.
Figure 1. Flowering distribution of B. ruziziensis progenies.
Since open-pollination progenies generated in the intraspecific breeding cycle will be used, asynchronous flowering may occur depending on the individuals selected, resulting in inappropriate genetic sampling and compromising the predicted genetic gains in subsequent cycles. Furthermore, knowledge about the flowering period for both the sexual and the apomictic components in interspecific hybridization is an essential condition for planning controlled crosses and achieving success in obtaining hybrids. The effects of genotype x environment interaction in the expression of flowering are well known in grasses (Arnout et al. 2014), and this interaction may result in the different expression of traits per location. This can be exploited in practice, if necessary, by carrying out crosses between asynchronous elite individuals. However, GxE interaction was not quantified in this research, which was carried out in a single year and only at one site.
Breeding values of the individuals analyzed for the traits crude protein content and acid detergent fiber were similar among the previously defined flowering-period distribution groups (Table 1) and were not influenced by them. These results differ from those obtained by Casler et al. (2014), in which sparse flowering cultivars of orchardgrass presented 9% more crude protein and 3% less NDF, but they corroborate previous studies in which there was no evidence of an effect of flowering on forage quality in the same species (Berg et al. 1981).
It was observed that for the traits total dry matter yield (TDMY) and number of panicles per plant (NPP), mean breeding values differed among groups (Table 1). Individuals with sparse flowering presented mean breeding values for TDMY 19% higher than those of late-flowering individuals, 49% higher than those of early-flowering individuals, and 36% higher than those of individuals that did not flower during the total period considered. For the NPP trait, sparse flowering individuals presented 64% more panicles than the late flowerers, and 70% more than the early flowerers. However, it should be emphasized that the reference for forage yield used in this experiment was taken before flowering began. In this case, the valid association is that individuals with greater forage mass in the cut prior to the beginning of the flowering period were those which had more intense flowering for a longer period. Results observed for TDMY and NPP in B. ruziziensis differ from those obtained by Casler et al. (2013) in orchardgrass (Dactylis glomerata L.), whose cultivars with sparse flowering presented 57% fewer panicles than the cultivars with concentrated flowering. They also presented forage yield 24% to 32% lower, depending on the cutting management.
The main factors of flowering in forage grasses, both temperate and tropical, are the length of the day (photoperiod) and the temperature (Humphreys et al. 2006), and their interaction with genes associated with this trait, activating or deactivating them. In this context, the investigation of genetic variability and its quantification is essential for breeding purposes. Among the studied traits associated with flowering, only IES did not present genetic variability in the studied population (Table 2). The narrow sense heritability corrected for endogamy for the traits NER, CEST, CANT, DTF and NPP presented low (0.14) to high (0.90) magnitudes. Narrow sense heritability for the number of days to flowering (DTF) was of low magnitude (h²a = 0.20) in B. ruziziensis, and of similar magnitude (h²a = 0.17) for the same trait in tall fescue (Festuca arundinacea Schreb.), obtained by Amini et al. (2013). For the number of panicles per plant (NPP), narrow sense heritability found in B. ruziziensis presented higher magnitude (h²a = 0.74) than that obtained by Amini et al. (2013) for tall fescue (h²a = 0.46). The high heritability magnitude of the trait CANT (h²a = 0.90) is an indication of few genes determining it. Furthermore, narrow sense heritability varied from 0.24 to 0.75 for the agronomic traits, and from 0.12 to 0.20 for the nutritional quality traits. Low heritability traits demand more efficient and accurate breeding methods, as well as the appropriate use of all genetic information available in the experiments (Simeão-Resende et al. 2013). In this context, selection for regrowth capacity, for all traits associated with forage quality, and for the number of days to flowering should use methods such as combined selection and best linear unbiased prediction to obtain greater gains per cycle.
Genetic variation coefficients among individuals (CVgi) and among progenies (CVgp) revealed marked differences among traits. Nutritional quality traits presented the lowest magnitudes for these parameters, below 16%. The traits NER and NPP presented high magnitudes for both genetic variation coefficients. Due to the importance of these last two traits for yield and commercialization of forage seeds, the detection of this variability represents a contribution to breeding programs for B. ruziziensis, and also to its use in directing controlled crosses with apomictic accessions of species of greater commercial importance. The agronomic traits GM and TDMY presented higher genetic variation (>29%) than nutritional quality traits for this forage grass, meaning higher genetic variation available in biomass yield selection. High magnitude accuracy (>85%) for an efficient selection was obtained for the traits GM, TDMY, NER, and NPP, regardless of the fact that relative genetic variation coefficients were lower than 1.0. High accuracy may occur due to the large number of replications used in the experiment, given that accuracy and number of replications are interconnected (Resende and Duarte 2007). This evidence cannot be extended to the nutritional quality traits or to CEST and DTF, due to the lower genetic variability expressed for these traits in the progenies evaluated.
Table 3. Phenotypic (above diagonal) and genetic (below diagonal) correlations among traits associated with flowering phenology, nutritional value and yield, evaluated in progenies of tetraploid B. ruziziensis.
Significant genetic correlations of high magnitude were observed between the traits NPP and TDMY (Table 3), indicating that direct selection for TDMY would be effective in improving panicle yield. It still needs to be investigated whether there is a positive and significant genetic correlation between the number of panicles and the production of viable seeds in B. ruziziensis. There is a strong negative genetic correlation between panicle density and flowering date, which reinforces the differences between the mean genetic values for NPP among the flowering distribution groups, as presented above. The phenotypic correlations between the nutritional quality traits and those associated with flowering were not significant. The genetic correlations between these same traits in many pairwise combinations were significant and of low to moderate magnitude, positive or negative. Given the importance of genetic correlations between traits for breeding, knowledge of this aspect supports and directs the selection for each trait and should be considered in practice, especially if the aim is to increase the number of panicles per plant, which will promote a reduction in the percentage of crude protein and an increase in NDF; in either case, the result will be undesirable.
An overlap in flowering between individuals selected for obtaining open-pollination progenies is essential for the intrapopulation recombination cycle, and also for carrying out controlled crosses with apomictic species. In the first case, this is because all individuals in the population must be able to recombine and maintain the effective population size through successive breeding generations (Johnson et al. 2004); in the second, obviously, because there can be no cross without overlap. Knowledge of the mode of inheritance of the traits associated with flowering, of their correlation with other economically important traits, and of the variability available in the breeding population of B. ruziziensis will allow their proper use in breeding species of the Brachiaria genus by interspecific hybridization.
|
2019-02-26T16:14:21.698Z
|
2016-06-01T00:00:00.000
|
{
"year": 2016,
"sha1": "fe5dc246e1605290eda156e5e5e055bbe4e1e50d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1590/1984-70332016v16n2a15",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d952572da40f3fe8c8fa2b04e5eb5e41477ad7be",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
}
|
220794648
|
pes2o/s2orc
|
v3-fos-license
|
Hierarchical Relevance Determination based on Information Criterion Minimization
This paper addresses the issue of hierarchical relevance determination (HRD), which boils down to determining all degrees of freedom in a supervised mixture distribution automatically. Such relevance determination is useful for a wide range of machine learning applications. However, it is difficult to solve the HRD task because its objective function includes L0 terms such as the number of models in the mixture and the number of features to use. Our contribution is twofold. As the main contribution, we formally defined the HRD problem and subsequently proposed a solution strategy, the SICM (sequential information criterion minimization) algorithm. The SICM algorithm enables us to continuously minimize an information criterion such as AIC or BIC, both of which include the number of parameters, and therefore it enables us to determine all the degrees of freedom automatically. As another contribution, we realized a concrete implementation of the ideas and tested it on actual data. Experiments using a hierarchical model constructed with the SICM algorithm have revealed that SICM is capable of constructing interpretable and highly accurate models.
Introduction
Recently, demand has been growing for the use of interpretable prediction models in many machine learning problems, whereas uninterpretable models with high prediction accuracy such as deep learning have been widely used. For example, when constructing a credit rating model, it is legally required that the model be made interpretable. Interpretable models include decision tree and regression models such as linear regression and logistic regression.
Such frequently used interpretable models are simple. Therefore, they are usually inferior to uninterpretable models in terms of prediction accuracy. One solution for prediction accuracy improvement is to construct an interpretable model by combining multiple interpretable models.
In addition, it is important to conduct relevance determination to construct an interpretable model. Relevance determination is useful for pruning irrelevant features and for obtaining sparse models. Consequently, it is an important machine-learning tool.
Motivated by this background, we address the issue of relevance determination in supervised mixture distributions [1]. We designate this issue as hierarchical relevance determination (HRD). The HRD task is to simultaneously optimize the number of mixture components, the model parameters of the individual components, and the latent variables that assign observations to the components, and to select an optimal subset of the input variables used in the individual components. As described below, the task is decomposed into a mixture model selection task and a variable selection task.
Mixture model selection (model selection hereinafter) is an estimation task for a mixture distribution: to optimize the number of mixture components, the model parameters of the individual components, and the latent variables simultaneously. The expectation-maximization (EM) algorithm [2] has been used widely to estimate the parameters of mixture distributions. However, the number of components cannot be optimized using the EM algorithm. Model comparison approaches are often used to determine the number of components automatically: a mixture with the minimum criterion value is selected as the best distribution from a set of candidate mixtures with different numbers of components. As the criterion, an information criterion such as AIC (Akaike information criterion) [3] or BIC (Bayesian information criterion) [4] is usually used. AIC and BIC are model evaluation measures that penalize the mean training fit by the number of model parameters. Therefore, by choosing the minimum-AIC/BIC model, we can determine the model complexity, such as the number of components. Model comparison approaches are computationally expensive because we must estimate all candidate mixtures. We can avoid this computational cost using Bayesian inference methods that directly minimize the upper bound of a negative marginal log-likelihood and can therefore estimate the number of components through a single mixture estimation. As such Bayesian inference methods, variational Bayesian inference (VB) [1,5], collapsed variational Bayesian inference (CVB) [6], and factorized asymptotic Bayesian inference (FAB) [7] have been proposed. The first, VB, entails the shortcoming that its upper bound is loose because the latent variables and the model parameters are assumed to be independent. In the case of CVB and FAB, the independence assumption is not required; therefore, the upper bounds of CVB and FAB are tighter than that of VB.
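As a concrete illustration of the model comparison approach described above (an illustrative sketch, not the paper's own code), the following snippet fits every candidate number of Gaussian mixture components with EM and keeps the minimum-BIC model; note that every candidate must be estimated, which is what makes this approach expensive.

```python
# Illustrative sketch: selecting the number of mixture components by fitting
# every candidate with EM and keeping the minimum-BIC model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data drawn from three well-separated Gaussian clusters.
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(100, 2)) for m in (-3.0, 0.0, 3.0)])

candidates = list(range(1, 8))
bics = [GaussianMixture(n_components=k, random_state=0).fit(X).bic(X) for k in candidates]
best_k = candidates[int(np.argmin(bics))]
print("BIC by K:", dict(zip(candidates, np.round(bics, 1))))
print("selected number of components:", best_k)
```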
The variable selection task is an estimation task for a single distribution, which is to optimize the parameters and to select an optimal subset of input variables. Model comparison approaches such as the forward backward algorithm [8,9] have often been used for variable selection. As in the case of model selection, model comparison approaches are computationally expensive. Sparse estimation methods such as Lasso [10], SCAD [11] and least angle regression [12] have often been used for variable selection. The methods can solve the variable selection task because they can derive sparse parameters by minimizing the regularized objective functions. We can avoid the computational cost difficulties posed by model comparison approaches using sparse estimation methods, which directly minimize the regularized objective functions and therefore only require single estimation.
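The sparse estimation route can be illustrated with an L1-regularized (Lasso) fit, which performs variable selection in a single estimation by driving irrelevant coefficients exactly to zero. This is an illustrative sketch on synthetic data, not code from the paper.

```python
# Illustrative sketch: variable selection by a single L1-regularized (Lasso) fit.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.normal(size=(n, d))
true_coef = np.zeros(d)
true_coef[:3] = [2.0, -1.5, 1.0]              # only the first three inputs are relevant
y = X @ true_coef + 0.1 * rng.normal(size=n)

model = Lasso(alpha=0.1).fit(X, y)
print("selected variables:", np.flatnonzero(model.coef_))   # indices with non-zero coefficients
```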
Then, how can the HRD task be solved? Apparently, we can readily construct a solution by simply combining the model selection and variable selection methods described above. However, simple combinations entail technical difficulties as discussed below. It is unrealistic to solve the HRD task using the model comparison approaches because the computational cost becomes too large. In the case of HRD, the number of candidate models becomes much larger than those in the cases of model selection and variable selection. It is also unrealistic to accomplish the task using sparse estimation approaches because it is not trivial how to solve the model selection task using a sparse estimation method. Using VB, we can solve the HRD task. However, VB presents the following two shortcomings. First, it is difficult to apply VB widely because update equations must be derived analytically. Second, it is required that the latent variables and the model parameters must be assumed as independent, although their dependence is fundamentally important for the true distributions. We can also solve the HRD task using the FAB-based method proposed in an earlier report of the literature [13]. The method solves the model selection task through continuous optimization (by variational inference) and the variable selection task by discrete optimization (using a model comparison approach).
Using these inference methods, hierarchical models have been developed and applied to many areas. Such models include Gaussian process mixtures [14], hidden Markov mixtures [15], gamma mixtures [16], and hierarchical multinomial-Dirichlet model [17].
As described in this paper, we propose the sequential information criterion minimization (SICM) algorithm, which is a method for relevance determination in mixture distributions. In addition to being a "consistent" framework for solving the HRD task, SICM has the following properties. First, the objective function of SICM is consistent. Therefore, for example, SICM is NOT a method which solves the model selection task by VB and the variable selection task by Lasso; rather, SICM solves the HRD task as a minimization problem of an information criterion. Second, it is easy to derive the model parameter update equations used in SICM. In fact, SICM estimates the model parameters by iterating L1-regularized sparse estimation. It is therefore not necessary to derive the parameter update equations analytically, as in the VB case. Third, SICM requires no assumption of independence between the latent variables and the model parameters. Fourth, the optimization method used in SICM is consistent. Therefore, for example, SICM is NOT a method which solves the model selection task by variational inference (continuous optimization) and the variable selection task by model comparison (discrete optimization). SICM minimizes the objective function continuously using variational inference and L1-regularized sparse estimation.
The key ideas of SICM are summarized as follows ("Overall flow of the SICM algorithm" presents details). First, we use an information criterion, AIC or BIC, as the objective function: SICM solves the HRD task by directly minimizing an information criterion. When minimizing BIC, SICM corresponds to Bayesian inference methods such as VB and FAB because BIC is an approximate representation of the negative marginal log-likelihood. Second, in SICM, the difficulty of the L0 term included in an information criterion is overcome by a concave continuous approximation.
The resulting optimization problem is solved using a method based on the majorization-minimization (MM) algorithm [18]. The MM algorithm minimizes an objective function approximately by minimizing an upper bound of the objective. Third, independence between the latent variables and the model parameters is not assumed because only the marginal distribution of the latent variables is used (that of the model parameters is not used) when variational inference is used in SICM.
We propose a method for constructing an interpretable hierarchical model by the application of SICM. The model is supervised and is represented as a combination of a decision tree and regression models. When conducting prediction, we assign a regression model to an observation by selection from a set of regression models in accordance with the decision tree. By SICM, the number of regression models used in the hierarchical model and subsets of input variables used in the individual regressions are determined automatically.
We demonstrate the utility of the SICM algorithm through experiments conducted using nine UCI datasets [19]. The results indicate that we can construct an interpretable prediction model with higher prediction accuracy than frequently used interpretable models such as decision tree and logistic regression. This is true because, in binary classification problems, the above-mentioned interpretable hierarchical model based on SICM (1) outperformed decision tree and logistic regression, (2) outperformed VB, and (3) performed comparably to support vector machine (SVM), which is representative of uninterpretable models having high prediction accuracy.
In summary, the main contributions of this study are the following.
1. We propose the SICM algorithm, which solves the HRD problem by continuously minimizing an information criterion: AIC or BIC.
2. As an SICM application, we propose an interpretable hierarchical model represented as a combination of decision tree and regression models.
3. Through binary classification experiments, we demonstrate that SICM is useful for supporting construction of an interpretable model with higher prediction performance than either decision tree or regression because, in the experiments, the earlier described interpretable hierarchical model based on SICM outperformed decision tree and regression and performed comparably to SVM, an uninterpretable model that exhibits high performance.
The remainder of this paper is organized as follows. The next section provides a problem setting of HRD. The following section explains the proposed SICM algorithm, which solves the HRD problem by minimizing information criteria continuously. The next section proposes a method for constructing an interpretable hierarchical model based on the SICM algorithm. The following section explains the experimentally obtained findings. The last section presents concluding remarks.
Problem Setting
This section presents a description of our problem settings. We consider the hierarchical relevance determination task for supervised mixture distributions, which we explain below.
Hierarchical Relevance Determination Problem
Presuming that we are given a dataset of N observations, each of which consists of D numerical input variables x_n and a target variable y_n, we regard supervised mixture distributions to be represented as

p(Y | X, Z, Θ) = ∏_{n=1}^{N} ∏_{k=1}^{K} p_k(y_n | x_n, θ_k)^{z_nk},

where p, p_k, K, Z, and θ_k respectively represent the probability density function (pdf) of the mixture, the pdf of the kth mixture component, the number of components, the latent variables explained below, and the parameters of the kth component. Let us denote the set of latent variables as Z = {z_nk} (n = 1, …, N, k = 1, …, K), where Z is a set of binary variables representing the component assignments. If the nth observation (y_n, x_n) is generated from the kth component distribution p_k, then z_nk = 1, and z_nk = 0 otherwise. The latent variables of one observation are mutually exclusive; therefore ∑_{k=1}^{K} z_nk = 1 holds. We assume that the latent variables follow the distribution

p(Z | π) = ∏_{n=1}^{N} ∏_{k=1}^{K} π_k^{z_nk},

where π = (π_1, …, π_K) denotes the mixing coefficients. As described herein, we consider the task of hierarchical relevance determination (HRD) for mixture distributions. The HRD task is to optimize, simultaneously, the number of components K, the parameters Θ = {θ_k}, the mixing coefficients π, and the latent variables Z, and to select an optimal subset of the input variables used in the individual components. In other words, the HRD task is to determine automatically all the degrees of freedom in the mixture distribution.
As discussed below, we solve the task by minimizing an information criterion. Therefore, when solving the task, we assume that the following two conditions hold. First, {p_k(y_n | x_n, θ_k)}_{k=1}^{K} satisfy the regularity conditions by which their Fisher information matrices are nonsingular around the maximum likelihood estimators. Second, the optimal assignment Z is unique.
Decomposition of the Hierarchical Relevance Determination Problem
We decompose the HRD problem into two: variable selection and model selection.
Variable Selection Problem
Next we consider the HRD problem in which the number of components K is fixed to 1, so the pdf is represented as p(y_n | x_n, θ). We define variable selection as the relevance determination problem in this case. The variable selection task is to optimize the parameters θ and to select an optimal subset of the input variables simultaneously.
Model Selection Problem
Next, we consider the HRD problem for which all the input variables are used in the individual components. We define model selection as the relevance determination problem in this case. The model selection task is to optimize, simultaneously, the number of components K, the parameters Θ, the mixing coefficients π, and the latent variables Z.
Proposed Method: SICM Algorithm
In this section, we propose the sequential information criterion minimization (SICM) algorithm, which solves the HRD task by continuously minimizing an information criterion of a supervised mixture distribution.
We derive the SICM algorithm for HRD by the following procedure. First, we propose an algorithm for solving the variable selection task by minimizing an information criterion continuously. Second, we propose an algorithm for solving the model selection task by continuously minimizing an information criterion. Third, we derive the SICM algorithm for solving the HRD problem by combining these two algorithms.
Overall Flow of the SICM Algorithm
We briefly review the SICM algorithm. We denote the parameters included in the information criterion generically: in the case of variable selection they are the component parameters θ, and in the case of model selection and HRD they are (Z, Θ).
The overall flow of the SICM algorithm is summarized as presented below.
1. We use an objective function E that satisfies min E = IC*, where IC* represents the minimum of the information criterion. We then formulate the minimization problem of the information criterion as a minimization problem of E.
2. We approximate the L0 term of the parameters in the objective function by a function of the L1 norm, which numerically stabilizes the problem.
3. We introduce auxiliary parameters (denoted with subscript 0) and derive an upper bound F of the objective function.
4. By iteratively minimizing F, we estimate the parameters, where s denotes the update step.

The minimization method by which an upper bound of an objective function is iteratively minimized is called the majorization-minimization (MM) algorithm [18].
Objective Function
Next, we present a definition of the objective function of SICM. Let us write the parameters as a pair (h, v), where h and v respectively represent the subset of the parameters used (selected) in a model and the values of the selected parameters. In the case of regression models, h represents the subset of input variables that have non-zero regression coefficients, and v represents the values of those non-zero coefficients. An information criterion and its minimum are expressed in terms of the log-likelihood and a model complexity f, which is the L0 term included in the information criterion; v_ML(h) represents the maximum likelihood estimator obtained with the subset fixed to h. The model complexity f depends on h and is independent of v. From Eqs. (10) to (14), the minimum of the information criterion can be written accordingly. As described in this paper, we solve the HRD problem by minimizing the information criterion of the mixture distribution. For this purpose, we use E in Eq. (17) as the objective function to be minimized and consider the corresponding minimization problem. By solving the minimization problem in Eq. (16), we can minimize the information criterion for the following reasons. The objective function E is derived by replacing v_ML in the information criterion with v. Therefore, E is not equal to the information criterion itself. However, the minimization problem of E coincides with that of the information criterion because the minimum of E is equal to the minimum information criterion value IC*, as shown in Eq. (15).
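For concreteness, the two criteria referred to throughout can be written as simple functions of a log-likelihood and a parameter count. This is a generic sketch of the textbook definitions, not the paper's exact notation.

```python
# Textbook definitions of AIC and BIC (generic sketch, not the paper's notation).
import numpy as np

def aic(log_lik: float, n_params: int) -> float:
    # AIC = -2 log L + 2 * (number of parameters)
    return -2.0 * log_lik + 2.0 * n_params

def bic(log_lik: float, n_params: int, n_obs: int) -> float:
    # BIC = -2 log L + (number of parameters) * log(number of observations)
    return -2.0 * log_lik + n_params * np.log(n_obs)

print(aic(-120.0, 5))                   # 250.0
print(round(bic(-120.0, 5, 200), 1))    # 266.5
```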
SICM for Variable Selection
In this section, we propose a method for solving the variable selection problem by minimizing an information criterion of a single component distribution. We use AIC or BIC as the information criterion to be minimized, and we set c_IC = c_AIC (c_IC = c_BIC) when we use AIC (BIC).
It is difficult to treat E(θ) numerically because ||θ||_0 is not always continuous with respect to θ. Therefore we use the approximation

||θ||_0 ≈ ∑_i |θ_i| / (|θ_i| + ε),

where θ_i and ε respectively represent the ith component of θ and a user-defined small positive constant. The approximation in Eq. (23) is justified because ||s||_0 and |s|/(|s| + ε) (s is a scalar) share the following properties: (1) they become 0 when s = 0; (2) they rapidly approach 1 when |s| increases; (3) they are convex upward functions with respect to |s|; and (4) they become nearly equal when ε approaches 0.
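A quick numerical check of this approximation (illustrative only):

```python
# Continuous approximation |t| / (|t| + eps) of the L0 indicator, evaluated
# on a sparse parameter vector.
import numpy as np

eps = 1e-3
theta = np.array([0.0, 0.8, -0.002, 3.0, 0.0])

l0_exact = np.count_nonzero(theta)                           # = 3
l0_approx = np.sum(np.abs(theta) / (np.abs(theta) + eps))    # entries comparable to eps are only partially counted
print(l0_exact, round(float(l0_approx), 3))                  # approaches the exact count as eps -> 0
```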
We introduce new parameters θ_0, which have the same dimension as θ.
We propose an algorithm for minimizing F, i.e., for solving the variable selection task. The algorithm sequentially minimizes F. It therefore approximately minimizes the information criterion. Therefore, we designate it as the sequential information criterion minimization (SICM) algorithm.
We summarize the SICM algorithm for variable selection as Algorithm 1. The algorithm consists of the following two steps. First, F is minimized with respect to θ. As shown in Eq. (26), the problem in Eq. (27) is an L1-regularized maximum likelihood estimation problem with respect to θ. The L1 regularization term has weights proportional to (|θ_0i| + ε)^(-2) attached to each |θ_i|, like those of the adaptive lasso [20]. Importantly, and differently from the adaptive lasso, the weights are determined automatically based on the information criterion. Second, F is minimized with respect to θ_0. The solution of this problem is θ_0 = θ, because the equality in Eq. (24) holds if θ = θ_0 holds, as discussed above.
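The following sketch is a simplified re-implementation of the idea behind this kind of algorithm, not the authors' code: an MM loop that repeatedly solves a weighted L1-regularized least-squares problem, with per-coefficient weights recomputed from the previous estimate. The regression setting, the weight formula eps/(|w_i| + eps)^2, and all constants are illustrative assumptions; the weighted Lasso is emulated by rescaling columns.

```python
# Iteratively reweighted L1 regression approximating an L0 penalty (illustrative
# sketch in the spirit of the MM idea; not the paper's exact algorithm).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, d, eps, alpha = 300, 15, 1e-2, 0.05
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[[0, 4, 9]] = [1.5, -2.0, 0.8]
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.ones(d)                                    # previous estimate (plays the role of theta_0)
for _ in range(10):
    pen = eps / (np.abs(w) + eps) ** 2            # per-coefficient L1 weights from the MM bound
    scale = 1.0 / pen                             # column rescaling that realizes the weighted L1 penalty
    fit = Lasso(alpha=alpha, max_iter=50000).fit(X * scale, y)
    w = fit.coef_ * scale                         # map back to the original parameterization
print("selected indices:", np.flatnonzero(np.abs(w) > 1e-8))
```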
SICM for Model Selection
In this section, we propose a method for solving the model selection problem. We consider an information criterion minimization problem in which D_k represents the degrees of freedom of the kth mixture component p_k(y|x, θ_k); in the case of the model selection task, D_k is a constant. We set g_IC = g_AIC (g_IC = g_BIC) when we minimize AIC (BIC). We derive an upper bound of the objective function E(Z, Θ), as in the case of variable selection. We denote the minimum of E(Z, Θ) as E(Z*, Θ*). A minimum is always less than or equal to an expectation; consequently, a corresponding inequality holds, in which ⟨·⟩_p represents the expectation with respect to p. By applying Jensen's inequality (see Appendix A) to the −log term of Eq. (38), we obtain the upper bound in inequality (39),
where q(Z) represents a pdf of Z.
As in the variable selection case, we use a continuous approximation of the L0 term, with a user-defined positive constant; the function g(N_k) appearing in this approximation is convex upward in N_k. Consequently, by introducing a pdf q̃(Z), the upper bound in inequality (41) is derived. By combining inequalities (39) and (41), an upper bound of the objective function E is derived, where we set g = g_AIC and g′ = g′_AIC (g = g_BIC and g′ = g′_BIC) when we minimize AIC (BIC).
Using the upper bound in Eq. (45), we define the model selection problem as the minimization problem

(53) min_{Θ, π, q_Z, q̃_Z} F(Θ, π, q_Z, q̃_Z).

We propose an algorithm for solving the problem in Eq. (53) and designate it the SICM algorithm, because it minimizes the information criterion sequentially and approximately, as in the variable selection case.
We summarize the SICM algorithm for model selection as Algorithm 2. The algorithm consists of the following four estimation steps. First, F is minimized with respect to the component parameters Θ by solving a maximum likelihood estimation problem. Second, F is minimized with respect to the mixing coefficients π, also by maximum likelihood estimation. Third, F is minimized with respect to q(Z); we assume that q(Z) factorizes over observations, and then, using the variational inference method explained in Appendix B, q(Z) is optimized.
Fourth, F is minimized with respect to q̃(Z). The solution of this problem is q̃ = q. This is true because (1) q̃_Z appears in F(Θ, π, q_Z, q̃_Z) in the form ⟨G_k(Z, q̃)⟩_q; (2) G_k(Z, q̃) is represented as Eq. (46); and (3) inequality (41) holds for g(N_k), which is a convex upward function with respect to N_k.
SICM for Hierarchical Relevance Determination
In this section, we propose the SICM algorithm for solving the HRD problem. We construct the algorithm based on the SICM algorithms described above, i.e., those for variable selection and model selection, which are summarized as Algorithms 1 and 2.
Let us start from the SICM for model selection. When we consider the HRD task, it is necessary to treat ||θ_k||_0 as variables, whereas they are constants in the model selection case. Therefore, when considering the HRD problem, it is necessary to alter the SICM for model selection accordingly, replacing the constant complexities D_k with ||θ_k||_0. By these changes, the upper bound to be minimized for solving the HRD problem is derived.
Under the alterations, the individual steps in Algorithm 2 are changed as follows. First, the estimation step of q is invariant except for the expression of g′(N_k): it is necessary to replace D_k included in g′(N_k) with ||θ_k||_0. Second, the estimation step of q̃ is invariant because (1) the optimal q̃ is expressed as q̃ = q, as discussed above, and (2) this result is independent of the expression of D_k (i.e., independent of the alterations described above). Third, the estimation step of π is invariant because the estimation of π depends only on {⟨z_nk⟩_q} and is independent of D_k. Fourth, the estimation step of θ_k changes because ∑_{k=1}^{K} G_k(Z, q̃, θ_k) becomes dependent on θ_k. Because of this dependence, the estimation step of θ_k in the HRD case is expressed through a component-wise objective F_k, and θ_k is estimated by minimizing F_k, where we set h = h_AIC (h = h_BIC) when we minimize AIC (BIC). If we make the alterations in Eqs. (81) and (82), then the upper bound F in Eq. (26) in the variable selection case coincides with F_k in Eq. (77) in the HRD case, except for their constant terms. Therefore, we can estimate θ_k using the SICM for variable selection (Algorithm 1) with the alterations in Eqs. (81) and (82). As a result, the problem of estimating θ_k becomes a weighted maximum likelihood estimation with L1 regularization. The minimization of F_k can be interpreted as a minimization of the information criterion in which the pdf is p_k(y|x, θ_k) and the number of observations is N_k, because (1) r_nk log p_k(y_n | x_n, θ_k) is a weighted log-likelihood of N_k observations, since 0 ≤ r_nk ≤ 1 and ∑_n r_nk = N_k; and (2) the resulting penalty constant is approximately equal to c_IC when N_k is much larger than the small constant used in the approximation, which we can choose freely and need not estimate. Summarizing the points presented above, the SICM algorithm for solving the HRD problem can be constructed in the following way. First, we replace Step 5 of Algorithm 2 (SICM for model selection) with Algorithm 1 (SICM for variable selection). Second, we make the alterations in Eqs. (81) and (82) to Algorithm 1. As an example, Appendix D summarizes the estimation step of θ_k, which corresponds to Step 5 in Algorithm 2 (i.e., to Algorithm 1 with the alterations), for the HRD problem of a logistic regression mixture. In Algorithm 3, we summarize the overall flow of the SICM algorithm for solving the HRD problem; its objective function F is derived from the equations above.
Properties of SICM
In this section, we describe some properties of the SICM algorithm for HRD.
Sparsity
The SICM algorithm theoretically derives a sparse solution, as discussed below. SICM automatically determines the number of mixture components. In fact, Eq. (65) in Algorithm 2 plays a fundamentally important role in this determination. By the effect of the term exp(−g′(N_k)), a mixture component with few observations is erased. Thereby, the number of components is optimized, because exp(−g′(N_k)) becomes close to 0 if N_k, the number of observations of the kth component, becomes small. Consequently, the corresponding responsibilities r_nk rapidly converge to 0 as N_k becomes smaller. The parameters of the individual components are also made sparse by SICM. As discussed in "SICM for Hierarchical Relevance Determination", SICM estimates the parameters of the individual components by solving L1-regularized maximum likelihood estimation problems. Therefore, SICM derives sparse parameters of the individual components through the regularization effect of the L1 penalty, as in the case of sparse estimation methods such as Lasso.
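To make the pruning mechanism tangible, the sketch below assumes a BIC-style complexity term g(N_k) = (D_k / 2) log N_k, so that g′(N_k) = D_k / (2 N_k), and evaluates the shrinkage factor exp(−g′(N_k)) for components of different sizes. The exact form of g in the paper may differ; this is an assumption for illustration.

```python
# Shrinkage factor exp(-g'(N_k)) under the assumed BIC-style g(N) = (D_k / 2) * log N.
import numpy as np

D_k = 10                                    # parameters per component (illustrative)
N_k = np.array([2.0, 5.0, 50.0, 500.0])     # (expected) observation counts per component
shrink = np.exp(-D_k / (2.0 * N_k))
print(np.round(shrink, 3))                  # [0.082 0.368 0.905 0.99]: small components shrink hardest
```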
Monotonicity
The SICM algorithm for HRD monotonically decreases the objective function, which corresponds to the upper bound of the information criterion to be minimized. We present a sketch of the proof below.
SICM for model selection monotonically decreases F in Eq. (45), the upper bound of the information criterion to be minimized. A sketch of the proof proceeds through a chain of inequalities, where s represents an update step. Inequality (84) arises from Eqs. (90) and (92) in Appendix B and from Step 3 of Algorithm 2, which estimates the optimal q variationally by fixing q̃, π, and Θ. Inequality (85) arises from Step 4 of Algorithm 2, which estimates the optimal q̃ [as described immediately after Eq. (62)] by fixing q, π, and Θ. Inequality (86) arises from Steps 5 and 6, which conduct maximum likelihood estimations of Θ and π by fixing q and q̃.
In the case of SICM for HRD, which is a combination of SICM for model selection and SICM for variable selection, the maximum likelihood estimation of Θ is replaced with L1-penalized sparse estimation of Θ, such as the sparse logistic regression summarized in Appendix D. The sparse estimation decreases the upper bound expressed as the L1-regularized objective function. Therefore, inequality (86) also holds in the HRD case.
Properties Related to Information Criterion Minimization
The SICM algorithm can avoid the singularity problem of mixture modeling described hereinafter. Both AIC and BIC are derived from second-order expansions of their original objective functions under the regularity condition. It is therefore not justifiable to apply these information criteria to singular models. However, a mixture model represented as p(y|x, Θ) = ∑_k π_k p_k(y|x, θ_k) is singular. SICM avoids this difficulty for the following reasons. When constructing SICM, we started from the mixture represented as Eq. (2) and considered the objective function corresponding to its information criterion in Eq. (19). p(Y|X, Z, Θ) in Eq. (2) is regular because it is assumed that the conditions described in "Hierarchical Relevance Determination Problem" ("each component p_k(y|x, θ_k) is regular" and "the mixture assignment of each observation is unique") hold. Consequently, use of the information criterion corresponding to Eq. (19) is justified.
SICM with BIC minimization has asymptotic consistency: because p(Y|X, Z, Θ) satisfies the regularity condition as described above, its Laplace approximation (the BIC used in SICM) has asymptotic consistency. SICM with AIC does not have asymptotic consistency, as AIC itself does not. Nevertheless, AIC is "consistent" in the sense that the estimated distribution asymptotically approaches the true distribution (rather than the true "model", as in the case of BIC).
One benefit of SICM is that it can minimize either AIC or BIC, whereas Bayesian methods such as variational Bayesian inference correspond only to BIC minimization (more precisely, minimization of the negative marginal log-likelihood). We expected, and show in "Experiments", that SICM with AIC outperforms SICM with BIC in some cases.
Applicability to Unsupervised Learning
As described in this report, we have considered the task of relevance determination in supervised mixture distributions. However, the SICM algorithm is applicable to unsupervised mixtures by making the following alteration: p_k(y_n | x_n, θ_k) → p_k(x_n | θ_k). Unsupervised mixtures include mixtures of normal distributions. For instance, by applying SICM to a normal mixture, we can estimate a mixture of sparse Gaussian graphical models.
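As an illustration of the unsupervised direction mentioned here, the sketch below estimates a single sparse Gaussian graphical model with the graphical lasso; a SICM-style mixture would combine several such components. The data are synthetic and the snippet is illustrative only, not part of the paper.

```python
# Sparse Gaussian graphical model via the graphical lasso (illustrative sketch).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 6))
X[:, 1] += 0.8 * X[:, 0]                    # induce one dependence so the graph has an edge
gl = GraphicalLasso(alpha=0.2).fit(X)
print(np.round(gl.precision_, 2))           # zeros in the precision matrix = absent edges
```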
Application to Interpretable Hierarchical Modeling
In this section, as an application of SICM, we propose a method for constructing an interpretable hierarchical model, which is constructed as a combination of interpretable models: a decision tree and regression models.
The overall flow of the model construction is summarized as presented below.
1. We estimate p(Y|X, Z, Θ) using the SICM algorithm. As the individual mixture components, we use interpretable regression models: linear regression models or logistic regression models.
2. We construct a decision tree for model assignment using the training dataset, where {r_nk}_{k=1}^{K} (r_nk = ⟨z_nk⟩_q) is the set of target variables and x_n is the set of input variables. The decision tree for predicting r_·k is estimated by solving a K-class classification problem.
3. Using the results of the two steps described above, we conduct prediction as p(y | x) = ∑_{k=1}^{K} r_k(x) p_k(y | x, θ_k), where r_k(x) represents the predicted value of r_·k estimated from a test observation x using the decision tree; r_k(x) can be regarded as p(k|x).

A key point in constructing the model is the introduction of Step 2, the decision tree estimation. To conduct prediction, it is necessary to estimate the predicted value of r_·k from a test observation. However, the prediction mechanism of r_·k is not included in the SICM algorithm. Therefore, Step 2 is introduced to predict r_·k. Consequently, we adopt the two-step training process described above, which consists of the SICM-based mixture estimation followed by the decision-tree estimation of the assignment rules.
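A hedged sketch of this two-step construction follows (a minimal re-implementation of the idea, not the authors' code); the mixture-estimation step that produces the responsibilities is assumed to have been run already, and the model choices and hyperparameters are illustrative.

```python
# Two-step interpretable hierarchical model: per-component logistic regressions
# plus a decision tree that routes each observation to a component.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

def fit_hierarchical(X, y, resp, max_depth=3):
    """X: (N, D) inputs, y: (N,) binary targets in {0, 1}, resp: (N, K) responsibilities r_nk."""
    experts = []
    for k in range(resp.shape[1]):
        # L1-penalized logistic regression keeps each component sparse.
        clf = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000)
        clf.fit(X, y, sample_weight=resp[:, k])
        experts.append(clf)
    # Decision tree that routes observations to components (K-class problem).
    router = DecisionTreeClassifier(max_depth=max_depth).fit(X, resp.argmax(axis=1))
    return router, experts

def predict_proba(router, experts, X):
    gate = np.zeros((X.shape[0], len(experts)))
    gate[:, router.classes_] = router.predict_proba(X)   # r_k(x), aligned to expert indices
    comp = np.column_stack([e.predict_proba(X)[:, 1] for e in experts])
    return (gate * comp).sum(axis=1)                     # sum_k r_k(x) * p_k(y = 1 | x)
```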
Experimental Setting
Through the following experiments, we demonstrate the utility of the SICM algorithm using the hierarchical prediction model proposed in "Application to Interpretable Hierarchical Modeling".
We considered binary classification problems in the experiments. Therefore, we used logistic regression as the mixture component of the hierarchical model. Appendix D summarizes the SICM algorithm for logistic regression mixtures. The maximum number of components, the constant ε for approximating the L0 terms, and the threshold for the convergence condition were set to 50, 1.0, and 10⁻⁶, respectively. The maximum number of components K is equal to the initial number of components in the SICM algorithm. The number of components decreases as the SICM steps progress, but never increases; therefore K must be set to a sufficiently large number, and we set K to 50. The approximation constant was not estimated from the training data but was fixed in advance.
We used nine datasets from the UCI repository [19]. We selected these datasets from the UCI repository because they are datasets for binary classification problems and consist of a binary target variable and numerical input variables. Some properties of the datasets are summarized in Table 1.
For training the models, we transformed categorical input variables into their one-hot encodings and standardized all input variables.
We observed the binary classification accuracies of the proposed models: SICM AIC and SICM BIC . Hereinafter, we denote the proposed hierarchical model estimated by minimizing AIC (BIC) as SICM AIC ( SICM BIC ). As a measure of the classification accuracy, we used the mean Area Under the Curve (AUC) of ROC curves estimated using five-fold cross validation.
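The evaluation protocol itself is straightforward to reproduce; the sketch below computes the mean five-fold ROC AUC for a stand-in classifier on a stand-in dataset (neither the dataset choice nor the model here is taken from the paper).

```python
# Mean ROC AUC over five-fold cross-validation (illustrative evaluation sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("fold AUCs:", aucs.round(3), "mean:", aucs.mean().round(3))
```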
In the experiments, we aimed to show the effectiveness of the SICM algorithm by demonstrating that (a) we could construct accurate and interpretable models by solving the HRD problem, and (b) we could obtain better results using the SICM algorithm than using an existing inference method. In other words, (a) means that we could improve the prediction accuracy of interpretable models by solving the HRD problem. To show (a), we compared SICMs (SICM_AIC and SICM_BIC, the hierarchical models proposed in "Application to Interpretable Hierarchical Modeling", estimated using the SICM algorithm) to decision tree and logistic regression, which are frequently used as interpretable models. For this comparison, we selected as our model the hierarchical prediction model proposed in "Application to Interpretable Hierarchical Modeling", which consists of a decision tree and logistic regressions; that is, to enable comparison with tree and logistic regression, we constructed the hierarchical model from a tree and logistic regressions. For reference, we compared SICMs to support vector machine (SVM), which is a highly accurate uninterpretable model. We note that SICM is proposed not for constructing highly accurate uninterpretable models but for improving the prediction accuracy of interpretable models. Therefore, in the experiments, we did NOT aim to show that SICMs outperform SVM, because SVM is uninterpretable; SVM was selected to observe how closely SICMs perform to an uninterpretable model. To show (b), that SICMs derive better results than an existing inference method, we compared SICMs to VB. Here, VB denotes the hierarchical prediction model that has the same structure as SICMs but whose parameter inference method is replaced from the SICM algorithm by variational Bayesian inference (namely, SICMs and VB are the same model estimated by different methods). We selected VB for the following reasons. First, as described in "Introduction", our main aim and contribution is to construct a new inference method for solving the HRD task. Therefore, we compared the SICM algorithm to variational Bayesian inference by comparing the same models estimated by different inference methods; variational Bayesian inference is one of the most popular methods for relevance determination and is still a state-of-the-art method in this area. Second, in this paper, we do NOT aim to propose a new hierarchical model, so selecting an optimal model family is out of scope. Therefore, we selected VB having the same structure as SICMs and did not select other recent hierarchical models belonging to different model families.
Let us add some details on how the compared models were estimated. Logistic regression models were constructed by maximum likelihood estimation. Gaussian kernels were used in SVM, and the decay parameter of the Gaussian kernel was optimized by two-fold cross validation. As mentioned above, we used variational Bayesian inference for estimating VB.
The computational environment was as follows: Intel Xeon E5-1650 3.50 GHz CPU and 64 GB memory on a Linux Ubuntu platform. We conducted the experiments using the Python language. The implementations of SVM, logistic regression, and decision tree (including the tree estimation part in VB and SICMs) were taken from scikit-learn [21], a public machine learning library in Python. We implemented the HRD part of VB (variational Bayesian inference of a logistic regression mixture) based on Refs. [1,22]. Table 2 presents the experimentally obtained results: the classification accuracies of the six models. The results indicate the SICM algorithm's effectiveness for the following reasons.
Prediction Performance
By combining decision tree and logistic regression, we can construct a model with higher accuracy than either model alone; that is, we can construct an interpretable and highly accurate model using the SICM algorithm. This statement is supported by the following results. First, the combination (hierarchical) models, namely SICM_AIC, SICM_BIC, and VB, outperformed decision tree and logistic regression. Second, the combination models performed comparably to SVM. The SICM algorithm that minimizes BIC is theoretically better than variational Bayesian inference because the independence between the latent variables and the parameters of the individual components is not assumed in SICM, whereas variational Bayesian inference requires the independence assumption. This statement is supported by the result that SICM_BIC outperformed VB.
One benefit of SICM is that SICM can minimize either AIC or BIC, whereas Bayesian methods such as variational Bayesian inference only minimize the negative marginal log-likelihood (in fact, BIC is an approximate representation of the negative marginal log-likelihood). This statement is supported by results showing that SICM_AIC performed comparably to SICM_BIC in terms of their win-loss record.

Table 3: Model assignment rules of the proposed hierarchical model. Each row represents a model assignment rule, which corresponds to a leaf of a decision tree and which is expressed as a product set of the listed conditions. For example, for the first rule in the table, we use Model 1 for prediction if an observation simultaneously satisfies the following conditions: (1) bill statement in July ≤ −0.5; (2) payment in August ≤ 4.0; and (3) payment delay in September ≤ 0.5. In the rules, thresholds are expressed in units of the standard deviations of the respective input variables. The IDs of the assigned regression models (Model 1 to Model 6) correspond to those in Table 4.

Table 4: Regression coefficients of the individual logistic regression models. Rows and columns respectively represent an input variable and a logistic regression model. One logistic regression model in this table is used for prediction in accordance with the rules in Table 3. A coefficient is shown in red/white/blue if its value is positive/exactly zero/negative. If a coefficient is positive/negative, the default risk increases when the corresponding input variable increases/decreases.
Interpretability
Using the proposed method, we can construct an interpretable model because the proposed hierarchical model is represented as a combination of a decision tree and regression models, and has the sparse structure. In this section, we qualitatively describe the property of interpretability through an example of the proposed model. 2 As a related example, we used the hierarchical model trained on "Default of Credit Card Clients" dataset. An observation in the dataset represents a record of a credit card user, and consists of a binary target variable and 23 input variables. The target variable represents default on a payment in October. The input variables represent (1) amount of the given credit; (2) payment status recorded in April-September such as payment delay (month), amount of bill statement, and amount of payment; and (3) user profile such as age, gender, education, and marital status. We transformed the categorical input variables into their one-hot encodings. Tables 3 and 4 summarize the proposed hierarchical model trained on the "default of credit card clients" dataset based on the SICM algorithm. Table 3 represents the model assignment rules included in the decision tree. Table 4 shows regression coefficients of the individual logistic regression models. These results indicate that the proposed model has the following properties.
The SICM algorithm derives sparse solutions, as evidenced by the following observations. First, as shown in Table 4, many coefficients were estimated as exactly zero. Second, the number of regression models diminished from the initial K = 50 to six.
The proposed model assigns a regression model to an observation based on its properties. For example, in the case of Table 3, (1) Model 1 corresponded to the "standard" clients (observations), who had low payment delay in September and low bill statement in July; (2) Model 2 corresponded to the clients who had larger bill statements in July than the standard clients had; (3) Model 3 corresponded to the clients who had more months of payment delay in September than the standard clients; and (4) Models 4, 5, and 6 corresponded to clients who had exceptionally large payments in August. Each regression model occupied an imbalanced number of observations. Three models (Models 4, 5 and 6) were used for the "exceptional" clients.
For each input variable, the signs of the regression coefficients may differ depending on the regression model. For example, in the case of Model 2, the regression coefficient of "age" was negative, which indicates that the default risk decreases as age increases. However, for Models 4 and 6, the coefficients were positive, which indicates that the default risk increases when age increases. Age inversely contributed to the default risk depending on whether the payment in August was exceptionally large ( > 4.0 ), or not. Such local sign inversion is regarded as one reason why the proposed hierarchical model is highly accurate.
Sign inversion does not always occur. For example, "the amount of the given credit" invariably had non-positive coefficients, which indicates that the default risk always decreases as the credit rating becomes better. This result is consistent with our intuition.
Summary
The SICM algorithm was proposed for solving the hierarchical relevance determination problem. The SICM algorithm minimizes an information criterion continuously and therefore enables us to determine the degrees of freedom in a mixture distribution automatically. A method for constructing an interpretable hierarchical model based on the SICM algorithm was also proposed. Experiment results obtained using the interpretable hierarchical model have demonstrated the utility of the SICM algorithm for the following reasons. First, the hierarchical model outperformed frequently used interpretable models (tree and logistic regression) in terms of prediction accuracy. Second, it was shown qualitatively that the hierarchical model derived interpretable results consistent with our intuition.
Future work includes the following two issues. One is a theoretical expansion of the interpretable hierarchical model. When constructing the model, the degrees of freedom in the decision tree for model assignment are not automatically determined. Therefore introduction of relevance determination mechanism to the decision tree estimation is a subject of future work. The other is an application of the proposed information criterion minimization method to those other than the HRD problem. Relevance determination based on the continuous minimization of the information criterion is widely applicable. A promising application is relevance determination in unsupervised distributions such as sparse estimation of Gaussian graphical model mixtures.
Equation (102) is derived because W_ii > 0 holds. From Eq. (102), conditions on the relation between u_i and a_i are derived (Eqs. (103)-(105)). By combining Eqs. (101) and (103)-(105), the solution in Eq. (106) is obtained. Equation (106) is an update equation for the ith component. By repeating the update step until convergence, the solution of the L1-regularized quadratic programming problem can be derived.
By applying the upper bound of ||w_k||_0 in Eq. (24), an upper bound of the objective function F_k is obtained.
When deriving the upper bound, we introduced the additional parameters w_{k0} and ξ_k. As discussed in "Proposed Method: SICM Algorithm", F_k can be minimized approximately by minimizing its upper bound F_k(w_k, w_{k0}, ξ_k).
We erase ξ_k from F_k(w_k, w_{k0}, ξ_k) in the following manner. When w_k and ξ_k are given, the optimal w_{k0}, which minimizes F_k(w_k, w_{k0}, ξ_k), is expressed as (127) w_{k0} = w_k. In inequality (117), the equality holds if z = ξ. Presuming z = z_n = y_n w_k^⊤ x_n and ξ = ξ_{kn} in the inequality, the optimal ξ_k, which minimizes F_k when w_k is given, is obtained as Eq. (128). From Eqs. (127) and (128), we select ξ_{kn} as in Eq. (129) and thereby erase ξ_k from F_k:
(129) ξ_{kn} = y_n w_{k0}^⊤ x_n. As discussed in "Proposed Method: SICM Algorithm", we can approximately minimize F_k and estimate the parameters of the kth mixture component by minimizing the upper bound F_k in Eq. (131) with respect to w_k and w_{k0}. We can minimize F_k with respect to w_k using the subgradient method explained in Appendix C. From the discussion presented above, the parameter estimation algorithm of the kth logistic regression component is summarized as Algorithm 4. The SICM algorithm for solving the HRD problem in the case of a logistic regression mixture is constructed by replacing Step 5 of Algorithm 2 with Algorithm 4 (k = 1, 2, …, K).
|
2020-07-09T09:08:52.800Z
|
2020-07-01T00:00:00.000
|
{
"year": 2020,
"sha1": "11fa8c5cb5dd7e710e76595b02c89c876acb6519",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s42979-020-00239-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "8a1617682bae349557495ac7c22ae0e2059833d4",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
108463405
|
pes2o/s2orc
|
v3-fos-license
|
Use of Echocardiography to Optimize Left Ventricular Assist Devices
The burgeoning ranks of patients with heart failure, the limited number of organs available for heart transplant, and technological improvements have made ventricular assist devices (VADs) important therapeutic options for patients with acute cardiac decompensation and chronic end-stage heart failure. The increased use of VADs has been paralleled by an increased use of transthoracic and transesophageal echocardiography in patients who are candidates for long-term mechanical circulatory support. In particular, echocardiography is becoming an important part of tailoring VADs to specific patient needs. This review discusses current echocardiographic assessments used in the optimization of VAD settings and suggests novel methods that may become part of standard echocardiographic VAD optimization in the future.
clear superiority over the HeartMate I led to the recent US Food and Drug Administration (FDA) approval of the HeartMate II for DT of end-stage HF.
Since the number of patients implanted with LVADs for DT will inevitably increase, the need for incisive clinical tools, including echocardiography, to guide patient selection and optimization of cardiovascular function post-LVAD implantation is becoming increasingly important.
Candidacy for DT with an LVAD includes:
• refractory NYHA class IV HF symptoms while on maximal medical therapy, including need for continuous inotropic support;
• left ventricular ejection fraction (LVEF) <25-30%; and
• peak maximal oxygen consumption <12 ml/kg/minute.
Patients usually have an estimated one-year mortality of >50% and/or a life expectancy of under two years. 10,11 Additional hemodynamic cut-offs considered to support the diagnosis of refractory HF include cardiac index <2 l/minute/m², pulmonary capillary wedge pressure >20 mmHg, and systolic blood pressure <80 mmHg.
Echocardiography and Ventricular Assist Devices
Echocardiography plays an important role in the management of patients being considered for or currently supported on an LVAD. Before implantation, echocardiography establishes LVEF, an important criterion for VAD candidacy, and assists in the risk stratification of severe right ventricular (RV) failure requiring RV mechanical support. 12,13 An extremely important consideration is that one-time assessment of RV function in patients with chronic congestive heart failure can be very misleading. There is a tight coupling of RV systolic performance to the pulmonic afterload, and considerable plasticity of the RV has been observed in response to hemodynamic changes. Indeed, re-imaging after a successful 'tune-up' or optimization of a severely congested HF patient can show a marked improvement of RV systolic function, proof that the dynamic range of the load-sensitive RV can be very significant in individual patients.
Echocardiography also identifies important clinical factors that complicate VAD placement, including intracardiac thrombi, aortic regurgitation, severe tricuspid regurgitation, atrial and ventricular septal defects, 14 and ascending aortic dissection or severe atherosclerosis.
Intraoperative transesophageal echocardiography guides the placement of inflow LVAD cannulae in the LV apex or left atrium and of the outflow cannula in the ascending aorta. After implantation, echocardiography aids in the diagnosis of VAD dysfunction, including identification of thrombosis, obstruction or kinking of the inflow and outflow cannulae, 15,16 RV dysfunction, pericardial tamponade, VAD-associated endocarditis, 17 aortic insufficiency, 18,19 and VAD regurgitation. 20 In general, optimizing VAD flows involves finding a (sometimes delicate) balance between ventricular volume overload and underfilling.
Optimization of Left Ventricular Assist Device Settings
Increasing VAD rpm will decompress the ventricle and, in some cases, lead to reverse remodeling and ventricular recovery. On the other hand, excessive unloading of a ventricle can cause myocardial atrophy, mask ventricular recovery, and worsen states such as dehydration, sepsis, anemia, and pericardial tamponade with phenomena known as 'suckdown' events. Here, myocardial structures such as trabeculations, papillary muscles, chordal structures, and aneurysmal wall segments can be drawn into the inflow cannula, resulting in transient obstruction.
VAD optimization by echocardiography essentially involves making echocardiographic assessments and measurements during adjustments of pumping rates, output volume, or rotor speed. The simplest echocardiographic assessment of LV filling is neutral alignment of the interventricular septum. In cases of volume overload, the septum will bow into the RV. If the ventricle is underfilled or, in the case of a continuous-flow device, there is excessive unloading with the rpm set too high, the septum will bow into the LV. A neutrally oriented septum suggests the preferable state of mechanical unloading. In cases of insufficient unloading of the ventricle, the rotor speed or pumping rate or volume can be increased until the septum no longer bows into the RV and the LV has decreased in size. If the ventricle is underfilled and/or trabeculations or other ventricular structures impede inflow, the rotor speed or pump rate/volume can be decreased (see Figure 1).
Optimizing VAD filling by assessment of septal neutrality has several pitfalls. First, septal bowing may be caused by factors other than abnormal ventricular filling. Elevated RV pressures from RV outlet obstruction, acute pulmonary embolism, or the many subcategories of pulmonary arteriolar and pulmonary venous hypertension can cause septal flattening. M-mode and 2D imaging can demonstrate the degree and frequency of aortic valve opening (see Figure 2).
Echocardiography also plays a crucial role in assessing ventricular recovery, particularly during LVAD 'pump-off' or 'turn-down' testing. As VAD support is gradually decreased during testing of myocardial function with minimal mechanical support, the LVEDD will increase in response to increased afterload and preload. The degree to which this change is minimal and accompanied by an increase in LVEF after 15 minutes is suggestive of myocardial recovery. In small studies, LVEF >45% and LVEDD <5.5cm at the time the LVAD was turned down to minimal mechanical support predicted ventricular recovery. 23,24 In a separate study, inferolateral basal wall motion recovery portended successful VAD weaning in patients with acute myocarditis. 25 Dobutamine stress testing has been advocated to measure LV reserve in the form of improved LVEF, and thereby to predict recovery. 26 More simply, an increase in degree and frequency of aortic valve opening on full mechanical circulatory support is a sign of improved ventricular function and suggests that VAD weaning may be possible.
Novel Echo Techniques for Ventricular Assist Device Optimization
Predicting ventricular recovery and optimizing VAD settings may involve detecting subtle alterations in ventricular performance in response to changes in VAD settings. Myocardial strain is a measure of change in length relative to initial length. It is relatively less dependent on filling pressures than chamber dimensions, volumes, and ejection fraction, and therefore represents a better assessment of intrinsic myocardial contractility. Changes in strain over time may provide a sensitive marker to predict ventricular recovery, but strain imaging may also allow finetuning of VAD settings to optimize myocardial performance. Myocardial strain can be measured in different tissue segments but also averaged to provide global strain. Strain is commonly measured by two different echocardiographic techniques. Tissue Doppler imaging (TDI) measures tissue velocity (see Figure 3). Tissue velocity can be used to calculate strain rate, which can be integrated over time to yield strain. TDI has a major drawback in that, as a Doppler modality, it is dependent on the angle of orientation of the ultrasound beam relative to the direction of tissue movement. If the Doppler ultrasound beam and vector of tissue movement are not parallel, tissue velocity will be underestimated. A different modality, speckle tracking (ST), mostly overcomes this problem by measuring the movement of ultrasound speckles (artifacts created by inhomogeneous ultrasound backscatter) in the myocardium. Using 2D images, ST can measure distance of speckle movement in any direction within the 2D plane. From this distance, ST algorithms compute velocity, strain, and strain rate. 27 In order to accurately track the speckles, high frame rates are required, but this is not always possible to achieve. Furthermore, speckles move outside the 2D imaging plane. ST algorithms attempt to compensate with mathematical modeling, but through-plane motion remains a significant limitation. 3D ST is available but currently limited by suboptimal frame rates in many patients.
LVADs have differing effects on the RV. They can reduce RV afterload by reducing LV preload and pulmonary pressure, but they can increase RV preload by increasing the trans-systemic circulating blood volume. RVs with marginal reserve may fail in the face of increasing preload with VAD insertion, or may fail during VAD weaning when pulmonary pressures increase. Specifically, the RV may increase in size over time due to increased preload, and if the decrease in pulmonary afterload is not matched by an increase in cardiac output, there will be a decrease in RV stroke work. 28
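One way to see why an unmatched fall in pulmonary afterload reduces RV stroke work is the standard stroke-work approximation below; this formula is not given in the text and is included here only as a supplementary illustration:

$$\mathrm{RVSW} \approx \mathrm{SV} \times (\overline{\mathrm{PAP}} - \mathrm{RAP})$$

If mean pulmonary artery pressure (PAP) falls while stroke volume (SV) does not rise proportionally, the product, and hence RV stroke work, decreases; RAP denotes right atrial pressure.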
Evaluation of the reduction of radiation dose received by pediatric patients in new-generation biplane angiocardiography: Randomized controlled study
Objective We aimed to evaluate the safety and efficacy of radiation dose reduction with a new-generation biplane angiocardiography system in patients undergoing transcatheter isolated patent ductus arteriosus (PDA) closure. Materials and methods Fifty pediatric patients who underwent transcatheter PDA closure were randomly divided into two groups as normal radiation dose and low dose. Patients who required additional procedures other than PDA closure were excluded. PDA closure was performed according to the angiographic measurement of the defect. After the procedure, age, weight, sex, PDA measurements, and radiation measurements such as dose-area product (DAP, Gy.cm2) and air kerma (AK, mGy) were compared between the groups. Results There was no statistically significant difference between the groups in age, sex, weight, PDA diameter, PDA type, device used, and device diameter (p > 0.05). While there was no statistically significant difference between the groups in terms of cine recording, number of recorded images, and fluoroscopy time (p > 0.05), there was a statistically significant difference between the total DAP, cine and fluoroscopy DAP, total AK, frontal and lateral tube AK, and DAP/kg (mGy.m2/kg) measurements (p < 0.05). Conclusion Transcatheter PDA closure with a low radiation dose is as effective as that with a normal radiation dose. The radiation dose received by the patient during the procedure was significantly reduced. With the vision provided by this study, it seems possible to work with a low radiation dose in other groups of patients.
Introduction
The occurrence of cancer as a stochastic effect of radiation exposure is well known. This is even more important for children, who have a long life expectancy. It highlights the importance of the as low as reasonably achievable (ALARA) principle, which should be considered in diagnostic or interventional cardiac catheterization for congenital heart disease [1][2][3][4]. Repeated cardiac catheterizations, chest radiographs, and computed tomography scans are required in children with congenital heart disease, especially in complex diseases. Shortening the duration of cine cardiography during each catheterization procedure alone is not sufficient to reduce the dose.
In addition, it is recommended to acquire fluoroscopic images instead of cine images, to use collimation, not to record unnecessary areas for too long, and to optimize the distance of the detector from the patient [1][2][3][4]. In our study, we used a randomized design to evaluate the efficacy of the procedure and the success of dose reduction in transcatheter closure of the patent ductus arteriosus (PDA), using the ClarityIQ program of the Philips Azurion 7 B20/12® (Philips Medical Systems, Eindhoven, The Netherlands) angiocardiography system.
The deterministic and stochastic effects of radiation were determined by measurements of dose-area product (DAP, Gy.cm2) and air kerma (AK, mGy). Low and normal doses were compared in terms of effectiveness and radiation dose. The deterministic effect is related to the cell damage caused by radiation. It occurs when the radiation dose exceeds a certain threshold. When the threshold is exceeded, the severity of tissue reactions increases with increasing radiation dose. The threshold dose is usually 2 Gy for transient skin redness and 3 Gy for transient hair loss. Air kerma is used to determine the deterministic effects of ionizing radiation. Stochastic effects refer to mutations caused by DNA damage that can occur at low levels of radiation. In the long term, cancer may develop. It is maintained that the stochastic risk, or probability of developing cancer, is directly proportional to the total radiation dose received and that there is no threshold, unlike deterministic risk. Dose-area product is used to estimate the risk of stochastic effects [5][6][7][8].
As a feature of the new-generation angiocardiography system (Philips Azurion 7®), a dose model is created after the procedure [9]. In this model, the thoracic region is represented as a sphere with a diameter of 30 cm arranged around the isocenter. The surface of this sphere is divided into 10 areas, five on the cranial and five on the caudal side, corresponding to the different projections of the X-ray beam [9]. In the dose model and report, in addition to the fluoroscopy time, the number of series, the number of images, the total DAP, the cine DAP, the fluoroscopy-induced DAP, the total cumulative AK of the whole body, the real cumulative peak AK of the hottest spot in the irradiated body region, and the frontal and lateral tube AK are indicated numerically and graphically [9] (Figs. 1 and 2). In addition, a warning is given when the peak AK in the irradiated body region exceeds the threshold (2 Gy).
Research method
A total of 50 children undergoing isolated transcatheter closure of the PDA were randomly divided into two separate groups of 25 to ensure homogeneity of the patient group. Both groups were treated using the same biplane angiocardiography machine, one group with a normal radiation dose and the other group with a low radiation dose using the new-generation ClarityIQ program. Patients who underwent aortic/pulmonary balloon valvuloplasty or aortopulmonary collateral artery occlusion in the same session and patients with complicated additional procedures that required a longer procedure time were excluded from the study. For standardization, the frame rate was kept fixed at the recommended 25 frames per second (fps).
Measurements and parameters monitored
Patient demographics included sex, age, height, and weight. During PDA closure, defect type, length, ampulla diameter, pulmonary tip diameter, pulmonary artery pressure, shunt ratio, device type, and device size were recorded. Angiographic images of each patient and contrast injection were obtained in the standard right anterior oblique (RAO) and lateral positions. After the occlusion procedure, the efficacy of the procedure was evaluated with a control contrast injection in the RAO and lateral positions. Fluoroscopy time, total DAP, cine DAP, fluoroscopy DAP, total AK, separate AK of the frontal and lateral tubes, and peak AK were recorded in the automatic radiation measurement report of the angiography machine. DAP (Gy.cm2) was converted to mGy.m2 and indexed as DAP/kg.
Data analysis
Patients were randomly divided into two groups: a normal standard-dose and a low-dose group. Demographic data are expressed as percentage, mean, median, standard deviation, minimum-maximum value, and interquartile range (IQR). Analyses were performed using the SPSS 22.0 program. Demographic characteristics of the participants are presented as frequency and percentage. Parametric measurements were compared using the t-test for independent groups and nonparametric measures were compared using the one-way analysis of variance. Percentages were analyzed using the chi-square test. A value of P < 0.05 was considered statistically significant.
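For readers who want to reproduce this style of two-group comparison outside SPSS, a minimal sketch using SciPy is shown below; the arrays and variable names are hypothetical, and the specific calls simply mirror the classes of tests named above (an independent-groups t-test for continuous measures and a chi-square test for proportions).

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient DAP/kg values for the two arms (illustrative only).
low_dose = np.array([4.1, 5.0, 3.8, 6.2, 5.5])
normal_dose = np.array([10.9, 12.3, 9.8, 13.1, 11.0])

# Independent-groups t-test for a continuous measurement.
t_stat, p_cont = stats.ttest_ind(low_dose, normal_dose, equal_var=False)

# Chi-square test for a categorical variable, e.g. sex distribution per group
# (rows: group, columns: female/male counts; numbers are made up).
contingency = np.array([[14, 11],
                        [14, 11]])
chi2, p_cat, dof, expected = stats.chi2_contingency(contingency)

print(f"continuous comparison: t = {t_stat:.2f}, p = {p_cont:.4f}")
print(f"categorical comparison: chi2 = {chi2:.2f}, p = {p_cat:.4f}")
```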
Ethics Committee approval
Ethics committee approval was obtained. Written informed consent forms were obtained from the patients' families and patients who were old enough to provide their consent for participation in the study.
Study schedule
This study was conducted between December 2021 and October 2022 with patients who presented to our pediatric cardiology outpatient clinic and underwent PDA closure.
The system used was the Philips Azurion 7 B20/12® model. The radiation protocol was selected as normal or low with the ClarityIQ® system of the device [9]. Special filters in the ionizing radiation tube and the kV and mA values were changed, and normal and low-dose radiation adjustments could be made according to the body surfaces and weights of the patients. With the technological advantages offered by ClarityIQ, 500 parameters can be fine-tuned to reduce noise, sharpen edges, increase filtering, reduce focusing, change the grid, and enable the tube and generator to deliver a shorter pulse current. In addition, the Zero Dose Positioning® system in Philips angiography devices allows the patient's image focus to be changed without the use of fluoroscopy; the Last Image Hold® feature allows the last fluoroscopy images to be recorded; and DoseAware® is a protective measure that allows the patient and staff to receive a lower dose. Radiation measurement parameters vary depending on the technique. Measurements in this study were reported in gray (Gy), along with the dose-area product in Gy.cm2. AK is the amount of kinetic energy released into the air by ionizing radiation, expressed in mGy, where 1 Gy = 1 J/kg. Peak AK is the highest AK value to which any point of a surface exposed to radiation is exposed. Skin dose is the absorbed dose delivered by ionizing radiation to the patient's skin at the point of irradiation. Unlike the reference AK, this value reflects the actual absorption. DAP is the product of the cross-sectional area of the X-ray beam and the AK value averaged over this cross-section, expressed in Gy.cm2. Unlike skin dose and AK, DAP is independent of the distance from the focal point. For comparison with other studies, the unit of measurement was converted to mGy.m2 by multiplying by 100. It was also indexed to weight by dividing by body weight, yielding DAP/kg in mGy.m2/kg.
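The unit handling described above can be made explicit with a small sketch; the function name and example numbers are assumptions for illustration, and the conversion factor of 100 simply follows the paper's description (strictly, 1 Gy.cm2 equals 0.1 mGy.m2, or 100 µGy.m2, so the factor of 100 corresponds dimensionally to µGy.m2).

```python
def dap_per_kg(dap_gy_cm2: float, weight_kg: float) -> float:
    """Index DAP to body weight as described in the text: multiply DAP in
    Gy.cm2 by 100, then divide by patient weight in kg.

    Note: dimensionally, 1 Gy.cm2 = 0.1 mGy.m2 = 100 uGy.m2, so the factor
    of 100 used here (following the text) corresponds to uGy.m2.
    """
    return dap_gy_cm2 * 100.0 / weight_kg

# Hypothetical example: a total DAP of 1.2 Gy.cm2 in an 8.6 kg infant.
print(round(dap_per_kg(1.2, 8.6), 2))  # ~13.95 in the paper's indexed units
```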
Demographic characteristics of the participants
Twenty-eight (56%) of the 50 patients were female, with a mean weight of 8.6 ± 7.1 (3.3-35) kg. Arterial and venous sheaths were inserted in 32 patients (64%), and the most common form of PDA was the conical type A. The mean pulmonary artery pressure was 25 ± 12.9 (12-65) mmHg. The PDA diameter at the pulmonary artery end was 2.2 ± 1.0 (1-4.9) mm and the shunt ratio was 1.6 ± 1.1 (1-5.8). When patients were randomly divided into two groups (normal and low dose), there was no statistically significant difference in demographic data, and the groups were homogeneous (p > 0.05, Table 1).
The method of vascular access, PDA type, PDA size, pulmonary artery pressure, shunt ratio, device types and diameters used, and device delivery routes were assessed in the cardiac catheterization of patients who received low- and normal-dose radiation. No difference was found between the groups regarding the measured parameters (p > 0.05, Table 1). Down syndrome was detected in six patients and pulmonary hypertension in 11 patients. Transcatheter PDA closure was successfully performed in all patients in each group without residual shunts or iatrogenic left pulmonary artery or aortic obstruction. There were no major complications in either group, and the minor complications (hematoma at the entrance point, transient hypoxemia during sedation, transient arrhythmias) were not related to the assigned radiation dose. After cardiac catheterization, patients' dose reports were automatically appended to the end of the cine images, and dose models were created. An example of the dose report of one of our patients is shown in Figs. 1 and 2.
While there was no statistically significant difference in the number of series and images or in fluoroscopy duration between the low- and normal-radiation dose groups (p > 0.05), a significant difference was observed in the total DAP, cine DAP, fluoroscopy DAP, total AK, frontal AK, and lateral AK measurements (p < 0.05) (Table 2, Figs. 3 and 4).
Measurements of DAP/kg (mGy.m2/kg) and calculations to standardize DAP values to patient weight were performed in both groups, and there was a statistically significant difference between the groups (mean 5.6 ± 3.6 [1.6-17.9] vs. mean 11.6 ± 5.7 [4.6-23.4], p < 0.0001) (Table 2 and Fig. 5). In the low-dose group, the median DAP/kg was 4.68 (IQR 3.2-6.89) and the median AK was 5.53 (IQR 4.2-7.7). We found that the highest radiation dose was from the lateral exposure in 48 patients and from the frontal tube in only two patients.
Discussion
Especially in children with complex congenital heart disease, ionizing radiation exposure increases with repeated cardiac catheterization. To protect the cardiology staff and the pediatric patient, the ALARA principle is often implemented. Other practices include using flat-panel detectors, avoiding unnecessary imaging, shortening fluoroscopy time, using nonionizing methods such as echocardiography whenever possible, using fluoroscopy recordings instead of cine recordings, using collimation, and keeping the detector close to the patient [1][2][3][4][5][6][7][8]. In pediatric patients, the number of fps is higher than in adults because of the higher heart rate, but it may be possible to reduce fps significantly, especially for procedures such as transcatheter closure of secundum atrial septal defects [10,11].
The harmful effect of radiation depends on the dose received by the patient; therefore, dose reduction is critical. Although the stochastic effect has no threshold dose, its frequency is expected to increase with increasing dose [12][13][14]. In our study, the number of fps was kept constant, in line with the general recommendation for children, in order to determine the effect of dose reduction, while the dose of ionizing radiation was set to either a normal or a low level. The use of low-dose radiation made a statistically significant difference in the DAP and AK measurements, by which the risk of deterministic and stochastic effects is quantified, without resulting in a decrease in procedural success. The randomized selection of groups, the selection of a homogeneous procedure such as transcatheter PDA closure, the standardized use by a single physician, and the prospective study design are the positive aspects of the study.
The Philips AlluraClarity® radiation dose reduction system was found to be efficient and effective by Bracken et al. [15] in 268 patients with coronary artery disease and by Sullivan et al. [8] in 430 pediatric patients, with the previous version of ClarityIQ. The newly developed system was evaluated in our study and resulted in significant DAP and AK reduction even when fps was maintained at the standard level.
AK and DAP measurements can be used to determine the deterministic and stochastic adverse effects of radiation. It is even more important that this can be achieved without compromising procedural success and without reducing the number of fps, which is an important indicator of image quality. Studies have found that partially reducing the number of fps further reduces the radiation dose. Because image sharpness is even less important in transcatheter closure of a secundum atrial septal defect, the number of fps can even be reduced to single digits [4,[10][11][12]. However, as was the case in the present study, the fps number may need to be kept higher because measurement of the diameters and length of the PDA is critical for proper device selection.
When DAP/kg was standardized, the results of our study were found to be significantly lower compared with other studies. To allow objective comparison with these studies, transcatheter PDA closure and the median DAP/kg (mGy.m2/kg), median AK (mGy), and interquartile measurements were used as units of measurement. Ghelani et al. [16] first published national radiation doses in 2014; in 2017, the results were updated with more than 2000 cases from seven centers [2]. Patel et al. [14] obtained the lowest DAP/kg and AK measurements for six different procedures, particularly atrial septal defect and PDA closure. In addition to these large studies, smaller studies by Borik et al. [12] achieved dose reductions of approximately 50% in atrial septal defect and PDA closure at 7.5 fps. Kyobashi et al. [17] demonstrated low DAP in a study that was not indexed by patient body weight. Compared with these studies, our study achieved the lowest DAP/kg and AK values while keeping, rather than lowering, the fps number (Table 3). The fine-tuning of the ClarityIQ program was found to be quite efficient. After this phase, the same or a similar study should be performed with a reduced number of fps to reach the lowest effective radiation dose.
In 48 patients, the highest AK dose was caused by the lateral exposure, and in only two patients by the RAO exposure originating from the frontal tube. This finding supports the observation that the highest radiation scattering measurements with biplane angiographic tubes are most often associated with excessive angulation of the lateral tube [20].
A limitation of our study may be that we did not reduce the fps number. Although we could certainly have achieved lower radiation values by reducing the frame rate, it would then not have been possible to isolate the effectiveness of the ClarityIQ system.
Fig. 1. First page of the Philips Azurion 7 biplane angiocardiography dose model for a patient from the low-dose group. Total DAP, cine and fluoroscopy DAP, total AK, and the AK dose for the frontal and lateral tubes are given in Gy.cm2 and mGy. In addition, the number of series, the number of images, and which part of the thorax receives the maximum AK dose through which tube are shown in the figure. (DAP, dose-area product; AK, air kerma).
Fig. 2. Second page of the Philips Azurion 7 biplane angiocardiography dose model for a patient from the low-dose group. DAP, total AK, and frontal and lateral AK doses are given in Gy.cm2 and mGy per dose. The number of series, the number of images, and the number of images per second are also shown. (DAP, dose-area product; AK, air kerma).
Fig. 3. Boxplot with linear scales of total, cine, and fluoroscopic DAP measurements in patients who received low or normal doses of ionizing radiation during PDA closure (p < 0.05). (DAP, dose-area product; PDA, patent ductus arteriosus).
Fig. 4. Boxplot with power scales of total, frontal, and lateral AK measurements in patients who received a low or normal dose of ionizing radiation during PDA closure (p < 0.05). (AK, air kerma; PDA, patent ductus arteriosus).
Table 1. Comparison of the demographic characteristics of the patient groups divided into low and normal radiation dose.
Table 2. Comparison of automatic dose measurement parameters of patients receiving low- and normal-dose radiation.
Table 3. Comparison of our study and other studies on ionizing radiation reduction during transcatheter PDA closure (median values).
DAP, dose-area product; AK, air kerma; * lowest median ionizing radiation values; † did not index DAP to weight.
Advances in Exosomes as Diagnostic and Therapeutic Biomarkers for Gynaecological Malignancies
Simple Summary The three major gynaecological cancers are ovarian cancer, endometrial cancer, and cervical cancer, which endanger women’s health worldwide. Significant progress has been made in the study of exosomes, which have been proven to be an important form of intercellular communication, as well as an important carrier for the uptake, transport, and release of cargo. Exosomes may also be promising diagnostic or prognostic markers for gynaecologic malignancies, which may improve the level of treatment of gynaecologic malignancies. This article reviews the latest research progress and systematic knowledge of exosomes in gynaecological malignant tumours in recent years, in order to provide a new perspective for the treatment of gynaecological tumours and promote the clinical application of exosomes in gynaecological malignancies. Abstract Background: Exosomes are extracellular vesicles that can be released by practically all types of cells. They have a diameter of 30–150 nm. Exosomes control the exchange of materials and information between cells. This function is based on its special cargo-carrying and transporting functions, which can load a variety of useful components and guarantee their preservation. Recently, exosomes have been confirmed to play a significant role in the pathogenesis, diagnosis, treatment, and prognosis of gynaecological malignancies. Particularly, participation in liquid biopsy was studied extensively in gynaecological cancer, which holds the advantages of noninvasiveness and individualization. Literature Review: This article reviews the latest research progress of exosomes in gynaecological malignancies and discusses the involvement of humoral and cell-derived exosomes in the pathogenesis, progression, metastasis, drug resistance and treatment of ovarian cancer, cervical cancer, and endometrial cancer. Advances in the clinical application of exosomes in diagnostic technology, drug delivery, and overcoming tumour resistance are also presented. Conclusion: Exosomes are potentially diagnostic and prognostic biomarkers in gynaecological malignancies, and also provide new directions for the treatment of gynaecological tumours, showing great clinical potential.
Introduction
Exosomes are membrane-coated particles that range in size from 30-150 nm and can transport several types of cargo, including proteins, lipids, genetic material, and others. Exosomes carry out intercellular communication and influence the functionality of recipient cells. This makes them promising candidates as therapeutic delivery vehicles.
Ovarian cancer (OC), cervical cancer (CC), and endometrial cancer (EC) are the three most frequent gynaecologic malignancies, and they contribute considerably to the global cancer burden. Ovarian cancer is the most common cause of cancer-related death among the gynaecological malignancies. More than 70% of OC patients are diagnosed at an advanced stage and relapse rapidly after initial treatment; thus, the 5-year survival rate of OC is low [1,2].
Figure 1. The biogenesis, content, and transport of exosomes. Exosomes originate from plasma membrane invaginations and then form early endosomes, which in turn form multivesicular bodies (MVBs) containing intraluminal vesicles (ILVs). Some MVBs then fuse with the plasma membrane and release ILVs into the extracellular environment as exosomes. During their formation, exosomes carry exogenous or endogenous cargoes and are finally released from the cell with a diameter of 30-150 nm. The membrane structure and contents of exosomes, including RNA, DNA, proteins, and lipids, which are transported to recipient cells to perform their functions, are also shown in the figure. Created with BioRender.com.
The lipid bilayer membrane structure of exosomes can effectively safeguard the molecules they contain. Engineered exosomes carry cell- or tissue-targeting peptides attached to their surface in order to achieve selective targeting of specific cells or tissues and to modulate the function of target cells as well as living tissues. Thus, they could be utilized as carriers for drug delivery, targeting specific cells or tissues in order to improve therapeutic efficacy and safety [14].
Exosomes and Gynaecologic Malignancies
Exosomes play a crucial role in intercellular communication [15]. Tumour-derived exosomes have emerged as mediators of tumour formation and progression, metastatic spread, enhanced angiogenesis, and drug resistance, by regulating stromal cells and the tumour microenvironment (TME). Tumour exosomes primarily contain three major components, namely DNA, RNA, and protein [16], and their biogenesis, mechanisms involved in tumorigenesis, development, and treatment, as well as biomarker development in cancers are under investigation [17,18].
Exosomes in Ovarian Cancer
Ovarian cancer is one of the leading causes of cancer death in women and the most common cause of death from gynaecological malignancies globally. More than 70% of OC patients are identified at an advanced stage, at which the 5-year overall survival rate is lower; the average 5-year survival rate across all OC patients is about 47% [2]. OC patients are usually diagnosed at an advanced stage, in part because of the lack of early diagnostic tools, and are prone to rapid relapse. Focused therapy, treatment-response monitoring, and the development of further applications therefore need more study, which is crucial for reducing OC mortality.
Exosomes Derived from Body Fluids
The use of exosomes for the early diagnosis and effective treatment of OC has advanced significantly in recent years. Nucleic acids and proteins identified in the serum of OC patients are helpful for the early diagnosis of OC. These molecules can be utilized as diagnostic or prognostic indicators because they have been established to be connected to the ovary. Zhu et al. [19] discovered that the expression level of miR-205 in plasma exosomes of OC patients was significantly higher than that of the benign and control groups. In addition, the level of miR-205 in the serum of OC patients with stage III-IV disease was higher than that of patients with stage I-II disease. FIGO stage III/IV, high grade, ascites, higher levels of CA-125, lymph node metastasis, and prognosis were strongly correlated with low plasma exosome-derived fragile site-associated tumour suppressor (FATS) levels in patients with OC [20]. Xiong et al. [21] found that miR-200b was increased in serum exosomes of OC patients and inhibited KLF6 expression to promote macrophage M2 polarization in OC, playing a cancer-promoting role. Seven miRNAs were found to be upregulated and two miRNAs to be downregulated in the serum exosomes of OC patients, according to an exosomal miRNA study. Further analysis revealed that miR-4732-5p may be a promising candidate biomarker for the diagnosis of epithelial OC [22].
Exosomes Derived from Cells
Exosomes from OC cells have recently been found to include a range of chemicals that have been shown to be associated with tumour progression, metastasis, angiogenesis, or drug resistance. OC cell-derived exosomes induce premetastatic niche formation, laying the groundwork for rapid metastatic invasion in a distant TME [23]. For instance, exosomal ANXA2 from OC cells promotes the mesothelial-mesenchymal transition (MTT) as well as the degradation of the extracellular matrix of human peritoneal mesothelial cells. This ultimately influences the premetastatic microenvironment of OC, providing conditions for the intraperitoneal implantation and metastasis of OC [24]. Thus, tumourderived exosomes could serve as biomarkers suitable for liquid biopsy and new roles as chemotherapeutic targets.
Ovarian cancer cell-derived exosomes encourage the angiogenesis and migration of vascular endothelial cells in vitro and in vivo, which includes exosomal miR-130a [36], lncRNA ATB [37], and PKR1 [38] playing a role in it, while miR-92b-3p [39] secreted by OC cells is antiangiogenic in OC.
Drug-resistant OC cells' exosomal miR-429 [40] and miR-21-5p [41] can confer chemoresistance on other OC cells. Reduced O-GlcNAcylation of SNAP-23 promotes exosome release in OC cells, which enhances exosome-mediated efflux of cisplatin from cancer cells, which leads to increased chemoresistance [42]. Alharbi et al. explored that the degree of platinum resistance induced in OC cells differed when exposed to low oxygen tension (1% oxygen), so they identified a set of glycolysis-related proteins and it was illustrated that chemoresistance transmission to OC cells by exosomes is related to hypoxia-induced glycolysis pathway protein expression [43].
In platinum-resistant OC cell lines, TMEM205 transmembrane protein expression is 10-20 times higher, which may contribute to OC through exosome-mediated platinum drug efflux [44]. Exosomal CLPTM1L from a cisplatin-resistant OC cell line is capable of conferring cisplatin resistance in a drug-sensitive OC cell line [45]. Exosomes secreted by chemoresistant OC cells could promote angiogenesis, miR-130a in exosomes might play a crucial role in this process [36], and targeted delivery of exosomal miR-484 could induce normalization of blood vessels, sensitizing OC to chemotherapy [46].
Exosomes in Cervical Cancer
Cervical cancer is the fourth most common cancer in women globally, and despite being among the most preventable cancers, CC is consistently the second leading cause of cancer death among women between the ages of 20 and 39 [2]. CC develops in the squamocolumnar junction of the cervix, and human papillomavirus (HPV) is thought to be responsible in the great majority of cases. HPV appears to be a major cause of cervical squamous cell carcinoma and has been the focus of research on CC diagnosis as well as treatment over the past few decades [47]. Exosomes also play a crucial part in the growth of CC. The progression of CC initially occurs in the form of local expansion, so the creation as well as maintenance of a TME that supports the growth and spread of tumour cells is the key to CC progression. CC cell-derived exosomes play an integral role in intercellular communication, which promotes tumour growth.
Exosomes Derived from Body Fluids
Several nuclear transporters in exosomes of CC cells were identified in various studies and their presence was also verified in serum, combined as a set of biomarkers, and identified as potential biomarkers for diagnosis [48]. The level of serum exosomal lncRNA DLX6-AS1 in CC patients was significantly higher, as compared to that in CIN patients and normal controls [49]. In contrast to healthy controls, the plasma exosomal miR-125a-5p expression level of CC patients was significantly lower, which might be a potential marker to distinguish noncervical cancer and cervical cancer [50].
Exosomes Derived from Cells
The progression of CC largely relies on tumour angiogenesis, which is dependent on tight interactions among different cellular components of the TME, specifically among tumour cells, endothelial cells, and immune cells [51]. Exosomes play a crucial part in intercellular communication and interactions. Thus, exosomes have also been used for studying the underlying mechanisms of CC tumour angiogenesis.
Angiopoietins, which regulate vascular development and are essential for vascular remodelling in inflammatory conditions and tumour angiogenesis, are regulated by a receptor known as tyrosine kinase with immunoglobulin and epidermal growth factor homology 2 (TIE2). Duet et al. [52] discovered that CC cell-derived exosomes deliver TIE2 protein to macrophages, which induces the formation of TIE2-expressing macrophages (TEMs) to promote CC angiogenesis. By upregulating hedgehog-GLI signalling, CC exosomes also encourage angiogenesis in human umbilical vein endothelial cells (HUVECs), and exosomes from HPV-positive (SiHa and HeLa) cells are more angiogenic [53]. miR-663b is also confirmed to exist in CC exosomes and promote angiogenesis by inhibiting the expression of vinculin in vascular endothelial cells [54].
MiRNAs, lncRNAs, and other functional RNAs can be transported between cells by exosomes. Among them, exosomal miR-1323 was secreted by cancer-associated fibroblasts (CAFs) transferred to CC cells, which promote CC progression and radioresistance [55]; while exosomal miR-1468-5p released by CC cells increases tumour immune escape through immunosuppressive effects by lymphatic endothelial cells (LECs) in TME, high serum exosomal miR-1468-5p levels correlate with an immunosuppressive state and poor prognosis in CC patients [56]. Similarly, exosomal miR-142-5p secreted by CC cells also mediates immunosuppression by inhibiting indoleamine 2, 3-dioxygenase expression by LECs [57]. Exosomal miR-663b promotes EMT and metastasis in CC cells by targeting MGAT3 under TGF-β1 stimulation [58]. LncRNAs have also been found in exosomes, and exosomal lncRNA UCA1 from CC stem cells promotes self-renewal and differentiation of CC stem cells through the miRNA-122-5p/SOX2 axis [59]; exosomes from cancer cells produce lncRNA AGAP2-AS1, which regulates the miR-3064-5p/SIRT1 axis to boost the proliferation of CC cells [60]; LncRNA LINC01305 can also be transferred to recipient cells via exosomes in order to enhance CC progression [61].
Exosome-carrying proteins are also implicated in the development of CC. For instance, Wnt2B protein from CC cells is delivered to fibroblasts in exosome form, where it induces fibroblast activation into CAFs and advances CC [62]. HPV E6 transcripts were also detected in exosomes of CC cells, which might serve as potential exosome biomarkers for CC [63]. From these instances, it could be recognized that exosomes play a huge role in supporting CC progression. In addition, the clinical value of exosomes in CC diagnosis and treatment is worth exploring.
Exosomes in Endometrial Cancer
The second most frequent cancer of the female reproductive system and the sixth most frequent cancer in women is endometrial cancer [3]. EC originates in the lining of the uterus and occurs primarily in postmenopausal women. Exosomes are a key pathway utilized by tumour cells in order to establish a supportive microenvironment.
Exosomes Derived from Body Fluids
In endometrial liquid biopsy, exosomes have great application prospects, including nucleic acids carried by exosomes isolated from peritoneal fluid, urine, and serum of EC patients that may become new diagnostic biomarkers for EC.
Plasma-derived exosomal miR-15a-5p and exosomal lectin galactoside-binding soluble 3 binding protein (LGALS3BP) in EC patients were significantly elevated compared with controls, among which the integration of miR-15a-5p and serum tumour markers (CEA and CA125) achieved an AUC value of 0.899 [64]. Exosomal LGALS3BP also promotes EC cell growth and HUVEC angiogenesis [65]. Fan et al. also screened some miRNA markers in EC patient serum and verified the consistency in EC serum or plasma exosomes; exosomal miR-20b-5p [66] and miR-151a-5p [67] were considered potential noninvasive biomarkers for EC diagnosis.
Exosomes Derived from Cells
Exosomes could transport functional RNAs between cells. miR-192-5p released by TAMs, miRNA-503-3p secreted by human umbilical cord blood mesenchymal stem cells (hUMSCs), and miR-765 derived from CD8+ T cells can all be transferred into EC cells and inhibit EC progression [68][69][70]. Along with miRNAs, exosomal lncRNA NEAT1 from CAFs downregulates the miR-26a/b-5p-mediated STAT3/YKL-40 pathway to promote EC progression [71]. M2-polarized macrophages release exosomes hsa_circ_0001610 for transfer to EC cells, which significantly downregulates the radiosensitivity of EC cells through endogenous competition for miR-139-5p [72]. Nevertheless, miR-26a-5p derived from EC cells could significantly decrease the migration and tube formation ability of human LECs, and could inhibit the proliferation, migration, and invasion of EC cells [73]. Exosomes have been regarded as the key components of communication between cancer cells and other cells in the TME, and RNAs are currently being investigated as significant cargoes (Table 1). Therefore, exosomes derived from other cells can regulate proliferation, migration, and other phenotypes of tumour cells. The effects on other cells, which include the promotion of angiogenesis or lymphangiogenesis, the effect on the distribution of immune cells, and the mutual communication between tumour cells, are all based on the function of exosomes, so exosomes play a role in gynaecological tumours. The mechanism of action still must be further explored, and the translation to clinical application has broad prospects, which provides a direction for the diagnosis of disease progression and the development of therapeutic targets (Supplementary Table S1).
Clinical Diagnosis and Therapeutic Applications of Exosomes in Gynaecologic Malignancies
Exosomes play an important role in the diagnosis and prognosis of illnesses, which places great demands on the sensitivity and specificity of markers to detect diseases. Simultaneously, the type and quality of the tested samples also influences the biological understanding of exosomes and the development of biomarkers. Due to improvements in laboratory methods and technology, exosomes are now available for clinical use. These new therapeutic and diagnostic approaches utilizing exosomes have made some progress in the diagnosis and therapeutic applications of gynaecological malignancies.
To date, the lack of a selective procedure to separate specific extracellular vesicle populations in body fluids or those abundantly released by tumour cells impedes the use of exosomes in the clinic. A small subset of exosome subtypes with specific or prominent functions is masked by a large number of nonfunctional EVs. In fact, the available technical procedures only allow the distinction of EVs based on their size and density, regardless of endosomal or plasma membrane origin. Therefore, there is an urgent need to develop techniques that would allow the isolation of a pure exosome fraction from bulky vesicular populations, as well as to comprehensively define the many subtypes of EVs.
Diagnostic Technology
Exosome-involved liquid biopsy is a new noninvasive and individualized method that can provide valuable information for the diagnosis of low-access tumours through the presence of tumour substances in body fluids [75]. This could address the lack of sensitivity, specificity, and survival benefit of serum markers [76], as well as the highly invasive, local sampling of tissue biopsies.
Whether serum or plasma, isolation of EVs from these two blood components holds the potential to utilize EVs as disease biomarkers. Nevertheless, it remains unclear whether distinct EV subsets exist in plasma and serum. According to the research, blood sampling methods, including the anticoagulant used and the centrifugation protocol chosen, might influence the EV analysis [77]. By combining size-exclusion chromatography (SEC) with OptiPrep density gradient centrifugation, Vergauwen et al. [78] fractionated blood plasma to obtain EVs for a deeper biological understanding of EVs and the development of biomarkers. Cho et al. [79] screened noncoding RNAs from plasma exosomes, examined the association between ncRNA-mRNA networks and cancer, and built a method to screen eight types of RNA combinations as a new method for CC diagnosis. Krishnan et al. [80] prepared a new material, Chitosan grafted butein (CSB), as well as processed CSB-modified flexible screen-printed electrodes for electrochemical biosensing of exosomal CD24-specific nucleic acids at ultralow sample concentrations, which is expected to be utilized in OC diagnosis. The fluorescent gold nanoclusters with protein templates have highly fluorescent properties and biocompatibility. Combining them with exosomes successfully obtained nuclear staining of CC cells and was compatible with membrane-staining dyes, which proposes that the use of exosome synthesis for cellular imaging applications is also feasible [81].
Therapeutic Advances
As natural intercellular information carriers, exosomes are one of the ideal targets for the development of drug-delivery vehicles due to their nanoscale size, excellent stability, and biocompatibility. Exosome-based drug delivery has the potential to reach cells and tissues that are currently inaccessible by other drug-delivery technologies. Exosomes also have the advantage of low toxicity and low immunogenicity, reducing adverse effects on major organ systems (especially the heart) and reducing the risk of rejection and inflammation. HEK-293T cell-derived exosomes have been loaded with safranin and curcumin compounds as chemotherapeutic agents. ExoCrocin and ExoCurcumin enter tumour cells, and the synergistic effect of HPV L1-E7 polypeptide vaccine construction could significantly induce T cells' immune response and antitumour effects [82]. Liposome nanoparticles containing ruthenium (II)-curcumin complexes are significantly cytotoxic to Hela cells and exhibit anticancer properties [83]. Bhatta et al. [84] established a multivalent phosphatidylserine binder named ExoBlock in order to block the activity of human OC-associated immunosuppressive exosomes as well as enhance T-cell-mediated tumour-suppressive effects. The cytotoxic drug paclitaxel (PTX) improved the production of exosomes, and this exosome-mediated drug efflux attenuated drug function. Omeprazole and GW4869 were discovered to be exosome inhibitors that can stop the efflux of PTX [85].
Exosomes derived from immune cells, cancer cells, and normal cells activate tumour immunity and show great potential in tumour therapy [86]. Among immune cells, exosomes released by B cells, dendritic cells (DCs), macrophages, and plasma cells activate tumour immunity by expressing tumour antigens, functioning in antigen presentation, triggering T-cell responses, and promoting cytokine release, so they have the potential to become a carrier of tumour vaccines.
Exosomes are involved in the pathogenesis, progression, and metastasis of gynaecological malignancies. Therefore, inhibiting the release or uptake of exosomes may be an effective method to inhibit tumour progression or metastasis. The acidic TME can promote the release of exosomes [87], which may alleviate the accumulation of toxic substances in cells [88]. Therefore, proton pump inhibitors have been used to reduce exosome levels in cancer models [85,89]. Lee et al. [90] found that the knockdown of monocarboxylate transporter 1 (MCT1) and its partner CD147 reduced the release of exosomes from glioma cells, while overexpression significantly increased the release of exosomes. It is suggested that MCT1 and CD147 may play a key role in suppressing exosome secretion in tumours. In addition, various exosome uptake inhibitors, including amiloride, dynasore, chlorpromazine, and heparin [91], have been developed to target the process of exosome uptake by recipient cells, which is dependent on different molecules and glycoproteins on exosomal membranes and recipient cells.
Overcome Chemoresistance
About 90% of cancer-related deaths are attributable to drug resistance, a significant obstacle to effective cancer treatment. The potential of exosomes to overcome cancer drug resistance has been exploited. Exosomes can bypass endosome capture and diffuse uniformly into the cytoplasmic matrix to enhance the anticancer effects of chemotherapy when used to transport cisplatin into cisplatin-resistant OC cells [92]. Tumour suppressor miRNAs are new targets for tumour therapy, but the difficulty of miRNA delivery limits their clinical application. Exosomes have been used as carriers for OC miRNA replacement therapy, and the synthesized miRNAs are loaded into exosomes by electroporation, which might provide a new direction for exosomes as vectors of drug delivery to increase tumour treatment sensitivity [93]. Exosome-mediated targeted delivery of miR-484 causes vascular normalization and reconnection of tumour vasculature and makes OC cells more chemosensitive [46]. Overexpression of miR497 could overcome OC chemotherapy resistance by inhibiting the mTOR pathway. Therefore, Li et al. created an exosome-liposome hybrid nanoparticle codelivery of TP and miR497, which could effectively enrich in the tumour area, improve tumour cell apoptosis, exert significant anticancer activity, and overcome chemoresistance in OC [74]. Exosomes are anticipated to have a significant role in the treatment of drug-resistant gynaecological cancers in the future (Figure 2).
Conclusions and Prospects
Research on exosomes primarily concentrates on specific biomarkers with diagnostic and prognostic implications as well as therapeutic targets in gynaecological malignant disorders. Exosomes are mostly studied in OC because of its rapid progression, high recurrence rate, and poor prognosis. Evidence accumulated in the past suggests that some exosomes have strong tumour-promoting effects in gynaecological malignancies. Nevertheless, the role of tumour-derived exosomes and their cargo still needs further exploration in order to clarify their roles, mechanisms, and application prospects. In order to better understand tumour progression, current research has been focused on developing exosome-based diagnostic as well as prognostic tools for the effective control and management of gynaecological malignancies.
With noninvasive and individualized advantages in gynaecological oncology applications, exosome participation in liquid biopsies, which are characterized by the analysis of tumour material in the peripheral circulation, might provide valuable information for the diagnosis of low-access tumours. So far, the potential shown by exosomes in the early diagnosis of tumours is expected to be a promising alternative to traditional tissue-sampling methods, but is still in its infancy. More research is needed to further elucidate the mechanism of exosome release, identification of tumour tissue origin, and biological significance, and to improve technical stability and reproducibility by implementing standardized procedures for clinical application. Exosomes in serum are expected to serve as biomarkers for screening or early diagnosis of cervical cancer, which can help overcome the limitations of sampling locations and personnel due to the invasiveness of cervical cancer screening and narrow the gap between developed countries and the rest of the world.
Precision medicine was first proposed in 2015, and since then, people have been seeking precise diagnosis and customized therapy. Exosomes have become a new research hotspot because of their widespread existence, stability, and ease of access in vivo, and have great prospects in assisting the accurate diagnosis and treatment of diseases. Exosomes have benefits over other vesicles, in that they can carry genetic material or proteins for intercellular communication and material transfer. As a result, there is growing research interest in the role of exosomes as drug carriers. Engineered exosomes, which are constructed by modifying the surface molecules of exosomes to endow them with cell- and tissue-targeting specificity, are powerful tools for achieving specific cell-targeted transport. By loading them with functional genetic material, proteins, or other small molecules, therapeutic effects can be exerted on specific diseased areas or cells. Exosomes may also be used as nanocarriers to deliver chemotherapeutic drugs in order to overcome drug-resistant tumours. Further investigation, demonstration, and standardized clinical trials are needed to verify and ensure safety and accuracy before routine clinical use is realized.
In the future, regular monitoring with liquid biopsies to elucidate resistance acquired through genetic alterations, such as genome-based prediction of drug response, may enable liquid biopsies to serve as a companion biomarker for large-scale drug trials [94]. In addition, tumour-derived exosomes themselves may also become targets for tumour therapy, including inhibiting the critical role of exosomes in tumour metastasis. The ability of exosomes to induce antitumour immune responses in the cancer environment can be used to develop safe and reliable tumour vaccines [95].
In conclusion, exosomes may be an important player in addressing some of the key unanswered questions in the onset, progression, and treatment of gynaecological cancers. It is hoped that more and more research will help diagnose and treat cancer and improve outcomes for gynaecological cancer patients around the world.
Arctic Sea Ice: Decadal Simulations and Future Scenarios Using BESM-OA
Important international reports and a significant number of scientific publications have reported on the abrupt decline of Arctic sea ice and its impact on the global climate system. In this paper, we evaluated the ability of the newly implemented Brazilian Earth System Model (BESM-OA) to represent Arctic sea ice and its sensitivity to CO2 forcing, using decadal simulations (1980-2012) and future scenarios (2006-2100). We validated our results with satellite observations and compared them to Coupled Model Intercomparison Project, Phase 5 (CMIP5) models for the same numerical experiment. BESM results for the seasonal cycle are consistent with CMIP5 models and observations. However, almost all models tend to overestimate sea ice extent in March compared to observations. The correct evaluation of the sea ice minimum record, in terms of timing, spatial pattern, and area, remains a limitation of coupled global climate models. Looking at spatial patterns, we found a systematic model error in September sea ice cover between the Beaufort Sea and East Siberia for most models. Future scenarios show a decrease in sea ice extent in response to an increase in radiative forcing for all models. From the year 2045 onwards, all models show a dramatic shrinking of sea ice and ice-free conditions at the end of the melting season. The projected future sea ice loss is explained by the combined effects of the amplified warming in Northern Hemisphere high latitudes and feedback processes.
Introduction
Sea ice is an important and complex component of the global climate system, acting both as an indicator and as an amplifier of climate change [1]-[3]. Notz and Marotzke [4] and Doescher et al. [5] indicate that sea ice cover is a more robust indicator of climate change than temperature trends alone, because sea ice changes depend on integrated changes in atmospheric and oceanic variables, with nonlinear impacts on various temporal and spatial scales under global climate forcing.
Over the last 30 years, abrupt changes in sea ice have become evident in the Arctic, especially in the summer months of 2007 and 2012, when Sea Ice Extent (SIE) reached record minima of 4.2 × 10⁶ km² and 3.4 × 10⁶ km², respectively. Satellite data have shown that the sea ice loss has happened faster than forecasted and is unprecedented in the past 1.5 millennia [1] [6]-[8].
Sea ice age and thickness have also decreased rapidly, resulting in a sea ice cover that is more sensitive to dynamic and thermodynamic forcing [1] [7]. There is agreement among scientists about the direct relationship between the shrinking of Arctic sea ice and global warming. According to Holland and Bitz [3] and Curry et al. [9], simulated polar warming ranges from 1.5 to 4.5 times the global mean warming and is widely related to the sea ice-albedo feedback mechanism. Most climate models agree that the global air temperature will continue to rise, particularly in northern high latitudes, and that the Arctic will become ice free in summer in approximately 30 years, in response to an increase in atmospheric greenhouse gas concentrations [1] [5]. The impacts of melting sea ice in recent and future decades have not yet been fully understood and accurately quantified. Nonetheless, recent studies suggest that sea ice loss is linked to cold winter extremes in the northern continents, hot summer extremes over mid-latitude continents, as well as wet summers and flooding in Eurasia [10]-[13].
Besides the important role of sea ice in the climate system, knowing the dynamics and geographical extent of sea ice cover is also essential for human activities such as navigation, oil exploration, and fishery [14] [15]. According to Cochran et al. [16] and Meier et al. [15], changes in the Arctic threaten the infrastructure, health, and safety of the Arctic indigenous people and present a significant risk to local marine biodiversity. According to Whiteman et al. [17], sea ice changes will affect all nations, not just those in the world's far north, and all should be concerned about changes that are happening in the Arctic region. In that sense, global climate models, even with their inherent uncertainties and limitations, are powerful tools for better understanding the changes in sea ice as well as for providing future scenarios to guide decision makers, governments, and local communities, among others.
The recent development of the Brazilian Earth System Model (BESM) is an effort of several institutions and researchers led by the Brazilian National Institute for Space Research (INPE) to build a multidisciplinary research framework with the intent to understand the causes of global climate change, its effects, and its impacts on society. The BESM model also aims to contribute to the Program for Climate Model Diagnostics and Intercomparison (PCMDI) with short-term and long-term simulations, as well as to provide future scenarios of climate change [18]. Based on several studies and reports [5] [14] [17] [19], and understanding the importance of sea ice in the global climate system and the global economy, BESM simulations are expected to contribute, among other variables, short- and long-term simulations of sea ice. BESM simulations can also be useful for future studies on ocean-atmosphere-sea ice coupling processes and the impacts of sea ice loss around the world.
In this paper, we evaluate decadal simulations (1980-2012) and future scenarios (2006-2100) of SIE as simulated by two versions of BESM and by other Coupled General Circulation Models participating in the Coupled Model Intercomparison Project, Phase 5 (CMIP5). Our goal is to evaluate the first results on the ability of BESM to represent past and future sea ice changes and the sensitivity of sea ice to radiative forcing, using the Taylor protocol [20] [21]. The paper is structured as follows: first, we present the data sources in Section 2. Then, in Section 3, we examine the seasonal cycle, the spatial pattern and the minimum records of the Arctic sea ice, comparing the BESM decadal simulations to satellite observations and other CMIP5 models. In Section 4, we investigate the future scenarios for two different versions of BESM and the CMIP5 models, using two different scenarios, the Representative Concentration Pathways RCP4.5 and RCP8.5. We discuss the results and also indicate possible causes for the differences in sea ice variation between BESM versions 2.3 and 2.5. Finally, in Section 5 we present our conclusions and lay out our recommendations for future work.
Data Sources
This study uses short-term simulations (decadal hindcasts) and long-term simulations (future scenarios) of 11 state-of-the-art General Circulation Models (GCMs) and Earth System Models (ESMs), listed in Table 1. The numerical experiment design follows the CMIP5 protocol for decadal data and future projections, based on the Taylor protocol [20] [21]. CMIP is an international effort of the scientific community to provide simulations from many different climate models in order to better understand past and future climate changes, as well as to provide a scientific data set for the Intergovernmental Panel on Climate Change (IPCC).
The BESM ensemble members of the decadal simulations were integrated for 10 years, each with initial conditions (IC) on 1-10 December of the years 1960, 1965, 1970, 1975, 1980, 1985, 1990, 1995, 2000 and 2005. Three of these ensembles (1960, 1990 and 2005) were extended for an extra 20 years for each of the 10 members, completing 30-year-long integrations each. These simulations used atmospheric CO2 concentrations derived from in situ air samples collected at Mauna Loa Observatory, Hawaii [18]. The atmospheric model initial conditions for each ensemble member used the National Centers for Environmental Prediction (NCEP-NCAR) reanalysis fields at 0000 UTC of each day from 1 to 10 December of the chosen years. The ocean initial states were taken from the same dates of a spin-up run of MOM4p1 that used prescribed atmospheric fields of momentum, solar radiation, air temperature, and freshwater, as described in Nobre et al. [18].
The future scenarios are defined by the Representative Concentration Pathways (RCPs); each RCP defines a specific emissions trajectory and the corresponding radiative forcing. The radiative forcing values in the year 2100, relative to pre-industrial values, are 4.5 W·m^-2 and 8.5 W·m^-2 for RCP4.5 and RCP8.5, respectively, for simulations covering the period from 2006 to 2100. The CO2 concentration in the year 2100 is approximately 600 ppm for RCP4.5 and 1300 ppm for RCP8.5.
We compared BESM results with CMIP5 model simulations that used the same numerical experiment setup. Still, the models differ in spatial resolution, physical components and parameterizations. For the decadal simulations we chose to work with time series from 1980 to 2012 due to the availability of satellite observations for comparison. The SSM/I (Special Sensor Microwave Imager) satellite observations obtained from the National Snow and Ice Data Center (NSIDC) were used to validate the numerical simulations. For all simulations we calculated the SIE, defined as the total area of the grid cells where the sea ice concentration is greater than 15%.
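As a concrete illustration of this definition, the short sketch below (not the authors' code; the variable names are assumptions) sums the areas of all grid cells whose sea ice concentration exceeds the 15% threshold.

```python
# Minimal sketch: Sea Ice Extent (SIE) from a gridded sea ice concentration field,
# counting the full area of every cell whose concentration exceeds 15%.
import numpy as np

def sea_ice_extent(sic, cell_area, threshold=0.15):
    """sic: 2-D concentration (0-1); cell_area: 2-D cell areas in km^2."""
    ice_covered = sic > threshold          # cells counted toward extent
    return float(np.sum(cell_area[ice_covered]))

# Toy 2x2 grid: three cells exceed the threshold
sic = np.array([[0.9, 0.10], [0.5, 0.2]])
cell_area = np.full((2, 2), 1.0e3)         # each cell 1,000 km^2
print(sea_ice_extent(sic, cell_area))      # -> 3000.0 km^2
```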
BESM-OA Model
In this work, we used two versions of the BESM Coupled Ocean-Atmosphere (BESM-OA) model: BESM-OA V2.3 for the decadal and RCP simulations, and BESM-OA V2.5 for the RCP simulations only. The main differences between these two versions are the microphysics scheme proposed by Ferrier et al. [31] and a new surface layer scheme based on Jimenez and Dudhia [32], described by Capistrano et al. [33] [34].
Both BESM versions used in this research are composed of the INPE/CPTEC atmospheric general circulation model (AGCM) coupled to NOAA/GFDL's Modular Ocean Model version 4p1 (MOM4p1) oceanic general circulation model (OGCM) via GFDL's Flexible Modular System [18] [35] [36]. The INPE/CPTEC AGCM has a spectral horizontal resolution truncated at triangular wave number 62, giving an equivalent grid size of 1.8758 degrees in latitude and longitude, and 28 sigma levels unevenly spaced in the vertical (i.e., T062L28). The exchanges of heat, moisture and momentum between the surface and the atmosphere in the INPE/CPTEC AGCM are computed differently over the ocean and over the continents by the various physical processes that define the surface fluxes.
The ocean model MOM4p1 [35] from GFDL includes the Sea Ice Simulator (SIS), described in Winton [37]. SIS is a dynamical model with three vertical layers (two ice and one snow) and five ice thickness categories. The elastic-viscous-plastic technique of Hunke and Dukowicz [38] is used to calculate ice internal stresses, and the thermodynamics is a modified Semtner three-layer scheme [39]. SIS calculates the concentration, thickness, temperature, brine content, and snow cover of an arbitrary number of sea ice thickness categories (including open water), as well as the motion of the complete ice pack. Additionally, the model is responsible for calculating ice/ocean fluxes and communicating fluxes between the ocean and atmosphere models globally.
The MOM4p1 horizontal grid resolution is set to 1° in the longitudinal direction; in the latitudinal direction the grid spacing is 1/4° in the tropical region (10°S-10°N), decreasing uniformly to 1° at 45° and to 2° at 90° in both hemispheres. For the vertical axis, 50 levels are adopted, with a 10 m resolution in the upper 220 m, increasing gradually to about 370 m of grid spacing in the deeper layers. We used FMS to couple MOM4p1 and the CPTEC/AGCM. Thus, wind stress fields are computed within MOM4p1, using the Monin-Obukhov scheme, from the fields 10 meters above the ocean surface. Adjustments were made to the Monin-Obukhov boundary layer scheme, whose parameters were tuned according to the wind fields output by the CPTEC AGCM. The AGCM receives the following two fields from the coupler: sea surface temperature (SST) and ocean albedo from the ocean and sea ice models, at an hourly rate (the coupling time step). Adjustments were also made to the ocean shortwave penetration parameters because the CPTEC AGCM supplies visible and infrared shortwave radiation. The coupling variables supplied by the AGCM are as follows: freshwater (liquid and solid precipitation), specific humidity, heat, vertical diffusion of velocity components, momentum fluxes, and surface pressure.
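To make the variable latitudinal spacing easier to picture, the following illustrative sketch (an assumption about the functional form, not the actual MOM4p1 grid generator) builds a latitude axis whose spacing grows from 1/4 degree near the equator to 1 degree at 45 degrees and 2 degrees at the pole.

```python
# Illustrative only: a latitude axis with spacing that widens away from the tropics,
# qualitatively matching the grid description in the text.
import numpy as np

def spacing(lat):
    a = abs(lat)
    if a <= 10.0:
        return 0.25                                  # 1/4 degree in the tropics
    if a <= 45.0:
        return 0.25 + (a - 10.0) / 35.0 * 0.75       # grows to 1 degree at 45
    return 1.0 + (a - 45.0) / 45.0 * 1.0             # grows to 2 degrees at the pole

lats = [0.0]                                         # northern half; the south is symmetric
while lats[-1] < 90.0:
    lats.append(min(90.0, lats[-1] + spacing(lats[-1])))
print(len(lats), np.round(lats[:4], 2), np.round(lats[-3:], 2))
```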
The microphysics of Ferrier et al. [31], used in BESM-OA V2.5, replaced the large-scale precipitation scheme used in BESM-OA V2.3. This new microphysics scheme computes changes in water vapor, cloud water, rain, cloud ice and precipitation ice. BESM-OA V2.5 also uses a new surface layer scheme based on Jimenez and Dudhia [32] and described by Capistrano et al. [33] [34]. In this scheme, the surface values and the first AGCM level values are used to assess wind, air temperature and humidity at 10 m. The changes introduced lead to a more consistent surface layer formulation, resulting in near-surface wind, air temperature and humidity that are closer to observations than in the previous BESM version. This occurs mainly over the ocean, where those variables are important for computing the heat fluxes at the ocean-atmosphere interface.
Seasonal Cycle
Seasonal melt-freeze transitions make it important to monitor sea ice continuously over the Arctic. Sea ice formation, growth and decay are closely related to air temperature, ocean heat content, albedo and heat fluxes, and hence can vary strongly from month to month [5] [40].
Thus, we present in this section the Arctic seasonal cycle of sea ice, in order to better understand the differences between the models studied, with a focus on the performance of the BESM-OA V2.3 model. First, to understand the ability of BESM-OA V2.3 to simulate the seasonal cycle in relation to the observations and the other CMIP5 models, we present in Figure 1 the seasonal cycle of the climatological average of SIE from the CMIP5 models and from observed values for the period from 1980 to 2010. All of the models were able to represent the seasonal cycle of the Arctic sea ice. Large oscillations between summer and winter are evident, with sea ice growing through autumn and winter, reaching a peak in March, and then declining throughout spring and summer as the melting season progresses. However, most models overestimate SIE values in winter (except the MPI-ESM-LR model) and underestimate them in summer (except HadCM3 and NCAR-CCSM4). The BESM-OA V2.3 ensemble agrees quite well with observations and satisfactorily represents the seasonality of sea ice, although the model's sea ice decays more rapidly than observed in summer and autumn. The observations (BESM-OA V2.3) show that Arctic SIE varies between approximately 15 × 10^6 km^2 (18 × 10^6 km^2) at the winter maximum and 6 × 10^6 km^2 (6 × 10^6 km^2) at the summer minimum. The differences between the models' winter and summer performance are in agreement with [1] [26] [41] [42]. It is clear that the BESM-OA V2.3 model, even with an overestimation during winter, presents a very good agreement in summer, when SIE reaches critical values.

Based on our results, Sortberg et al. [42] and Karlsson and Svensson [43], we suggest the following scheme to explain the differences between the models' winter and summer performance in representing SIE. First, the presence of sea ice strongly affects the surface albedo, which has a key influence on the energy budget and is directly linked to the cloud-albedo and cloud-radiation effects. Clouds are linked with the energy budget by reflecting shortwave radiation back to space and by trapping longwave (LW) radiation and radiating it back to the surface, providing one of the strongest feedbacks in the climate system [44]. Second, in wintertime the amount of solar radiation is low or non-existent, and the ability of clouds to re-emit LW radiation to the surface produces a positive cloud radiative effect on the surface energy budget. On the other hand, during the seasons with solar radiation, the positive greenhouse effect competes with a negative cloud albedo effect, because clouds decrease the amount of incident solar radiation at the surface. Finally, recent publications using CMIP3 and CMIP5 models [42] [43] suggest that models generally tend to underestimate the amount of LW radiation re-emitted back to the surface in winter. As a consequence of these processes, the models tend to overestimate SIE in wintertime. Additionally, the annual amplitude of sea ice cover depends inversely on the model's sea ice albedo [42] [43]. The BESM-OA V2.3 results agree with this scheme, as both downward and upward LW radiation at the surface are underestimated in winter; the ensemble mean is lower than the mean of the observations by approximately 30 W·m^-2 (not shown). Another notable example is the MPI-ESM-LR model's performance, which presents a high sea ice albedo and a low annual amplitude of sea ice. According to Wild et al. [45], the bias in LW radiation depends on the climate conditions and is not geographically uniform, with a higher (smaller) bias in cold and dry climates (warm and humid climates) with low (high) downward LW radiation emission.
Stroeve et al. [1], Knutti and Sedlacek [46] and Li et al. [44] assessed the evolution between CMIP3 and CMIP5 and showed an improvement in Arctic sea ice prediction and radiation in CMIP5. Nevertheless, a better representation of sea ice also depends on improvements in the representation of the Arctic sea ice albedo, clouds, cloud-radiation effects and feedback processes. Figure 2 shows a Taylor diagram for the September, March and annual climatology of SIE. This diagram is a useful tool to compare observed and simulated data in terms of correlation coefficient, RMS difference and standard deviation: a shorter distance between a model and REF (the observations) in a Taylor diagram indicates better model performance. For the annual values (black), all six models have a correlation with the observations higher than 0.96, while for March and September the correlation coefficients are low, as expected. For all models (except MPI-ESM-LR) the correlation in March is smaller than 0.6. The annual cycle of SIE is well captured because the seasonal cycle was well represented by all the models, as shown in Figure 1. However, when looking at separate months the correlation drops, since within a single-month time series only the interannual variability is being evaluated. For the month of March, we suggest the previously described scheme (radiation effect) to explain the low correlation between observations and models. To understand the low correlation in September, we suggest a relation with sea ice thickness. According to Shu et al. [47] and Stroeve [1], the sea ice thickness simulated in the CMIP5 models is too thin, resulting in enhanced sea ice melt and an underestimation of SIE in summertime, as shown in Figure 1. BESM-OA V2.3 agrees with Shu et al. [47], showing an underestimate of sea ice thickness, notably greater in the marginal sea ice zones (not shown).
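The three quantities summarized by a Taylor diagram can be computed directly from paired model and observed series, as in the minimal sketch below (illustrative toy data, not the authors' code).

```python
# Taylor diagram statistics: correlation, normalized standard deviation, and
# centered (bias-removed) RMS difference between a model and an observed series.
import numpy as np

def taylor_stats(model, obs):
    m, o = np.asarray(model, float), np.asarray(obs, float)
    corr = np.corrcoef(m, o)[0, 1]
    std_ratio = m.std() / o.std()
    crmsd = np.sqrt(np.mean(((m - m.mean()) - (o - o.mean())) ** 2)) / o.std()
    return corr, std_ratio, crmsd

obs = np.array([15.2, 15.8, 14.9, 10.1, 7.0, 6.1, 9.5, 13.0])  # toy SIE values (10^6 km^2)
mod = np.array([16.0, 16.5, 15.1, 10.8, 6.2, 5.5, 9.9, 13.8])
print(taylor_stats(mod, obs))
```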
Spatial Pattern
Several studies have compared the observed SIE variation with climate models and CMIP data sets using a seasonal cycle or time series approach [1] [6] [41]. This type of analysis is important for assessing a model's ability to predict SIE. However, when considering only SIE, the information related to the spatial pattern is lost. Analyzing spatial patterns avoids overconfidence in the predictions and excludes the compensation of errors of opposite sign in different regions [48]. Cavalieri and Parkinson [49] also show the importance of evaluating the Arctic Ocean by regions. Using satellite data to analyze sea ice variability and trends from 1979 to 2010, the authors found that the trends in nine distinct Arctic regions are not homogeneous, which indicates the complex, regional nature of the Arctic climate system.
Figure 3 shows the September-average Sea Ice Cover (SIC) observations over the study area. The spatial difference between the modeled and observed September-average SIC is shown in Figure 4. September was chosen because it is commonly the month when sea ice reaches its annual minimum over the Arctic.
Despite the obvious inter-model differences depicted in Figure 4, there is reasonable agreement between all the models. Most models tend to represent SIC well in the central Arctic, whereas the opposite occurs in the marginal ice zones. There is a general tendency to underestimate SIC in areas such as the Beaufort Sea and the East Siberian Sea (except for MPI-ESM-LR and HadCM3), suggesting a systematic model error in this region. However, NCAR-CCSM4 overestimates SIC in both the Laptev and Kara Seas (Figure 4). This may reflect the NCAR model's overestimation observed for September and shown in Figure 1.
The SIC in the region between Canada and Greenland is well represented by the BESM-OA V2.3, GFDL-CM2.1 and MPI-ESM-LR models, while SIC between East Siberia and the Barents Sea is well simulated by only some of the models (Figure 4). It is also clear that the amplitude of the annual cycle is smaller than in both the other models and the satellite observations (Figure 1). This reveals a certain deficiency in representing the physical processes coupling ocean, atmosphere and sea ice, despite the good representation during summer.
A detailed analysis of the simulated SIC by regions using climate models is justified for both economic and scientific reasons. Economically, as a result of sea ice loss, maritime transport may gain two new routes with the opening of the "Northwest Passage" in northern Canada and Greenland and the "Northeast Passage" between northern Russia and Norway [14]. This is considered a hot topic because these passages could lead to faster and cheaper ship transport between Europe and North America. Scientifically, this is relevant because of the importance of properly accounting for the dynamical and thermodynamical processes that shape SIC over the Arctic region.
It is instructive to compare Figure 1 with Figure 4 by looking only at September SIE for the GFDL-CM2.1 and CanCM4 models (Figure 1). This can induce overconfidence in how well the models agree, since the September SIE in both models is approximately 5.6 × 10^6 km^2. However, when we look at the spatial patterns in Figure 4, we find quite different sea ice distributions. The CanCM4 model shows a large area of strongly negative values (especially between Greenland and Canada), whereas GFDL-CM2.1 shows a small area of strongly negative values only in parts of the Beaufort Sea and the East Siberian Sea. Thus, even climate models that show a good performance in simulating SIE during summertime do not necessarily simulate a realistic spatial sea ice distribution.
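The point that equal SIE totals can hide different spatial patterns can be illustrated with a simple sketch (toy fields and assumed variable names): two fields with the same number of ice-covered cells, and hence the same SIE, can still disagree over a large area.

```python
# Area where model and observations disagree on ice presence (SIC > 15%),
# even though both fields yield the same total SIE.
import numpy as np

def disagreement_area(sic_model, sic_obs, cell_area, threshold=0.15):
    model_ice = sic_model > threshold
    obs_ice = sic_obs > threshold
    return float(np.sum(cell_area[model_ice ^ obs_ice]))  # XOR: ice in one field only

sic_obs   = np.array([[0.8, 0.7], [0.0, 0.0]])   # two ice-covered cells
sic_model = np.array([[0.8, 0.0], [0.0, 0.9]])   # also two, but in different places
area      = np.full((2, 2), 1.0e3)
print(disagreement_area(sic_model, sic_obs, area))  # -> 2000.0 km^2 despite equal SIE
```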
Minimum of Sea Ice Extent
Changes in ice extent due to the seasonal cycle are so large that they tend to obscure any signal related to interannual variability. To remove the strong seasonal cycle, we again focus specifically on September, since it shows the annual minimum of SIE. According to Doescher et al. [5], the ability to identify real changes in the Arctic climate system increases when we focus on individual seasons. In this context, Figure 5 shows the SIE time series of September averages from 1982 to 2014. Arctic sea ice has declined sharply during the last three decades, with record low summer ice cover in September 2007 and 2012, as illustrated in Figure 1. Here, we show the time series of SIE and analyze the ability of the models to represent recent changes. Arctic SIE averages from 1980 to 2010 (Figure 5) show a noticeable decrease in Arctic SIE. September SIE simulated by BESM-OA V2.3 (satellite observations) between 2000 and 2010 was 4.2 × 10^6 km^2 (5.7 × 10^6 km^2), while between 1980 and 1990 it was 6 × 10^6 km^2 (7.1 × 10^6 km^2), a reduction of approximately 30% (19.8%) in SIE. March SIE (not shown) also decreases with time, although at a slower rate than September. The minimum satellite record of SIE occurred in September 2012, with 3.6 × 10^6 km^2 against 4.5 × 10^6 km^2 in BESM-OA V2.3. In 2012, the satellite observations (GFDL-CM2.1) presented a decrease of 50.4% (42.3%) in SIE relative to the 1980s. Except for the GFDL-CM2.1 model, no other model was able to represent the observed 2012 minimum well. However, BESM-OA V2.3 and CanCM4 generated episodes of low SIE in September with a magnitude and behavior comparable to the low observed in 2012; these episodes of minimum SIE occurred in 2006 and 2002 for the BESM-OA V2.3 and CanCM4 models, respectively. According to Doscher et al. [5] and Holland et al. [50], such abrupt sea ice loss resulted from a complex interplay between the thermodynamics and dynamics of sea ice, ocean and atmosphere, and successful prediction requires careful initialization with ocean and sea ice conditions.
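For reference, the decadal reduction quoted above for the satellite observations follows from simple arithmetic on the two decadal means; the toy sketch below reproduces it (numbers taken from the text; rounding explains the small difference).

```python
# Percentage reduction in observed September SIE between the 1980-1990 and 2000-2010 means.
mean_1980s = 7.1   # 10^6 km^2, observed September mean, 1980-1990 (from the text)
mean_2000s = 5.7   # 10^6 km^2, observed September mean, 2000-2010 (from the text)
reduction = 100.0 * (mean_1980s - mean_2000s) / mean_1980s
print(f"{reduction:.1f}% reduction")   # ~19.7%, consistent with the ~19.8% quoted above
```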
Figure 6 illustrates the spatial distribution of the average September SIE (left) and the minimum September SIE found between 1980 and 2010 (right) for all the models evaluated in this work. This figure is intended to show the models' ability to represent the spatial pattern in episodes of low SIE, regardless of the year in which they occur.
Looking at the spatial patterns of the SIE climatological mean and of the minimum record, it is clear that the climate models reproduce the seasonal cycle of SIE (Figure 1 and Figure 2) better than they represent the minimum records. Only the GFDL-CM2.1 model presents a good spatial agreement of the minimum record with the observations. This could be explained by two main reasons. First, it was the only model that matched the spatial pattern of the observed minimum, which may indicate a better agreement with the observed meteo-oceanographic patterns (not studied here). Second, it may be related to a better representation of sea ice and feedback processes in the parameterizations of the GFDL-CM2.1 model. Two other models (BESM-OA V2.3 and CanCM4) also show a reasonable spatial agreement with the observations, although not as good as GFDL-CM2.1. These two models underestimate SIE but represent the SIE in the central Arctic region very well (Figure 6). The minimum record for BESM-OA V2.3 shows a deficiency near Greenland and to the north of Canada. We understand that this happens because of the overestimation of Sea Surface Temperature (SST) in that region by the BESM-OA V2.3 model (not shown). Although BESM-OA V2.3 and CanCM4 were able to capture the correct signature of the SIE minimum record, with a decrease in SIE followed by an increase in the following year (Figure 5), the correct estimation of the minimum SIE, in its timing, spatial pattern, area and process signatures, remains a challenge for the modeling community.
Due to the sea ice retreat in recent Septembers, the ice cover in the following spring tends to be thinner and thus more vulnerable to melting in summer. According to Doscher et al. [5], each record low SIE is followed by a partial recovery. Additionally, Tietsche et al. [48] suggest that a record minimum of SIE during a single September is reversible, as the sea ice-albedo mechanism is compensated by large-scale recovery mechanisms. According to Vihma [51], sea ice loss increases the heat flux from the ocean to the atmosphere in autumn and early winter. As a result, a local increase in air temperature, humidity and cloud cover is expected, reducing the stability of the atmospheric boundary layer.
Hunke et al. [52] evaluated the history of, and new directions in, sea ice modeling. The authors indicated some deficiencies in the dynamics (e.g. transport processes, dynamic coupling and mechanical redistribution) and in the thermodynamics (e.g. feedback processes and melt ponds), and suggested that improvements in sea ice prediction depend on improvements in the descriptions of the physical processes and characteristics, as well as on extending the models for Earth System Model simulations that include biogeochemistry. According to Flocco et al. [53] and Roeckner et al. [54], one of the processes poorly represented in sea ice models is the formation and evolution of melt ponds. Melt ponds affect the heat and mass balances of the sea ice cover, mainly by reducing the albedo by up to 20%. Consequently, the reduction in sea ice volume can reach 40%, leading to further sea ice melt. At the end of the melting season, melt ponds cover up to 50% of the sea ice surface. A better representation of the melt pond scheme would improve the sea ice simulation and is essential for accurate future sea ice projections.
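The albedo effect of melt ponds can be illustrated with a simple area-weighted mixing assumption (the albedo values below are illustrative and are not those of any specific model's melt pond scheme).

```python
# Effective surface albedo of an ice pack as a function of melt pond fraction,
# assuming a simple area-weighted mix of bare-ice and pond albedos.
def effective_albedo(pond_fraction, ice_albedo=0.65, pond_albedo=0.25):
    return (1.0 - pond_fraction) * ice_albedo + pond_fraction * pond_albedo

for f in (0.0, 0.25, 0.5):
    print(f"pond fraction {f:.2f}: albedo {effective_albedo(f):.2f}")
# With 50% pond cover the effective albedo drops from 0.65 to 0.45,
# i.e. a reduction of about 0.20 in absolute terms.
```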
Future Projection of Arctic Sea Ice
The long-term evolution of SIE in the northern hemisphere as simulated by BESM and the CMIP5 models, using RCP4.5 and RCP8.5, is shown in Figure 7. The simulations clearly show a decrease in SIE up to 2100 for all simulations and both RCPs; Arctic SIE declines with increasing radiative forcing in all models. The BESM-OA V2.3 control experiment (gray lines in Figure 7) reinforces that ice-free conditions only occur when external forcing from anthropogenic sources is included in the climate model simulations. These results are in agreement with Stroeve et al. [1].
For September, at the beginning of the series (2006 to the present day), the HadGEM2 model SIE values are close to the satellite observations. During March, the best representation of the observed data was obtained by the MIROC5 and BESM-OA V2.5 models. It is possible to observe a higher inter-annual variability in September than in March for all models, as well as in the satellite observations of the early period. Changes in inter-annual variability are important for sea ice prediction and for assessing the frequency of occurrence of extreme SIE anomalies.
It is noteworthy that the models show different tendencies for the months of maximum and minimum SIE. For March, the MPI-ESM-LR model presents the lowest values compared to the other models used here, whereas for September the lowest values are those of the BESM-OA V2.5 model. In general, when compared to the other CMIP5 models, the BESM-OA V2.3 model tends to overestimate SIE in both March and September, for both RCP simulations.
From the year 2040 onward, all models show a dramatic shrinking of SIE in the RCP8.5 scenario. This indicates a high sensitivity of the sea ice cover to an increase in atmospheric carbon dioxide. The GFDL-CM3 model clearly shows this abrupt decrease in SIE in the RCP8.5 scenario when compared to RCP4.5. In this case, the decline is so strong that at the end of the 21st century the SIE maximum (in March) is similar to the minimum SIE (in September) found at the beginning of the 21st century. If the GFDL-CM3 model is reasonably correct, it means that the Arctic could be ice-free even during the coldest season of the year shortly after 2100.
For September, ice-free conditions (defined as an SIE of less than 0.5 × 10^6 km^2) are obtained from 2020 in the CanESM2, BESM-OA V2.5 and HadGEM2-ES models under the RCP8.5 scenario. According to Chylek et al. [23], the addition of a land-vegetation model and a terrestrial-oceanic interactive carbon cycle to the coupled atmosphere-ocean system in the CanESM2 model improved the simulations, although it increased the overestimation of atmospheric warming after 1970. That explains the minimum values found here for the CanESM2 sea ice projections. Also focusing on the RCP8.5 scenario, most of the models show ice-free situations, or episodes, after 2045 for the month of September. The exceptions are BESM-OA V2.3 and NCAR-CCSM4; these two models are somewhat more conservative than the others, pointing to ice-free conditions starting only after 2060.
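Detecting the first ice-free September in a projected series is a simple thresholding operation under the definition used above; the sketch below uses invented data purely for illustration.

```python
# First "ice-free" September in a projected SIE series, using the 0.5 x 10^6 km^2 threshold.
import numpy as np

years = np.arange(2006, 2101)
# Invented declining series with noise, in units of 10^6 km^2
sie_sept = np.maximum(5.5 - 0.06 * (years - 2006) + np.random.normal(0, 0.3, years.size), 0.0)

ice_free_years = years[sie_sept < 0.5]
print(ice_free_years[0] if ice_free_years.size else "no ice-free September before 2100")
```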
It is expected that ice-free conditions will have strong effects on the global climate system through changes in both ocean and atmospheric circulations. It is known that sea ice loss amplifies the effects of radiative forcing through the albedo-sea ice feedback mechanism and cloud effects. It also affects the meridional and inter-hemispheric temperature gradients, which can affect mid-latitude circulation. However, the quantification of these effects remains unclear and requires improvements in global climate models.
Surface Temperature Anomalies
In this section, we compare the surface temperature anomalies (SAT) of BESM-OA V2.3 and BESM-OA V2.5, shown in Figure 8, to explain the differences in SIE between those versions.
Figure 8 shows SAT and total cloud cover for BESM-OA V2.3 and BESM-OA V2.5 under the future scenarios, for the period from 2006 to 2100. A marked warming at northern high latitudes is observed in both BESM versions, and it is notably stronger in the RCP8.5 simulation. This warming, called Polar Amplification (PA), occurs due to the increase in atmospheric greenhouse gas concentrations and is accompanied by an expressive reduction in SIE in both simulations (Figure 8). The relationship between rising air temperature and sea ice loss is evident and statistically robust [5]. PA is associated with several feedback processes, such as the ice-albedo, temperature, water vapor and cloud feedbacks. Most studies indicate that the ice-albedo feedback is the main contributor to PA [2] [3] [9]. However, Pithan and Mauritsen [55], using CMIP5 simulations, found that the major contribution to PA comes from air temperature feedbacks (as the surface warms, more energy is radiated back to space in lower latitudes than in the Arctic region).
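Band-averaged anomalies such as those plotted for 75N-90N are typically computed as area-weighted means; the sketch below (assumed variable names, cosine-of-latitude weights as a simple proxy for grid-cell area on a regular latitude-longitude grid) shows one way to do this.

```python
# Area-weighted mean of a 2-D anomaly field over a latitude band, e.g. 75N-90N.
import numpy as np

def band_mean(field, lats, lat_min, lat_max):
    """field: 2-D (lat, lon) anomaly field; lats: 1-D latitudes in degrees."""
    band = (lats >= lat_min) & (lats <= lat_max)
    weights = np.cos(np.deg2rad(lats[band]))[:, None] * np.ones(field.shape[1])
    return float(np.average(field[band, :], weights=weights))

lats = np.linspace(-90, 90, 91)
field = np.random.normal(2.0, 0.5, size=(91, 180))   # toy anomaly field (K)
print(band_mean(field, lats, 75, 90))
```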
When comparing the warming between the two BESM versions, we observe higher values in BESM-OA V2.5, particularly at high latitudes between 75°N and 90°N. As a result of these warming discrepancies, the SIE is lower in BESM-OA V2.5 than in BESM-OA V2.3 (Figure 7).
The microphysics of Ferrier et al. [31] and the new surface scheme based on Jimenez and Dudhia [32] used in BESM-OA V2.5 produced an improvement in the representation of precipitation, wind, air temperature, humidity, and the energy balance at the top of the atmosphere (not shown). A better representation of these variables exerts a strong influence on the coupled ocean-atmosphere-sea ice simulation and on teleconnections with higher latitudes. The microphysics adopted in BESM-OA V2.5 produced a decrease in the total cloud cover in the Arctic region (Figure 8). This allows the ocean to absorb more heat from the incident shortwave radiation and thus contributes to a greater melting of the sea ice. The decrease in total cloud cover and the consequent strong increase in SAT are consistent with the SIE reduction shown in Figure 7.
According to Jiang et al. [56], clouds (in both ice and liquid forms) are important modulators of the climate system and are involved in several feedback processes that affect the global atmospheric circulation and the energy budget. Improving the cloud microphysics in coupled climate models results in better climate predictions and reduces the uncertainties in future projections. As recently pointed out by Eyring et al. [57], understanding the role of clouds in the general atmospheric circulation and in climate sensitivity, and assessing the response of the cryosphere to a warming climate, are among the greatest challenges for CMIP6.
Conclusion
In this work, we evaluated the decadal simulations and assessed the future climate projections (2006-2100) generated by the BESM-OA and CMIP5 models. The BESM-OA V2.3 results for the seasonal cycle are consistent with the satellite observations and the other CMIP5 models; however, almost all models tend to overestimate SIE in March relative to the observations. Based on our results and on [42] [43] [45], we suggest that the winter Arctic SIE bias is related to an LW radiation bias in climate models. The spatial patterns of the climatological averages at the end of the melting season revealed a deficiency in capturing the correct signature of the minimum SIE record, as well as a systematic model error between the Beaufort Sea and East Siberia (Figure 4 and Figure 6). The future scenarios show an abrupt shrinking of the sea ice and ice-free summer conditions from the year 2045 onwards for both RCP projections. This is a result of the internal climate response to the changes in radiative forcing over the years. Polar Amplification and feedback processes explain the rapid Arctic sea ice loss, despite the uncertainties and limitations of global climate models. The sea ice responses differ among the CMIP5 models due to differences in the ocean, atmosphere and sea ice conditions, as well as in the coupling between the components of each model. Future progress in sea ice modeling is essential and requires advances in the parameterizations of climate feedback processes. The climate in the Arctic region will change even further and will induce complex changes in the global climate, which will in turn induce further changes in the Arctic climate. In synthesis, we can say that the Arctic region and its climate are far more complex than forecast.
Figure 1. Climatology of SIE (1980 to 2010) in the northern hemisphere simulated by BESM-OA V2.3, the CMIP5 models and observations.
Figure 2. Taylor diagram of the September, March and climatological annual cycles of SIE for the period 1980-2012. The x-axis and y-axis show the normalized standard deviation. The correlation coefficient between the observations and each model is given by the azimuthal position. The centered RMS difference between the simulated and observed values is proportional to their distance from one another.
Figure 3. Arctic study area and September SIC climatology (1980-2010) from satellite observations (shaded colors). Dark gray and orange lines refer to the 2007 and 2012 minimum events, respectively.
Figure 5. Arctic sea ice extent time series for September from 1980 to 2014 for the CMIP5 models and observational data.
Figure 6. Spatial distribution of the September SIE average (left) and the lowest September SIE values found between 1980 and 2010 (right), for all CMIP5 models evaluated in this work.
Figure 7. Time series of modeled Arctic SIE in September and March from 2006 to 2100, using Representative Concentration Pathways RCP4.5 (solid lines) and RCP8.5 (dashed lines). Black lines are the satellite observations and gray lines refer to the control run of BESM-OA V2.3.

During the first 30 years of the series, the values from both RCPs are very similar in both March and September. For March, the SIE in the first years of the 21st century ranges from 11.8 × 10^6 km^2 (MPI-ESM-LR) to 18.8 × 10^6 km^2 (BESM-OA V2.3) in the RCP4.5 simulation; for the RCP8.5 simulation it varies between 11.7 × 10^6 km^2 (MPI-ESM-LR) and 18.4 × 10^6 km^2 (BESM-OA V2.3). For September, SIE values vary from approximately 2 × 10^6 km^2 (BESM-OA V2.3) to 6.7 × 10^6 km^2 (MIROC5). Already during these early years, it is possible to observe the discrepancy between the two different BESM configurations. For all RCP simulations, BESM-OA V2.3 and BESM-OA V2.5 show higher (lower) SIE values during March (September) when compared to the other models used in this work. The models reveal strong amplitudes in SIE between the different seasons. Both BESM simulations clearly present the highest values in March for all years, whereas for September the highest SIE was found in the MIROC5 model (a similar amplitude was observed in the MPI-ESM-LR model).
Figure 8. Surface temperature anomalies (SAT) and total cloud cover from January 2006 to December 2100 for BESM-OA V2.3 and BESM-OA V2.5. For SAT, the red lines represent the average for latitudes between 75°N and 90°N. Green and blue lines are for latitudes between 45°N and 75°N and between 25°N and 45°N, respectively. Latitudes between 0°N and 90°N are represented by the black lines.
Distributed Ledger Infrastructure to Verify Adverse Event Reporting (DeLIVER): Proposal for a proof-of-concept study
Background: Adverse drug event reporting is critical for ensuring patient safety; however, numbers of reports have been declining. There is a need for a more user-friendly reporting system and for a means of verifying reports that have been filed. Objectives: This project has two main objectives: 1) to identify the perceived benefits and barriers in the current reporting of adverse events by patients and healthcare providers and 2) to develop a distributed ledger infrastructure and user interface to collect and collate adverse event reports to create a comprehensive and interoperable database. Methods: A review of the literature will be conducted to identify the strengths and limitations of the current UK adverse event reporting system (the Yellow Card System). If insufficient information is found in this review, a survey will be created to collect data from system users. The results of these investigations will be incorporated into the development of a mobile and web app for adverse event reporting. A digital infrastructure will be built using distributed ledger technology to provide a means of linking reports with existing pharmaceutical tracking systems. Results: The key outputs of this project will be the development of a digital infrastructure including the backend distributed ledger system and the app-based user interface. Conclusions: This infrastructure is expected to improve the accuracy and efficiency of adverse event reporting systems by enabling the monitoring of specific medicines or medical devices over their life course while protecting patients' personal health data.
Background
It is essential that adverse drug reactions and medical device events are reported, to provide accurate and comprehensive warning labels and to restrict or remove products that present too high a risk to patient safety [1]. However, the number of adverse events reported in the UK has been declining [2]. Mitigating the barriers associated with spontaneous reporting systems, and ensuring that those reports are useful, is an essential component of ensuring patient safety and reducing the costs of post-market surveillance [3]. Therefore, this project aims to develop a distributed ledger infrastructure that will redesign the adverse event reporting and collecting process to improve patient care and safety.
Currently, the UK uses the MHRA's Yellow Card Scheme (YCS) for adverse event reporting.Reports submitted to the MHRA are currently stored in a database, in compliance with GDPR requirements [4,5].Access to deidentified data can be requested under the Freedom of Information Act, but is otherwise only accessible to MHRA staff and, in some circumstances, health researchers.Confidential information is only made available if necessary to achieve specific health purposes, with conditions in place to protect patient information as best as possible [5].
The YCS has contributed to the identification of serious safety issues [6]; however, several usability issues with the public interface have been identified [7].The YCS was redesigned in 2012 based on recommendations [7] and launched an app in 2015 to help make reporting easier [8].An international Med Safety app has been developed (with collaboration from the MHRA) as part of the Innovative Medicines Initiative WEB-Recognising Adverse Drug Reactions (WEB-RADR) project [9] to provide a platform for members of the public to report adverse events.These reports are then transmitted to the relevant national databases [9,10].In the UK, the WEB-RADR2 project is ongoing, and aims to facilitate information sharing between healthcare and regulatory systems (by mapping terminologies and connecting with electronic health records) and to provide its features through application programming interfaces (APIs); tools that will eventually replace the Yellow Card website [11][12][13].
However, while the WEB-RADR2 project work packages include improving connectivity, their backend system (Vigilance Hub) only appears to allow the tailoring of the app to suit the needs of specific regulators (i.e. to manage aesthetics, translations, reporting forms, news, lists of authorised products, and app users) [14].Therefore, there is still a need for an innovative infrastructure to support the improvements made in the reporting interfaces by providing a means of verification.Without this infrastructure, the benefits of improved interfaces will not be fully realised.
Given the regulatory changes associated with Brexit, a decentralized system with simplified information sharing and consent procedures could have significant economic value. The potential benefit of blockchain for the medical supply chain, including pharmacovigilance specifically, has been recognised [15]. It could reduce the time and effort needed to trace medicines associated with adverse events, e.g. to link adverse events reported in the UK with medicines produced in the EU. A distributed ledger infrastructure would enable parties throughout Europe (and throughout the supply chain) to add information to the database, which can then be made accessible to all other relevant parties, avoiding potential regulatory and data security issues that may otherwise occur.
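As a purely conceptual sketch of how a distributed ledger can make supply chain and adverse event information tamper-evident (this is not Farmatrust's or the MHRA's implementation, and the identifiers are hypothetical), consider an append-only, hash-chained log in which each record commits to the previous one.

```python
# Conceptual append-only, hash-chained ledger: later tampering with any earlier
# record (e.g. a dispensing event or an adverse event report) changes its hash and
# breaks the chain, so the tampering is detectable.
import hashlib, json, time

class Ledger:
    def __init__(self):
        self.blocks = []

    def append(self, payload):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        self.blocks.append(block)
        return block["hash"]

ledger = Ledger()
ledger.append({"event": "dispensed", "pack_serial": "GTIN-0000-SN-12345"})      # hypothetical ID
ledger.append({"event": "adverse_event_report", "pack_serial": "GTIN-0000-SN-12345",
               "report_id": "AER-001"})                                          # hypothetical ID
print(len(ledger.blocks), ledger.blocks[-1]["prev_hash"][:12])
```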
There is a large and growing global pharmacovigilance market [16].In 2019, spontaneous reporting -such as reports made to the YCS -had the biggest pharmacovigilance market share.It is a cost-effective means of detecting adverse events and is commonly used by both regulatory agencies and pharmaceutical companies [17].Therefore, there is significant potential value in improving systems of spontaneous reporting and in improving the traceability, verification, and quality management of that data.
Rationale
There is a need to improve confidence in shared data by considering provenance, traceability, verification, and quality management.The need is not to develop a new technology but to integrate existing technologies to develop a novel system to improve the specific challenge identified: the limitations and inefficiencies of the UK's current Yellow Card Scheme (YCS).
One of the key issues is a lack of reporting, which could be due in part to the usability problems of the current system. The only previous usability study identified issues such as difficulties navigating and using the online form and overly complex language, which increased the effort required to submit a report [7]. The YCS was redesigned on the basis of this study, but no further usability evaluations were identified. For members of the public and healthcare professionals to report more adverse events, they need to understand how and what to report, be capable of using the reporting system, and be motivated to report. Taking full advantage of the adverse event reporting that does occur requires a more interoperable system that can verify the provenance of adverse event report data by linking with the pharmaceutical and medical device supply chains.
This project has significant potential value to many stakeholders.Economically, there is value for manufacturing and pharmaceutical companies.The distributed ledger infrastructure would make it easier to track and manage individual drugs and devices, reducing costs of more labour-intensive tracking processes.It could also potentially help companies avoid costs associated with adverse events by identifying and addressing any harmful side effects of the drug or device earlier.In the longer-term, the infrastructure could be used to identify and track causes of adverse events, and pinpoint whether the problem was due to the drug itself or an issue during production, such as contamination, mixed labels, or an inaccurate amount of the active ingredient [18].It could also support common manufacturing problems related to documentation [19].This would help companies document issues and engage in continuous process improvement, which would also help avoid costs associated with adverse events.An improved reporting system would also benefit patients, clinicians, and pharmacists by reducing the time and effort needed to submit adverse event reports, which is expected to help increase the number of reports submitted.This offers significant public health value for patients; identifying and addressing harmful side effects earlier could reduce the number of people who will suffer from them.It also has the potential to improve patient empowerment by increasing the clarity and ease of reporting.If patients feel confident in reporting side effects that they consider unacceptable and demanding alternatives, this empowerment could help drive innovation in the pharmaceutical market.Additionally, it would help reduce the significant economic costs associated with adverse event-related hospitalizations for the NHS.
A more effective and efficient system for managing post-market surveillance data would have significant value for regulatory authorities and companies conducting their own post-market surveillance.It would help reduce costs associated with conducting post-market surveillance by reducing the effort needed to collect and compile all of the relevant data.It could also provide new insights to inform guidance on the correct use of medicines and drugs.
There are also potential environmental benefits from making surveillance more efficient.Identifying and verifying problems with a medicine or medical device earlier could reduce the waste associated with producing and dispensing drugs that are ineffective or harmful.
Aims and Objectives
This project will optimise the integration of the medicines and medical devices supply chain to provide a means of verifying the provenance, and improving the quality, of adverse event reports (AERs) made by UK patients and healthcare professionals. It aims to address two key problems with the current system: lack of reporting and lack of verification. The first problem will be addressed by developing an easy-to-use app interface for adverse event reporting; this could replace the current data collection while replicating the types of data collected. The second problem will be addressed by developing an infrastructure that integrates adverse event reporting with the entire pharmaceutical supply chain, enabling report provenance to be verified and associated with specific medical products. Together, these solutions will optimize the quality of the data and analyses that can be derived from adverse event reports, to better prevent fraud and identify potential health risks early. This project will improve on the current adverse event reporting system by developing an infrastructure that supports interoperability and can integrate information from different sources to trace specific medicines and medical devices over their lifespan, and thus corroborate reports of adverse events and enable more rapid action to be taken when adverse events are identified. Our project will be more user-friendly and accessible than the YCS and will disrupt the existing systems (including the YCS and WEB-RADR2) by decentralizing them to provide better information flow, to increase security, and, crucially, to allow the provenance of adverse event reports to be validated. The key objectives of this project are to: 1) identify the perceived benefits and barriers in the current reporting of adverse events by patients and healthcare providers; 2) develop a distributed ledger infrastructure and user interface to collect and collate adverse event reports to create a comprehensive and interoperable database.
Study Design
Table 1. PICO

Population: The primary target customers and end users chosen for this project are UK healthcare professionals (HCPs) and members of the public. They were chosen because significant barriers to the usability and effectiveness of the UK's spontaneous reporting Yellow Card Scheme have been identified for both patients and healthcare professionals, including: complex language, lack of feedback, lack of knowledge about the criteria for reporting, and lack of time [20][21][22][23].

Intervention: A new mobile and web app user interface will be developed, which is intended to increase user reporting of adverse events. A distributed ledger infrastructure will also be developed to provide a means for regulatory agencies (like the MHRA) to verify the provenance of the events reported to the app system by linking them to the medicines and medical devices supply chain.

Comparator: No comparator.

Outcomes: A prototype of the app and the distributed ledger infrastructure.
Data collection
To address the first objective, a review of the literature will be conducted to identify specific benefits and barriers in the current reporting of adverse events by patients and healthcare providers in the UK.If insufficient information is found, a survey will be developed and conducted to collect user feedback about the current system of reporting.
To inform the development of the digital infrastructure, literature and reports relating to the current operation of the YCS will also be investigated.This will help to ensure that problems not identified by patients and healthcare providers are also detected and can be addressed in the design of the new system.
Digital Infrastructure Development
A foundational level, based on distributed ledger technology, will form the infrastructure of the system, upon which applications can be built and provided. Farmatrust's blockchain solutions have been used to provide multi-site, multi-stakeholder, end-to-end supply chain tracing of pharmaceutical products for governmental, regulatory, hospital, and industry organizations. This experience and previous developmental work will provide a strong base on which to develop a similar end-to-end supply chain tracing system that links adverse event reports with existing pharmaceutical tracking systems.

The current backend of the adverse event reporting system will be examined to identify inefficiencies. The core infrastructure will be developed using distributed ledger technology. The specific solution architecture to be used will be determined at this point, after further examination of the needs, inputs, and current limitations of the adverse event reporting system. A formal evaluation framework will be used to guide the development and ensure that the design choices fit the specific needs of the situation [24]. Interoperability with current and future databases and with drug and medical device supply chain systems will be ensured by using the Fast Healthcare Interoperability Resources (FHIR) standard. This aspect of the project will also explore the options for governance processes to admit and remove actors from the system.
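To illustrate what an interoperable report payload might look like, the sketch below shows a simplified adverse event record loosely modelled on the FHIR AdverseEvent resource; the field values are invented, and a real implementation would need to conform to the full FHIR specification and agreed terminologies.

```python
# Simplified, illustrative adverse event payload loosely modelled on FHIR AdverseEvent.
# Field names and values are examples only, not a validated FHIR instance.
import json

report = {
    "resourceType": "AdverseEvent",
    "status": "completed",
    "actuality": "actual",
    "occurrenceDateTime": "2023-05-14",
    "suspectEntity": [{"instance": {"reference": "Medication/amoxicillin-500mg"}}],
    "note": [{"text": "Rash and itching starting two days after the first dose."}],
    # pseudonymous subject reference so no direct patient identifiers travel with the report
    "subject": {"reference": "Patient/pseudonym-7f3a"},
}
print(json.dumps(report, indent=2))
```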
User Interface Development
Alongside the development of the system infrastructure, a mobile and web app will be developed to provide an easy-to-use and generally accessible interface for patients and healthcare providers to use to report adverse events to the system.This development will take into account the benefits and barriers identified in the literature and from user research.
Patient and Public Involvement
In line with the principles of user-centred design [25,26], members of the public and healthcare professionals will be recruited to collaborate on the development of the adverse event reporting app, to ensure that it is user-friendly and addresses their needs.
Ethics and Dissemination
The infrastructure will be built to comply with all General Data Protection Regulation (GDPR) and the British Data Protection Act (DPA 2018) requirements [27,28].Potential ethical issues relating to the development of a digital infrastructure to record and verify personal and sensitive medical information will be identified from the literature and discussions with the public representatives.
If user research is conducted, a participant information sheet will be produced and provided to participants upon recruitment before getting their informed consent.This will explain the study and detail their rights with respect to withdrawing and having their data removed.Data protection procedures will also be established and made clear.
A paper detailing the methods and results of this project will be submitted to academic peer-reviewed journals, conferences, and clinical meetings.The public representatives will also be consulted about the dissemination of the results of the study to the public in generally accessible formats.
RESULTS
The key output of this project will be the development of a digital infrastructure for a new system of adverse event reporting, comprising the backend distributed ledger system and a mobile and web app user interface. The novel user interface will help to address the challenge of increasing adverse event reporting by patients and healthcare professionals. The distributed ledger system will provide a means of verifying this information using zero-knowledge proofs, by tracking the medicine or medical device in question to confirm that the correct item was delivered to and used by the intended recipient for the correct purpose.
DISCUSSION
The infrastructure developed will be able to use zero-knowledge proofs to verify the provenance of adverse event reports and reduce risks to patient confidentiality and data privacy. While data security and privacy concerns can never be completely avoided, these methods should provide a more secure system for confidential patient data. The decentralized nature of the system means that specific medicines or medical devices can be traced over their life course and reported adverse events can be linked to the specific product without compromising the patient's personal health data. This means that individual adverse event reports can be verified by checking that the medicine or device in question has actually been dispensed, and to the intended recipient. The traceability of specific products enables the system to highlight any discrepancies in an adverse event report. A more user-friendly interface and a means of verifying reports are expected to increase the ease of use and efficiency of reporting, and the quality and quantity of the adverse event data reported.
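The following sketch is not a real zero-knowledge proof; it only illustrates, with a salted hash commitment, the weaker idea that a report can be matched against a dispensing record without the verifier seeing the patient's identity. The identifiers are hypothetical.

```python
# Hash-commitment illustration: the ledger stores a commitment to (pack serial,
# patient pseudonym) at dispensing time; an adverse event report carrying the same
# commitment can be matched without revealing personal data to the verifier.
import hashlib, secrets

def commit(pack_serial, patient_pseudonym, salt):
    return hashlib.sha256(f"{pack_serial}|{patient_pseudonym}|{salt}".encode()).hexdigest()

salt = secrets.token_hex(16)                                        # held by pharmacy/patient
dispense_commitment = commit("GTIN-0000-SN-12345", "pseudonym-7f3a", salt)   # on the ledger

# Later, the adverse event report includes the same commitment
report_commitment = commit("GTIN-0000-SN-12345", "pseudonym-7f3a", salt)
print("report provenance verified:", report_commitment == dispense_commitment)
```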
Anticipated Impact
Estimates of the cost of preventable drug-related adverse events in the UK are in the range of £400-500 million a year [29][30][31]. A digital system that operates more efficiently, at lower cost, and increases the number of adverse events reported could save money by identifying problems earlier. A more efficient system that identifies and verifies adverse events earlier could also reduce waste. By reducing inefficiencies, i.e. reducing the labour-intensive management of the existing adverse event reporting system, production of suspect drugs or devices could be paused earlier to await investigation. This would avoid the production and shipping of many drugs or devices that would otherwise have been deployed and potentially recalled, reducing waste. The economic savings for pharmaceutical and manufacturing companies would free up money for increased investment in industrial digitalisation research, as less money would be needed to deal with adverse events.
In addition to economic savings and reduced waste of resources, the proposed distributed ledger infrastructure will be open, interoperable, and able to provide zero knowledge proofs, thus avoiding issues with data privacy and security. As patients are encouraged to report all side effects, even known ones [32], to enable a more accurate understanding of the prevalence of side effects of drugs, this system will have the potential to capture a very large amount of real-world data. The openness, access, and accountability of the infrastructure will be a mechanism to enable collaborations and interaction between academia, healthcare systems, the established manufacturing sector, and smaller, start-up digital technology companies.
Risks
Given the level of innovation, the details of technical implementation will depend on factors such as interoperability gateway configurations, the computational capacity required, and adoption by end users. This risk will be mitigated by a development approach that uses the FHIR standard and user research to identify needs and barriers.
There are also potential risks from a healthcare perspective. Healthcare providers may be unwilling to report if they are concerned about disciplinary action; however, an anonymous and easy-to-use system should enable reporting to be incorporated into routines in healthcare environments without personal risk. Flaws in the system could potentially expose patients to higher-risk medicines and medical products; this will be mitigated by careful design and testing before promoting the system for implementation, and by the use of zero knowledge proof encryption.
Future Directions
In this project, the infrastructure developed will be designed to replace the current YCS operated by the MHRA. The main barrier to implementing our distributed ledger infrastructure is whether the MHRA will accept the change in system. To verify the provenance of adverse event reports, it will be necessary to link the reports with the pharmaceutical supply chain. Because barcode serialization technology is already used to label and track individual packets of medicine or medical devices [33], the Farmatrust system will provide the technology to link adverse event reports with the serialized medicine or device. However, to do this, access will have to be granted via a wholesaler who can access the National Medicines Verification System (NMVS, or SecurMed in the UK) database that tracks the pharmaceutical supply chain [34].
Future studies will be needed to test the feasibility, usability, and efficacy of the system. As the front-end and back-end components could theoretically be implemented separately, separate evaluations could provide useful information on their independent impacts on adverse event reporting and pharmacovigilance. In the longer term, this project could easily be expanded and adapted for global use. It could provide a means of linking data across countries. Applications could be added to address the needs of specific stakeholders (e.g. pharmaceutical and manufacturing companies) to identify relevant issues from the adverse event reports and enable continuous improvements in products and processes.
There are many applications beyond verifying adverse event reports that our infrastructure could support. For instance, because the system links adverse event reports with the pharmaceutical supply chain, it could be used to trace individual packets of medicine or medical devices. This could be very valuable in the case of recalls: specific packets or devices could be tracked and information about their distribution and dispensation provided almost instantly. Alerts could then be sent to wholesalers and recipients of the medicines or devices. The number of potential applications provides an opportunity for long-term growth and productivity, and these applications (e.g. the particular method used to check the report against the supply chain database and create a report) can be protected with patents. After the feasibility study, we will aim to market this solution globally.
Double Lobed Radio Quasars from the Sloan Digital Sky Survey
We have combined a sample of 44984 quasars, selected from the Sloan Digital Sky Survey (SDSS) Data Release 3, with the FIRST radio survey. Using a novel technique where the optical quasar position is matched to the complete radio environment within 450", we are able to characterize the radio morphological make-up of what is essentially an optically selected quasar sample, regardless of whether the quasar (nucleus) itself has been detected in the radio. About 10% of the quasar population have radio cores brighter than 0.75 mJy at 1.4 GHz, and 1.7% have double lobed FR2-like radio morphologies. About 75% of the FR2 sources have a radio core (>0.75 mJy). A significant fraction (~40%) of the FR2 quasars are bent by more than 10 degrees, indicating either interactions of the radio plasma with the ICM or IGM. We found no evidence for correlations with redshift among our FR2 quasars: radio lobe flux densities and radio source diameters of the quasars have similar distributions at low (mean 0.77) and high (mean 2.09) redshifts. Using a smaller high reliability FR2 sample of 422 quasars and two comparison samples of radio-quiet and non-FR2 radio-loud quasars, matched in their redshift distributions, we constructed composite optical spectra from the SDSS spectroscopic data. Based on these spectra we can conclude that the FR2 quasars have stronger high-ionization emission lines compared to both the radio quiet and non-FR2 radio loud sources. This is consistent with the notion that the emission lines are brightened by ongoing shock ionization of ambient gas in the quasar host as the radio source expands.
Introduction
Although we now know that the majority of quasars are, at best, weak radio sources, quasars were first recognized as a result of their radio emission. Over the decades a great deal of information has been accumulated about the radio properties of quasars. Generally speaking, roughly 10% of quasars are thought to be "radio-loud" (e.g., Kellermann et al. 1989, and references therein). The radio emission can be associated with either the quasar itself or with radio lobes many kiloparsecs removed from the quasar (hereafter we refer to these double lobed sources as FR2s, i.e., Fanaroff & Riley 1974 class II objects). Traditionally it was widely held that there was a dichotomy between the radio-loud and radio-quiet quasar populations, although more recent radio surveys have cast doubt on that picture (e.g., White et al. 2000; Cirasuolo et al. 2003, 2005). The advent of wide area radio surveys like the FIRST survey, coupled with large quasar surveys like SDSS, permits a more extensive inventory of the radio properties of quasars. The association of radio flux with the quasar itself (hereafter referred to as core emission) is straightforward given the astrometric accuracy of both the optical and radio positions (typically better than 1 arcsec). The association of radio lobes is more problematic since, given the density of radio sources on the sky, random radio sources will sometimes masquerade as associated radio lobes. In this paper we attempt to quantify both the core and FR2 radio emission associated with a large sample of optically selected quasars.
Our new implementation of matching the FIRST radio environment to its associated quasar goes beyond the simple one-to-one matching (within a certain small radius, typically 2′′), in that it investigates (and ranks) all the possible radio source configurations near the quasar. This also goes beyond other attempts to account for double lobed radio sources without a detected radio core, most notably by Ivezić et al. (2002), who matched mid-points between close pairs of radio components to the SDSS Early Data Release catalog. While this does recover most (if not all) of the FR2 systems that are perfectly straight, it misses sources that are bent. Even slight bends in large systems will offset the midpoint enough from the quasar position to cause a miss.
The paper is organized as follows. The first few sections ( § 2 through § 3.2) describe the matching process of the radio and quasar samples. The results ( § 4) are separated in two parts: one based on statistical inferences of the sample as a whole, and one based on an actual sample of FR2 sources. These two are not necessarily the same. The former section ( § 4.1 through 4.5) mainly deals with occurrence levels of FR2's among quasars, the distribution of core components among these FR2 quasars, and their redshift dependencies. All these results are based on the detailed comparison between the actual and random samples. In other words, it will tell us how many FR2 quasars there are among the total, however, it does not tell us which ones are FR2. This is addressed in the second part of § 4, which deals with an actual sample of FR2 quasars (see § 4.6 on how we select these). This sample forms a representative subsample of the total number of FR2 quasars we infer to be present in the initial sample, and is used to construct an optical composite spectrum of FR2 quasars. Section 4.9 details the results of the comparison to radio quiet and non-FR2 radio loud quasar spectra.
Optical Quasar Sample
Our quasar sample is based on the Sloan Digital Sky Survey (SDSS) Data Release 3 (DR3, Abazajian et al. 2005) quasar list, as far as it overlaps with the FIRST survey (Becker et al. 1995). This resulted in a sample of 44 984 quasars. In this paper we focus on the radio population properties of optically selected quasars.
Radio Catalog Matching
The radio matching question is not a straightforward one. By just matching the core positions, we are biasing against the fraction of radio quasars which have weak, undetected cores. Therefore, this section is separated into two parts, Core Matching and Environment Matching. The former is the straight quasar-radio positional match to within a fixed radius (3′′ in our case), whereas the latter actually takes the distribution of radio sources in the direct vicinity of the quasar into account. This allows us to fully account for the double lobed FR2 type quasars, whether they have detectable cores or not.
Faint Core Matches
In this section, we quantify the fraction of quasars that exhibit core emission. We can actually go slightly deeper than the official FIRST catalog, with its nominal 5σ lower threshold of 1.0 mJy, by creating 3σ lists based on the radio images and quasar positions. This allows us to go down to a detection limit of ∼ 0.75 mJy (versus 1.0 mJy for the official version).
Given the steeply rising number density distribution of radio sources toward lower flux levels, one might be concerned about the potential for an increase in false detections at sub-mJy flux density levels. The relative optical to radio astrometry is, however, accurate enough to match within small apertures (to better than 3′′), reducing the occurrence of chance superpositions significantly. The surface density of radio sources at the 1 mJy level is not high enough to significantly contaminate the counts based on 3 arcsecond matching. The fraction of radio core detected quasars (RCDQ) out of the total quasar population hovers around the 10% mark, but this is a strong function of the radio flux density limit. It also depends on the initial selection of the quasar sample. The SDSS quasar selection is mainly done on their optical colors (Richards et al. 2002), but ∼3% of the quasars were selected solely on the basis of their radio emission (1397 out of 44 984). Looking at only those SDSS quasars which have been selected expressly on their optical colors (see the quasar selection flow-chart of Richards et al. 2002, Fig 1), there are 34 147 sources which have either a QSO CAP, QSO SKIRT, and/or a QSO HIZ flag set. For these, the relevant radio core detection fractions are 7.1% (2430) and 10.1% (3458) for the 5σ and 3σ detection limits, respectively (the binomial error on these percentages is on the order of 0.1%). These core fractions are higher for the 10 837 quasars (44 984 − 34 147) that made it into the SDSS sample via other means (1694, 15.6% and 1855, 17.1% for the 5 and 3σ catalogs). The higher core fractions are due to the large number of targeted FIRST sources that would not have made it into the sample otherwise, and to the greater likelihood of radio emission among X-ray selected quasars. Clearly, the initial quasar selection criteria impact the rate at which their cores are detected by FIRST. The results have been summarized in Table 1.

Fig. 1.-Plot of the fraction of quasars with a detected radio core as function of its flux density. The two lighter-grey curves are for the 5σ and 3σ catalogs respectively. The dashed dark-grey line represents an extrapolation of the expected number densities below our detection threshold. The limiting detection rates are: 9.2% (5σ), 11.8% (3σ), and 23.3% (extrapolation down to 0.1 mJy). The large dot represents the detection rate for the Spitzer First Look Survey (FLS) field (36 ± 8%). Matching is done within 3 arcseconds.
A more direct view of the flux limit dependence of the RCDQ fraction is offered by Fig. 1. An extrapolation of the data suggests that at 0.1 mJy about 20% of quasar cores will be detected. This extrapolation is not unrealistic, and even may be an underestimate: the extragalactic Spitzer First Look Survey (FLS) field has been covered by both the SDSS and the VLA down to 0.1 mJy levels (Condon et al. 2003). Out of the 33 quasars in the DR3 that are covered by this VLA survey, we recover 12 using the exact same matching criterion. This corresponds to a fraction of 36%, which carries an 8% formal 1σ uncertainty.
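The formal uncertainty quoted above can be reproduced with a one-line binomial error estimate. The short sketch below is a minimal illustration (not code from the paper); the helper name binomial_fraction is an arbitrary choice, and it recovers the ∼8% 1σ uncertainty on the 12/33 FLS detection fraction.

```python
import math

def binomial_fraction(k: int, n: int) -> tuple[float, float]:
    """Detection fraction k/n and its formal 1-sigma binomial uncertainty."""
    p = k / n
    sigma = math.sqrt(p * (1.0 - p) / n)
    return p, sigma

# FLS field: 12 of the 33 DR3 quasars covered by the deep VLA survey are detected.
p, sigma = binomial_fraction(12, 33)
print(f"fraction = {p:.2f} +/- {sigma:.2f}")  # ~0.36 +/- 0.08
```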
In fact, judging by the progression of detection rate in Fig. 1, one does not have to go much deeper than 0.1 mJy to recover the majority of (optically) selected quasars. The results and discussion presented in this paper, however, are only relevant to the subset of quasars with cores brighter than ∼ 1 mJy. It is this ∼ 10% of the total that is well-sampled by the FIRST catalog. This should be kept in mind as well for the sections where we discuss radio quasar morphology.
Environment Matching
The FIRST catalog is essentially a catalog of components, and not a list of sources. This means that sources which have discrete components, like the FR2 sources we are interested in, are going to have multiple entries in the FIRST catalog. If one uses a positional matching scheme as described in the last section, and then either visually or in an automated way assesses the quasar morphology, one will find a mix of core- and lobe-dominated quasars provided that the core has been detected. However, this mechanism is going to miss the FR2 sources without a detected core, thereby skewing the quasar radio population toward the core dominated sources.

Fig. 2.-Histograms of lobe opening angles of FR2 quasar candidates. Each box represents a different FR2 size bin, as indicated by its diameter in arcminutes. The light-grey histogram represents the candidate count, and in dark-grey is the corresponding random-match baseline. This baseline increases dramatically as one considers larger sources, while the FR2 candidate count actually decreases. Note both the strong trend toward linear systems (180 degrees opening angles), as well as the significant presence of bent FR2 sources. The bin size is 2.5 degrees.
Preferably one would like to develop an objective procedure for picking out candidate FR2 morphologies. We decided upon a catalog-based approach where the FIRST catalog was used to find all sources within 450′′ of a quasar position (excluding the core emission itself). Sources around the quasar position were then considered pairwise, where each pair was considered a potential set of radio lobes. Pairs were ranked by their likelihood of forming an FR2 based on their distances to the quasar and their opening angle as seen from the quasar position. Higher scores were given to opening angles closer to 180 degrees, and to smaller distances from the quasar. The most important factor turned out to be the opening angle. Nearby pairs of sources unrelated to the quasar will tend to have small opening angles, as will a pair of sources within the same radio lobe of the quasar, so we weighted against candidate FR2 sources with opening angles smaller than 50°. The chances of these sources being real are very small, and even if they are a single source, their relevance to FR2 sources will be questionable. We score each possible configuration with a weight w_{i,j} that is a function of the opening angle Ψ (in degrees) and of r_i and r_j, the distance rank numbers of the components under consideration. The closest component to the quasar has r = 0, the next closest r = 1, etcetera. This way, the program will give the most weight to the radio components closest to the quasar, irrespective of what that separation turns out to be in physical terms. Each quasar which has at least 2 radio components within the 450′′ search radius will get an assigned "most likely" FR2 configuration (i.e., the configuration with the highest score w_{i,j}). This, by itself, does not mean it is a real FR2. In fact, this procedure turns up large numbers of false positives. Therefore, as a control, we searched for FR2 morphologies around a large sample of random sky positions that fall within the area covered by FIRST. Since all of the results for FR2s depend critically on the quality of the control sample, we increased this random sample size 20-fold over the actual quasar sample (of 44 984). Given the area of the FIRST survey (∼9 000 sq. degrees) and our matching area (1/20th of a sq. degree), a much larger number of pointings would start to overlap itself too much (i.e., the random samples would not be independent of each other).
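A minimal sketch of this kind of pairwise ranking is given below. The specific scoring function used here (a simple opening-angle term down-weighted by the distance ranks) is an illustrative assumption, not the formula adopted in the paper; only the qualitative behaviour (favouring opening angles near 180 degrees, rejecting angles below 50 degrees, and preferring components close to the quasar) follows the description above. The function names and the flat toy coordinates are likewise illustrative.

```python
import numpy as np

def opening_angle(quasar, comp_a, comp_b):
    """Angle (degrees) subtended at the quasar by two radio components."""
    v1 = np.asarray(comp_a) - np.asarray(quasar)
    v2 = np.asarray(comp_b) - np.asarray(quasar)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def best_fr2_candidate(quasar, components):
    """Return the component pair with the highest (hypothetical) FR2 score.

    Components are first ranked by distance from the quasar (rank 0 = closest);
    pairs with opening angles below 50 degrees are rejected outright.
    """
    comps = sorted(components, key=lambda c: np.hypot(*(np.asarray(c) - quasar)))
    best, best_score = None, -np.inf
    for i in range(len(comps)):
        for j in range(i + 1, len(comps)):
            psi = opening_angle(quasar, comps[i], comps[j])
            if psi < 50.0:
                continue  # unlikely to be a genuine lobe pair
            score = psi / (1.0 + i + j)  # illustrative weighting only
            if score > best_score:
                best, best_score = (i, j, psi), score
    return best, best_score

# Toy example: two components roughly opposite each other, plus an unrelated outlier.
quasar = np.array([0.0, 0.0])
components = [(30.0, 2.0), (-28.0, -3.0), (200.0, 150.0)]
print(best_fr2_candidate(quasar, components))
```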
In Fig. 2 we display a set of histograms for particular FR2 sizes. For each, the number of FR2 candidates is plotted as a function of opening angle, both around the true quasar position (light-grey trace) and around the offset positions (dark-grey trace). There is a clear excess of nominal FR2 sources surrounding quasar positions, which we take as a true measure of the number of quasars with associated FR2s. Although the distribution of FR2s has a pronounced peak at opening angles of 180 degrees, the distribution is quite broad, extending out to nearly 90 degrees. It is possible that some of this signal results from quasars living within (radio) clusters and hence being surrounded by an excess of unrelated sources, but such a signal should not show a strong preference for opening angles near 180 degrees.
The set of histograms also illustrates the relative importance of chance FR2 occurrences (dark-grey histograms), which become progressively more prevalent if one starts considering the larger FR2 configurations. While the smallest size bin does have some contamination (∼14% on average across all opening angles), almost all of the signal beyond opening angles of 90 degrees is real (less than 5% contamination for these angles). However, the significance of the FR2 matches drops significantly for the larger sources. More than 92% of the signal in the 3 to 4 arcminute bin is by chance. Clearly, most of the suggested FR2 configurations are spurious at larger diameters, and only deeper observations and individual inspection of a candidate source can provide any certainty.
In the next few sections we describe the results of the analysis.
Fraction of FR2 quasars
The primary result we can quantify is the fraction of quasars that can be associated with a double lobed radio structure (whether a core has been detected in the radio or not). This is different from the discussion in § 3.1, which relates to the fraction of quasars that have radio emission at the quasar core position. This value, while considerably higher than the rates for the FR2 quasars, does not form an upper limit to the fraction of quasars associated with radio emission: some of the FR2 quasars do not have a detected radio core. Figure 2 depicts the excess number of FR2 quasars over the baseline values, plotted for progressively larger radio sources. The contamination rates go up as more random FIRST components fall within the covered area, and, at the same time, fewer real FR2 sources are found. This effect is illustrated in Fig. 3, which shows the FR2 excesses as function of overall source size. The light-grey line indicates the FR2 number counts for candidates without a detected (3σ) core, and the dark-grey histogram is for the FR2 candidates with a detected core. It is clear that FR2 sources larger than about 300′′ are very rare, and basically cannot be identified using this method. Most FR2 sources are small, with the bulk having diameters of less than 100′′.

Fig. 3.-The summed excess counts within 300′′ are 547 and 202 for the core and non-core subsamples, respectively. Note that the smallest size bin for the core sample is affected by resolution: it is hard to resolve a core inside a small double lobed structure.
The summed total excess numbers, based on Fig. 3 and limited to 300′′ or smaller, are 749 FR2 candidates (1.7% of the total), of which 547 have cores. Some uncertainties in the exact numbers still remain, particularly due to the noise in the counts at larger source sizes. A typical uncertainty of ∼20 should be considered on these numbers (based on variations in the FR2 total using different instances of the random position catalog).

Fig. 4.-As in Fig. 3, the smallest size bin is affected by the angular resolution of FIRST. The mean core fraction is 73.0%, which appears to be a representative value irrespective of the FR2 diameter. The horizontal dashed line represents the core-fraction of the non-FR2 quasar population at 10.6%. The error estimates on the core fraction are a combination of binomial and background noise errors.
At these levels, it is clear that the FR2 phenomenon is much less common than quasar core emission; 1.7% versus 10% (see § 3.1). Indeed, of all the quasars with a detected radio core, only about 1 in 9 is also an FR2. The relative numbers have been recapitulated in Table 2.
Core Fractions of FR2 quasars
As noted above, not all FR2 quasars have cores that are detected by FIRST. We estimate that about 75% of FR2 sources have detected cores down to the 0.75 mJy flux density level. This value compares well with the number for our "actual" FR2 quasar sample of § 4.6. Out of 422 FR2 quasar sources, 265 have detected cores (62.8%) down to the FIRST detection limit (1 mJy).
We are now in a position to investigate whether there is a correlation between the overall size of the FR2 and the presence of a radio core. In orientation-dependent unification schemes, a radio source observed close to the radio jet axis will be both significantly foreshortened and have its core brightness enhanced by beaming effects (e.g., Barthel 1989; Hoekstra et al. 1997). This would imply that, given a particular distribution of FR2 radio source sizes and core luminosities, the smaller FR2 sources would be associated (on average) with brighter core components. This should translate into a higher fraction of detected cores among smaller FR2 quasars (everything else being equal). Figure 4 shows the fraction of FR2 candidates that have detected cores, as function of overall size. There does not appear to be a significant trend toward lower core-fractions as one considers larger sources. The much lower fraction for the very smallest size bin is due to the limited resolution of the FIRST survey (about 5′′), which makes it hard to isolate the core from the lobe emission for sources with an overall size less than about half an arcminute. Also, beyond about 275′′ the core-ratio becomes rather hard to measure; not a lot of FR2 candidates are this large (see Fig. 3).
Since the core-fraction is more or less constant, and does not depend on the source diameter, it does not appear that relativistic beaming is affecting the (faint) core counts. Unfortunately, one expects the strongest core beaming contributions for the smallest sources; exactly the ones that are most affected by our limited resolution.
Bent Double Lobed Sources
The angular distributions in Fig. 2 reveal a large number of more or less bent FR2 sources. Bends in FR2 sources can be due to a variety of mechanisms, either intrinsic or extrinsic to the host galaxy. Local density gradients in the host system can account for bending (e.g., Allan 1984), or radio jets can run into overdensities in the ambient medium, resulting in disruption / deflection of the radio structure (e.g., Mantovani et al. 1998). Extrinsic bending of the radio source can be achieved through interactions with a (hot) intracluster medium. Any space motion of the source through this medium will result in ram-pressure bending of the radio structure (e.g., Soker et al. 1988;Sakelliou & Merrifield 2000). And finally, radio morphologies can be severely deformed by merger events (e.g., Gopal-Krishna et al. 2003). Regardless of the possible individual mechanisms, a large fraction of our FR2 quasars have significant bending: only slightly more than 56% of FR2 quasars smaller than 3 arcminutes have opening angles larger than 170 degrees (this value is 65% for the actual sample of § 4.6). This large fraction of bent quasars is in agreement with earlier findings (based on smaller quasar samples) of, e.g., Barthel & Miley (1988); Valtonen et al. (1994); Best et al. (1995).
Redshift Correlations
We can investigate whether there are trends with quasar redshift based on statistical arguments. This is done by subdividing the sample of 44 984 in two parts (high and low redshift), and then comparing the results for each subsample. As with the main sample, each subsample has its own control sample providing the accurate baseline.
Previous studies (e.g., Blundell et al. 1999) have suggested that FR2 sources appear to be physically smaller at larger redshifts. For self-similar expansion the size of a radio source relates directly to its age. It also correlates with its luminosity, since, based on relative number densities of symmetric double lobed sources over a large range of sizes (e.g., Fanti et al. 1995; O'Dea & Baum 1997), one expects a significant decline in radio flux as a lobe expands adiabatically. While a survey with a fixed flux density limit will preferentially be biased against older (and therefore fainter) radio sources at higher redshifts, resulting in a "youth-size-redshift" degeneracy, scant hard evidence is available in the literature. Indeed, several studies contradict each other. See Blundell et al. (1999) for a nice summary. We, however, are in a good position to address this issue. First, our quasar sample has not been selected from the radio, and as such has perhaps less radio bias built in. Also, we have complete redshift information on our sample. The redshift range is furthermore much larger than in any of the previous studies. The median redshift for our sample of quasars is 1.3960, which results in mean redshifts for the low- and high-redshift halves of the sample of z_L = 0.769 and z_H = 2.088.
The first test we can perform on the 2 subsamples is to check whether their relative numbers as a function of the average lobe flux density make sense. The results are plotted in Fig. 5, left panel. The curves depict the cumulative number of FR2 candidates smaller than 100′′ for which the mean lobe flux density is larger than a certain value. We have explicitly removed any core contribution to this mean. We limited our comparison here to the smaller sources, for which the background contamination is smallest.
Since the left panel shows the results in the observed frame, it is clear that we detect far more local FR2 sources than high redshift ones (light-grey curve in comparison to the dark-grey one). On average, more than twice as many candidates fall in the low-redshift bin compared to the high-redshift one (408 candidates versus 201). Furthermore, the offset between the two curves appears to be fairly constant, indicating that the underlying population properties may not be that different (i.e., we can match both by shifting the high-redshift curve along the x-axis, thereby correcting for the lower lobe flux densities due to their larger distances). This is exactly what we have done in the right panel. All of the FR2 candidates have been put at a fiducial redshift of 1, correcting their lobe emission both for relative distances and for the intrinsic radio spectral index. For the former we assumed a WMAP cosmology (H_0 = 71 km s^{-1} Mpc^{-1}, Ω_M = 0.3, and Ω_Λ = 0.7), and for the latter we adopted α = −0.75 for the high frequency part of the radio spectrum (> 1 GHz). It should be noted that these cumulative curves are only weakly dependent on the cosmological parameters Ω and the radio spectral index α; they do not depend on the Hubble constant. The physical maximum size for the FR2 candidates is set at 800 kpc, which roughly corresponds to the 100′′ size limit in the left panel.
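The rescaling to a fiducial redshift described above can be written compactly. The snippet below is a small sketch of that calculation (not the authors' code), assuming a flat WMAP-like cosmology via astropy and a power-law radio spectrum S_ν ∝ ν^α; the function name lobe_flux_at_fiducial_z is an illustrative choice.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

# WMAP-like cosmology quoted in the text (H0 = 71 km/s/Mpc, Omega_M = 0.3).
cosmo = FlatLambdaCDM(H0=71.0, Om0=0.3)

def lobe_flux_at_fiducial_z(flux_mjy, z, z_fid=1.0, alpha=-0.75):
    """Scale an observed lobe flux density to a fiducial redshift.

    Assumes S_nu ~ nu^alpha, so the k-correction between redshifts is
    ((1 + z_fid) / (1 + z))**(1 + alpha), on top of the inverse-square
    luminosity-distance scaling.
    """
    d_obs = cosmo.luminosity_distance(z).value      # Mpc
    d_fid = cosmo.luminosity_distance(z_fid).value  # Mpc
    return flux_mjy * (d_obs / d_fid) ** 2 * ((1.0 + z_fid) / (1.0 + z)) ** (1.0 + alpha)

# Example: a 20 mJy lobe observed at z = 0.5, expressed as if it sat at z = 1.
print(lobe_flux_at_fiducial_z(20.0, 0.5))
```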
Both curves agree reasonably well now. At the faint end of each curve, incompleteness of the FIRST survey flattens the distribution. This accounts for the count mismatch between the light- and dark-grey curves below about 10 mJy. On the other end of both curves, low number statistics increase the uncertainties. The slightly larger number of bright FR2 sources for the high-redshift bin (see Fig. 5, right panel) may be real (i.e., FR2 sources are brighter at high redshifts compared to their low-redshift counterparts), but the offset is not significant. Also note the effect of changing the radio spectral index from −0.75 to 0 (dotted dark-grey line versus solid line). A negative α value has the effect of increasing the lobe fluxes, especially for the higher redshift sources. Flattening the α to 0 (or toward even more unrealistic positive values) therefore acts to lower the average lobe fluxes, and as a consequence both cumulative distributions start to agree better. This would also suggest that the high-redshift sources may be intrinsically brighter, and that only by artificially lowering the fluxes can both distributions be made to agree.

Fig. 5.-There are about twice as many low-redshift FR2 candidates as high-redshift candidates. In the panel on the right the redshift dependencies have been taken out; all sources are placed at a fiducial redshift of 1 by k-correcting their lobe flux densities. Note that the shape of the distribution only weakly depends on the assumed radio spectral index used in the k-correction (from α = 0 to the canonical radio spectral index of α = −0.75, solid and dashed curves respectively).
Physical FR2 sizes at low and high redshifts
This brings us to the second question regarding redshift dependencies: are the high-redshift FR2 quasars intrinsically smaller because we are biased against observing older, fainter, and larger radio sources? To this end we used the same two datasets that were used for Fig. 5, right panel. The upper size limit is set at 800 kpc for both subsets, but this does not really affect each size distribution, since there are not that many FR2 sources this large. Figure 6 shows both distributions for the low and high redshift bins (same coloring as before). As in Fig. 4, the smallest size bins are affected by resolution effects, although it is easier to measure the lobe separation than whether or not there is a core component between the lobes. The smallest FR2 sources in our sample are about 10′′ (∼80 kpc at z = 1), which is a bit better than the smallest FR2 for which a clear core component can be detected (∼30′′). The apparent peak in our size distributions (around 200 kpc) agrees with values found for 3CR quasars. Best et al. (1995) quote a value of 207 ± 29 kpc, though given the shape of the distribution it is not clear how useful this measure is.
A Kolmogorov-Smirnov test deems the difference between the low-redshift (green histogram) and high-redshift (blue histogram) size distributions insignificant. Therefore, it does not appear that there is evidence for FR2 sources to be smaller in the earlier universe.

Fig. 6.-Histogram of the FR2 source size distribution. The light-grey histogram is for the low-redshift half of the sample, and the dark-grey line is for the high-redshift sources. Both histograms have been corrected for random matches, and therefore represent the real size distributions.
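The two-sample comparison quoted here is straightforward to reproduce in outline. The sketch below is illustrative only, with made-up size arrays standing in for the measured FR2 diameters, and uses the standard two-sample Kolmogorov-Smirnov test from scipy.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-ins for the background-corrected FR2 size distributions (kpc);
# the real analysis would use the measured projected sizes per redshift bin.
sizes_low_z = rng.lognormal(mean=5.3, sigma=0.5, size=400)
sizes_high_z = rng.lognormal(mean=5.3, sigma=0.5, size=200)

stat, p_value = ks_2samp(sizes_low_z, sizes_high_z)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
# A large p-value means the two size distributions are statistically
# indistinguishable, i.e. no evidence that high-z FR2 sources are smaller.
```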
An issue that has been ignored so far is that we tacitly assumed that the FR2 sizes for these quasars are accurate. If these sources have orientations preferentially toward our line of sight (and we are dealing with quasars here), significant foreshortening may underestimate their real sizes by quite a bit (see Antonucci 1993). This will also "squash" both distributions toward the smaller sizes, making it hard to differentiate the two.
Previous studies (e.g., Blundell et al. 1999) relied on (smaller) samples of radio galaxies, for which the assumption that they are oriented in the plane of the sky is less problematic. Other studies which mainly focused on FR2 quasars (e.g, Nilsson et al. 1998, Teerikorpi 2001) also do not find a size-redshift correlation.
Sample Specific Results
The next few sections deal with properties intrinsic to FR2 quasars. As such, we need a sub-sample of our quasar list that we feel consists of genuine FR2 sources. We know that out of the total sample of 44 984 about 750 are FR2 sources; however, we do not know which ones. What we can do is create a subsample that is guaranteed to have less than 5% of non-FR2 source contamination. This is done by stepping through the multidimensional space spanned by histograms of the Fig. 2 type as function of overall size. As can be seen in Fig. 2, the bins with large opening angles only have a small contamination fraction (in this case for sources smaller than 100′′). Obviously, the signal-to-noise goes down quite a bit for larger overall sizes, and progressively fewer of those candidates are real FR2s. By assembling all the quasars in those bins that have a contamination rate less than 5%, as function of opening angle and overall size, we constructed an FR2 quasar sample that is more than 95% reliable. It contains 422 sources, and forms the basis for our subsequent studies. Sample properties are listed in Table 3, and the positions of the 417 actual FR2 quasars are given in Table 4.
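The bin-selection logic described here can be sketched as follows; the binning variables and the 5% threshold follow the text, but the data structures and the function name select_reliable_bins are illustrative assumptions rather than the actual pipeline.

```python
import numpy as np

def select_reliable_bins(real_counts, random_counts, max_contamination=0.05):
    """Pick (opening angle, size) bins whose random-match contamination is < 5%.

    real_counts and random_counts are 2-D histograms over opening angle and
    overall source size; random_counts must already be normalised to the same
    effective number of pointings as the quasar sample.
    """
    real = np.asarray(real_counts, dtype=float)
    rand = np.asarray(random_counts, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        contamination = np.where(real > 0, rand / real, 1.0)
    return contamination < max_contamination  # boolean mask of acceptable bins

# Candidates falling in the accepted bins form the high-reliability FR2 sample.
real = np.array([[40, 120], [5, 15]])    # e.g. rows: size bins, cols: angle bins
rand = np.array([[30, 4], [4, 2]])       # expected chance matches per bin
print(select_reliable_bins(real, rand))
```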
FR2 Sample Asymmetries
The different radio morphological properties of the FR2 sources have been used with varying degrees of success to infer their physical properties. In particular, these are: the observed asymmetries in the arm-length ratio (Q, here defined as the ratio of the larger lobe-core to the smaller lobe-core separation), the lobe flux density ratio (F = F_{lobe,distant}/F_{lobe,close}), and the distribution of the lobe opening angle (Ψ, with a linear source having a Ψ of 180°).
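For concreteness, the three asymmetry parameters can be computed from the component positions and fluxes as in the short sketch below; the input format and the function name fr2_asymmetry are assumptions made purely for illustration.

```python
import numpy as np

def fr2_asymmetry(core, lobe1, lobe2, flux1, flux2):
    """Arm-length ratio Q, flux ratio F, and opening angle Psi of an FR2.

    Q  : larger lobe-core separation over smaller lobe-core separation (>= 1).
    F  : flux of the more distant lobe over that of the closer lobe.
    Psi: angle subtended at the core by the two lobes (180 deg = linear source).
    """
    v1 = np.asarray(lobe1) - np.asarray(core)
    v2 = np.asarray(lobe2) - np.asarray(core)
    d1, d2 = np.linalg.norm(v1), np.linalg.norm(v2)
    q = max(d1, d2) / min(d1, d2)
    f = (flux1 / flux2) if d1 > d2 else (flux2 / flux1)
    cosang = np.dot(v1, v2) / (d1 * d2)
    psi = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return q, f, psi

# Slightly bent, mildly asymmetric toy source.
print(fr2_asymmetry(core=(0, 0), lobe1=(40, 5), lobe2=(-30, -2), flux1=8.0, flux2=12.0))
```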
Gopal-Krishna & Wiita (2004) provide a nice historic overview of the literature on these parameters. As can be inferred from a median flux ratio value of F < 1.0 (see Table 5), the closer lobe is also the brightest. This is consistent with the much earlier findings of Mackay (1971) for the 3CR catalog, and implies directly that the lobe advance speeds are not relativistic, and that most of the arm-length and flux density asymmetries are intrinsic to the source (and not due to orientation, relativistic motions, and Doppler boosting).
If we separate the low- and high-redshift parts of our sample, we can test whether any trend with redshift appears. Barthel & Miley (1988), for instance, suggested that quasars are more bent at high redshifts. In our sample we do not find a strong redshift dependency. The median opening angles are 173.6° and 172.7°, for the low and high redshift bins respectively. A Kolmogorov-Smirnov test deemed the two distributions different at the 97.2% confidence level (a 2.2σ result). This would marginally confirm the Barthel & Miley claim. However, Best et al. (1995) quote a 2σ result in the opposite sense, albeit using a much smaller sample (23 quasars). We also found no significant differences between the low- and high-redshift values of the arm-length ratios Q (KS results: different at the 87.0% level, 1.51σ) and the flux ratios F (similar at the 97.0% level, 2.2σ).
The Mackay-type asymmetry, in which the nearest lobe is also the brightest, is not found to break down for the brightest of our quasars. If we separate our sample into a low- and a high-flux bin (which includes the core contribution), we do not see a reversal in the flux asymmetry toward the most radio luminous FR2 sources (e.g., Gopal-Krishna & Wiita 2004, and references therein). Actually, for our sample we find a significant (3.25σ) trend for the brightest quasars to adhere more to the Mackay asymmetry than the fainter ones.
Control Samples
Using the same matching technique as described in the previous section, we made two additional control samples. Whereas our FR2 sample is selected based on a combination of large opening angle (≳150 degrees) and small overall size (≲200′′), our control samples form the other extreme. Very few, if any, genuine FR2 sources will be characterized by radio structures with small opening angles (< 100 degrees) and large sizes (> 450′′). Therefore, we use these criteria to select two non-FR2 control samples: one that has a FIRST source coincident with the quasar (remember that the matching algorithm explicitly excludes components within 3′′ of the quasar position), and another one without a FIRST counterpart to the quasar. For all practical purposes, we can consider the former sample to be quasars which are associated with just one FIRST component (the "core dominated" sample, CD), and the latter as quasars without any detected FIRST counterpart (the "radio quiet" sample, RQ).
Both of the CD and RQ samples initially contained more candidates than the FR2 sample. This allows for small adjustments in the mean sample properties, in particular the redshift distribution. We therefore matched the redshift distribution of the CD and RQ samples to that of the FR2 sample. This resulted in a CD sample which matches the FR2 sample in redshift-space and in absolute number. The RQ sample, which will function as a baseline to both the FR2 and CD samples, contains a much larger number (6330 entries), but again with an identical redshift distribution. The mean properties of the samples are listed in Table 3.
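Matching the control samples to the FR2 redshift distribution can be done by resampling within redshift bins. The sketch below shows one simple way to do this and is not meant to reproduce the exact procedure used for Table 3; the function name and the gamma-distributed stand-in redshifts are illustrative assumptions.

```python
import numpy as np

def match_redshift_distribution(z_target, z_pool, n_bins=20, rng=None):
    """Draw indices from z_pool so the drawn redshifts mimic z_target's histogram."""
    rng = rng or np.random.default_rng(0)
    edges = np.histogram_bin_edges(np.concatenate([z_target, z_pool]), bins=n_bins)
    target_counts, _ = np.histogram(z_target, bins=edges)
    selected = []
    for lo, hi, want in zip(edges[:-1], edges[1:], target_counts):
        candidates = np.where((z_pool >= lo) & (z_pool < hi))[0]
        take = min(want, candidates.size)
        selected.extend(rng.choice(candidates, size=take, replace=False))
    return np.array(selected)

# Toy example: build a control sample whose redshifts track the FR2 sample.
rng = np.random.default_rng(1)
z_fr2 = rng.gamma(2.0, 0.7, size=422)        # stand-in for FR2 redshifts
z_pool = rng.gamma(2.5, 0.8, size=40000)     # stand-in for candidate control quasars
idx = match_redshift_distribution(z_fr2, z_pool)
print(idx.size, "control quasars selected")
```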
Composite Optical Spectra
One of the very useful aspects of our SDSS based quasar sample is the availability of a large set of complementary data, including the optical spectra for all quasars. An otherwise almost impossible stand-alone observing project, given the combination of low FR2 quasar incidence rates and large datasets, is sidestepped by using the rich SDSS data archive. We can therefore readily construct composite optical spectra for our 3 samples (as listed in Table 3). We basically used the same method for the construction of the composite spectrum as outlined by vanden Berk et al. (2001), in combination with a relative normalization scheme similar to the ones used in Richards et al. (2003). Each composite has been normalized by its continuum flux at 4000Å (restframe).
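In outline, building such a composite amounts to de-redshifting each spectrum, normalising it at a common rest wavelength, and combining the spectra on a common grid. The sketch below illustrates that outline with a simple median combine and hypothetical inputs; it is not the vanden Berk et al. procedure, and the function name build_composite is an illustrative choice.

```python
import numpy as np

def build_composite(spectra, z_list, grid=None, norm_wave=4000.0):
    """Median-combine quasar spectra in the restframe, normalised at norm_wave (Angstrom).

    spectra : list of (wavelength, flux) arrays in the observed frame.
    z_list  : corresponding redshifts.
    """
    if grid is None:
        grid = np.arange(1000.0, 7000.0, 1.0)  # restframe wavelength grid in Angstrom
    stack = []
    for (wave, flux), z in zip(spectra, z_list):
        rest_wave = wave / (1.0 + z)                  # shift to restframe
        interp = np.interp(grid, rest_wave, flux, left=np.nan, right=np.nan)
        norm = np.interp(norm_wave, rest_wave, flux)  # crude stand-in for the 4000 A continuum
        stack.append(interp / norm)
    return grid, np.nanmedian(np.vstack(stack), axis=0)

# Usage (hypothetical inputs): spectra and redshifts drawn from the SDSS archive.
# grid, composite = build_composite(spectra, z_list)
```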
The resulting spectra are plotted in Fig. 7, color coded green for the radio-quiet (RQ) quasar control sample, blue for the core-dominated (CD) radio-loud quasars, and red for the lobe dominated (FR2) radio quasars. All three composite spectra are similar to each other and to the composite of vanden Berk et al. (2001). Figure 8 shows the number of quasars from each subsample that were used in constructing the composite spectrum. Since each individual spectrum has to be corrected by a (1 + z) factor to bring it to its restframe, not all quasars contribute to the same part of the composite. In fact, the quasars that contribute to the shortest wavelengths are not the same that go into the longer wavelength part. This should be kept in mind if one wants to compare the various emission lines. Any dependence of the emission line properties on redshift will therefore affect the short wavelength part of the composite more than the long wavelength part (which is made up of low redshift sources). Richards et al. (2003) investigated the effect of dust-absorption on composite quasar spectra (regardless of whether they are associated with radio sources), and we have indicated two of the absorbed template spectra (composite numbers 5 and 6, see their Fig. 7) in our Fig. 7 as the red and gray dashed lines, respectively. From this it is clear that our 3 sub-samples do not appear to have significant intrinsic dust-absorption associated with them. Indeed, the range of relative fluxes toward the blue end of the spectrum falls within the range of "normal" quasars (templates 1−4 of Richards et al. 2003). The differences in spectral slopes among our 3 samples are real. We measure continuum slopes (over the range 1450 to 4040Å, identical to Richards et al. 2003) of: α_ν = −0.59 ± 0.01, α_ν = −0.47 ± 0.01, and α_ν = −0.80 ± 0.01 for the FR2, RQ, and CD samples respectively. These values are significantly different from the reddened templates (α_ν = −1.51 and α_ν = −2.18 for the red and grey dashed lines in Fig. 7), suggesting that our quasars are intrinsically different from dust-reddened quasars (e.g., Webster et al. 1995; Francis et al. 2000).

Fig. 7.-Composite spectra for our three samples of quasars: radio quiet (RQ) quasars in green, core dominated (CD) radio-loud quasars in blue, and lobe dominated (FR2) radio-loud quasars in red. This plot can be directly compared to Fig. 7 of Richards et al. (2003), and illustrates both the small relative color range among our 3 samples (all fall within the "normal" range of Richards et al.), and the apparent lack of significant intrinsic dust-reddening in these quasars (the red and gray dashed lines represent moderate to severe levels of dust-reddening). All spectra have been normalized to the continuum flux at 4000Å.

Fig. 8.-The color-coding is the same as for Fig. 7. The RQ sample (green histogram) contains 15 times as many quasars as both the CD and FR2 samples.
In order to study differences in line emission, small differences in spectral slope have to be removed. This is achieved by first normalizing each spectrum to the continuum flux just shortward of the emission line in question. Then, by fitting a powerlaw to the local continuum, each emission line spectrum can be "rectified" to a slope of unity (i.e., making sure both the left and right sides of the zoomed-in spectrum are set to unity). A similar approach has been employed by Richards et al. (2003, see their Figs. 8 and 9).

Fig. 9.-Composite spectra of the three comparison samples, centered around emission line regions. The histograms are color-coded as follows: green is for the radio quiet (RQ) quasar population, blue for the core dominated (CD) sample, and red represents the lobe dominated FR2 quasars. All the spectra have been normalized to the continuum flux levels at the left and right parts of each panel.
The results are plotted in Fig. 9, zoomed in around prominent emission lines. The panels are arranged in order of increasing restframe wavelength. A few key observations can be made. The first, and most striking one, is that FR2 quasars tend to have stronger moderate-to-high ionization emission lines in their spectrum than either the CD or RQ samples. This can be seen especially for the C IV, [Ne V], [Ne III], and [O III] emission lines. The inverse appears to be the case for the Balmer lines: the FR2 sources have significantly fainter Balmer lines than either the CD or RQ samples. Notice, for instance, the Hδ, Hγ, Hβ, and Hα sequence in Fig. 9. Other lines, like Mg II and C III], do not seem to differ among our 3 samples.
Measured line widths, line centers, and fluxes for the most prominent emission lines are listed in Table 6. Since a lot of the lines have shapes that are quite different from the Gaussian form, we have fitted the profiles with the more general form F(x) = c exp[−0.5 (x/σ)^n], with c a normalization constant and n a free parameter. Note that for a Gaussian, n = 2. The FWHM of the profile can be obtained directly from the values of n and σ: FWHM = 2 (2 ln 2)^{1/n} σ. Allowing values of n < 2 results in better fits for lines with broad wings (e.g., C III] in Fig. 9). Typically the difference in equivalent width (EW) as fitted by the function and the actual measured value is less than 1%. The fluxes in Table 6 have been derived from the measured EW values, multiplied by the continuum level at the center of the line (as determined by a powerlaw fit, see Fig. 7). Since all composite spectra have been normalized to a fiducial value of 1.00 at 4000Å, the fluxes are relative to this 4000Å continuum value, and can be compared across the three samples (columns 6 and 11 in Table 6). In addition, we have normalized these fluxes by the value of the Lyα flux for each subsample. This effectively takes out the slight spectral slope dependency, and allows for an easier comparison to the values of vanden Berk et al. (2001, their Table 2).
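A fit of this generalized profile can be set up as below. This is a small illustrative sketch using scipy's curve_fit on synthetic data, not the fitting code used for Table 6; the helper names gen_gauss and fwhm, and the synthetic line parameters, are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gen_gauss(x, c, sigma, n):
    """Generalized line profile F(x) = c * exp(-0.5 * |x/sigma|**n); n = 2 is a Gaussian."""
    return c * np.exp(-0.5 * np.abs(x / sigma) ** n)

def fwhm(sigma, n):
    """Full width at half maximum of the generalized profile: 2 * (2 ln 2)**(1/n) * sigma."""
    return 2.0 * (2.0 * np.log(2.0)) ** (1.0 / n) * sigma

# Synthetic broad-winged line (n < 2) with a little noise, then recover the parameters.
rng = np.random.default_rng(0)
x = np.linspace(-50.0, 50.0, 401)              # offset from line center in Angstrom
y = gen_gauss(x, c=1.0, sigma=8.0, n=1.5) + rng.normal(0.0, 0.01, x.size)
popt, _ = curve_fit(gen_gauss, x, y, p0=[1.0, 5.0, 2.0])
print("c, sigma, n =", popt, " FWHM =", fwhm(popt[1], popt[2]))
```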
The differences between the various species of emission lines among the 3 subsamples, as illustrated in Fig. 9, are corroborated by their measured line fluxes (Table 6). Although we cannot use the classical line-ratio diagnostics (e.g., see Osterbrock 1989) to determine whether we are dealing with AGN or H II-region dominated emission regimes (due to the fact that the broad and narrow lines do not originate from the same region), we can still discern trends between the subsamples in the relative importance of broad vs. narrow line emission. This is illustrated by Fig. 10, in which we have plotted various ratios of narrow and broad emission lines (based on fluxes listed in Table 6). The narrow lines are normalized on the x-axis by the broad-line Hα flux, and on the y-axis by the broad-line component (listed separately in Table 6) of the Hβ line. It is clear from this plot that, as one progresses from RQ, CD, to FR2 sample, the relative importance of various narrow lines increases. The offset between the RQ and Vanden Berk samples (which in principle should coincide) is in part due to the presence of a narrow-line component in their Hβ fluxes (lowering the points along the y-axis), and a slightly larger flux density in their composite Hα line (moving the points to the left along the x-axis). The offset probably serves best to illustrate the inherent uncertainties in plots like these.
So, in summary, it appears that the FR2 sources tend to have brighter moderate-to-high ionization lines, while at the same time having much less prominent Balmer lines, than either the CD or RQ samples. The latter two have far more comparable emission line profiles / fluxes, with the possible exception of the higher Balmer lines and [S II].
Radio sources are known to interact with their ambient media, especially in the earlier stages of radio source evolution where the structure is confined to within the host galaxy. In these compact stages, copious amounts of line-emission are induced at the interfaces of the radio plasma and ambient medium (e.g., Bicknell et al. 1997;de Vries et al. 1999;Axon et al. 1999). Other types of radio activity related spectral signatures are enhanced star-formation induced by the powerful radio jet (e.g., van Breugel et al. 1985;McCarthy et al. 1987;Rees 1989), scattered nuclear UV light off the wall of the area "excavated" by the radio structure (e.g., di Serego Alighieri et al. 1989;Dey et al. 1996), or more generally, direct photoionization of the ambient gas by the AGN along radiation cones coinciding with the radio symmetry axis (e.g., Neeser et al. 1997). The last three scenarios are more long-lived (i.e., the resulting stars will be around for a while), whereas the shockionization of the line emission gas is an in-situ event, and will only last as long as the radio source is there to shock the gas (< 10 6 years).
It therefore appears reasonable to guess that in the case of the FR2 quasars, such an ongoing interaction between the radio structure and its ambient medium is producing the excess flux in the narrow lines. Indeed, shock precursor clouds are found to be particularly bright in high-ionization lines like [O III] compared to Hα (e.g., Sutherland et al. 1993;Dopita & Sutherland 1996). Since the optical spectrum is taken at the quasar position, and not at the radio lobe position, we are obviously dealing with interactions between the gaseous medium and the radio core (whether we detected one or not).
The other sample of quasars associated with radio activity, the core-dominated (CD) sample, has optical spectral properties which do not differ significantly from radio-quiet quasars.
Summary and Conclusions
We have combined a sample of 44 984 SDSS quasars with the FIRST radio survey. Instead of comparing optical and radio positions for the quasars directly to within a small radius (say, 3 ′′ ), we matched the quasar position to its complete radio environment within 450 ′′ . This way, we are able to characterize the radio morphological make-up of what is essentially an optically selected quasar sample, regardless of whether the quasar (nucleus) itself has been detected in the radio.
The results can be separated into ones that pertain to the quasar population as a whole, and those that only concern FR2 sources. For the former category we list: 1) only a small fraction of the quasars have radio emission associated with the core itself (∼11% at the 0.75 mJy level); 2) FR2 quasars are even rarer, only 1.7% of the general population is associated with a double lobed radio source; 3) of these, about three-quarters have a detected core; 4) roughly half of the FR2 quasars have bends larger than 20 degrees from linear, indicating either interactions of the radio plasma with the ICM or IGM; and 5) no evidence for correlations with redshift among our FR2 quasars was found: radio lobe flux densities and radio source diameters of the quasars have similar distributions at low and high redshifts.
To investigate more detailed source related properties, we used an actual sample of 422 FR2 quasars and two comparison samples of radio quiet and non-FR2 radio loud quasars. These three samples are matched in their redshift distributions, and for each we constructed an optical composite spectrum using SDSS spectroscopic data. Based on these spectra we conclude that the FR2 quasars have stronger high-ionization emission lines compared to both the radio quiet and non-FR2 radio loud sources. This may be due to higher levels of shock ionization of the ambient gas, as induced by the expanding radio source in FR2 quasars.
We would like to thank the referee for comments that helped improve the paper. WDV's work was performed under the auspices of the U.S. Department of Energy, National Nuclear Security Administration by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. The authors also acknowledge support from the National Radio Astronomy Observatory, the National Science Foundation (grant AST 00-98355), the Space Telescope Science Institute, and Microsoft.

Table notes: (a) sources which have one (or more) of the following flags set: QSO CAP, QSO SKIRT, QSO HIZ (as defined in Richards et al. 2002); (b) sources which do not have any of the flags set mentioned above; (c) matches to radio sources brighter than either a 5σ (≈ 1.0 mJy) or a 3σ (≈ 0.70 mJy) detection limit. Note.-The F distribution is not symmetric around F = 1, and therefore the median and mean values differ significantly.
Breast Implant–Associated Anaplastic Large Cell Lymphoma: A Case Report and Review of the Literature
Breast implant–associated anaplastic large T-cell lymphoma has recently been recognized as an entity, with few reports describing the two common subtypes: in situ (indolent) and infiltrative. Recently, the infiltrative subtypes have been shown to be more aggressive requiring adjuvant chemotherapy. We report a rare case of breast implant–associated anaplastic large cell lymphoma (BIA-ALCL) in a 65-year-old Caucasian female following silicone breast implantation and multiple capsulectomies. We discuss the rare presentation of this disease, histopathologic features of the indolent and infiltrative subtypes of ALCL, and their clinical significance. We also review the literature for up-to-date information on the diagnosis and clinical management. Treatment modalities including targeted therapy are also discussed. Although BIA-ALCL is rare, it should always be considered as part of the differential diagnosis especially in women with breast implants. Given the increasing rate of breast reconstruction and cosmetic surgeries, we anticipate a continuous rise in incidence rates of this rare disease; thus, caution must be taken to avoid misdiagnosis.
Introduction
Breast lymphoma represents approximately 0.7% of all lymphomas, of which 8% are peripheral T-cell lymphomas (PTCLs) [1]. The majority of reported PTCLs are ALK-negative anaplastic large T-cell lymphomas (ALCLs). Breast implant-associated anaplastic large T-cell lymphoma (BIA-ALCL) has been reported but only recently has gained recognition as a distinct entity. Two different subtypes with a possible histogenetic relationship have been described, including in situ BIA-ALCL and infiltrative BIA-ALCL; these subtypes have significantly different prognostic implications, with the infiltrative subtype showing worse prognosis [2]. Generally, in situ BIA-ALCL follows an indolent clinical course after breast implant removal, whereas infiltrative BIA-ALCL is more aggressive, requiring additional therapy after implant removal [2]. Thus, accurate histopathologic diagnosis is crucial for risk assessment and therapeutic management. Recent advances in therapeutic approaches have resulted in significant improvement in the overall survival of patients with BIA-ALCL. More recently, targeted therapy utilizing the anti-CD30 antibody brentuximab-vedotin (BV) has shown promising results [3,4].
Case Presentation
A 65-year-old Caucasian female had a past medical history significant for bilateral fibrocystic breast disease resulting in bilateral subcutaneous mastectomy, followed by bilateral cosmetic breast reconstruction with textured silicone gel implants at age 30 (Figure 1). Subsequently, she had multiple complications from the implants including capsule contractures, infections, chronic seroma, and ruptured breast implants, leading to capsulectomy and implant replacements 15 and 22 years post original implantation. During these periods, the patient was noted to have several areas of calcifications in both breasts (L > R) that had been monitored with routine mammography. The patient noted that at 30 years post original implantation, her left breast became edematous; however, this self-resolved a few months later. Two years later, edema was noted again in the left breast and confirmed by MRI (Figure 2), resulting in a third replacement of the silicone gel implant. Recently, at 35 years post original surgery, the patient presented with swelling in the left breast which progressively worsened over 2-3 months. A targeted ultrasound examination of the left breast at the approximate twelve o'clock position, left axilla, and of the right breast at the ten o'clock position over areas of concern demonstrated no discrete cystic or solid abnormalities (Figure 3). Unremarkable parenchyma was observed throughout the entire region. However, given the extent of edema in the left breast and associated pain, bilateral total capsulectomy was performed for a fourth time.
Breast tissue obtained during surgery was sent for pathologic evaluation. The left capsulectomy specimen revealed a thickened fibrous capsule with chronic inflammation, consisting of small lymphocytes, eosinophils, plasma cells, and macrophages. The luminal surface of the breast capsule showed fibrin deposition with a thin row of highly atypical cells.
The atypical cells were large and pleomorphic, with hyperchromatic nuclei and occasional prominent nucleoli noted along with abundant clear to slightly eosinophilic cytoplasm (Figure 4). Immunohistochemical analysis demonstrated strong CD30, CD43, and MUM1 expression, while EMA was weakly positive (Figure 5). The cells did not express ALK (Figure 5), CD20, CD79a, or estrogen receptor. The overall morphology and immunohistochemical profile were diagnostic for breast implant-associated anaplastic large cell lymphoma.
A bone marrow evaluation, including flow cytometry studies, showed unremarkable trilineage hematopoiesis without evidence of involvement by lymphoma or metastatic malignancy. Cytogenetic examination of the bone marrow revealed twenty metaphase cells with a normal female diploid karyotype with no consistent numerical or structural chromosome aberrations. Computed tomography (CT) scans of the neck, chest, abdomen, and pelvis with IV and oral contrast were essentially negative for any malignancy or lymphadenopathy except for the noted fluid collection in the left breast measuring 12.9 × 2.8 × 10.1 cm (Figure 6). Whole body positron emission tomography-computed tomography (PET/CT) scan with fluorodeoxyglucose (FDG) radiotracer revealed increased FDG uptake along the anterior chest wall, slightly greater on the right than the left, with a maximum SUV of 4.7 and 4.1, respectively. No other area of increased FDG uptake was noted. Given the in situ subtype of ALCL noted in our patient, she underwent capsulectomy with no other local or systemic therapy. She remains clinically well after 12 months of follow-up under close surveillance with our clinic.
Discussion
Non-Hodgkin lymphoma of the breast is exceedingly rare; the majority diagnosed are of B-cell origin including diffuse large B-cell lymphoma, extranodal marginal zone lymphoma, follicular lymphoma, primary effusion lymphoma, and lymphoplasmacytic lymphoma [5][6][7].
Peripheral T-cell lymphoma (PTCL) of the breast is less frequently reported and represents only 10% of all breast lymphomas. In breast implant patients, >90% of these are ALK-negative ALCL, compared to 37% in non-breast implant patients [1,8]. To date, there are more than 300 reported cases of BIA-ALCL worldwide; however, only about 130 report pathologic markers, the majority of which were in the United States (67.4%) [9,10]. The US Food and Drug Administration (FDA) estimated the incidence of BIA-ALCL to be 0.6-1.2 per 100,000, based on reported cases of BIA-ALCL among an estimated 5-10 million women with breast implants [11].
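The quoted incidence range follows from simple arithmetic; the Python sketch below reproduces it, with the number of cases treated as a hypothetical input rather than a figure taken from the FDA report.

def incidence_per_100k(cases, population):
    # Crude incidence proportion expressed per 100,000 people.
    return cases / population * 100_000

assumed_cases = 60                                    # hypothetical case count
low = incidence_per_100k(assumed_cases, 10_000_000)   # about 0.6 per 100,000
high = incidence_per_100k(assumed_cases, 5_000_000)   # about 1.2 per 100,000
print(f"{low:.1f}-{high:.1f} per 100,000")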
Anaplastic large T-cell lymphoma belongs to the spectrum of CD30+ lymphoproliferative disease and can manifest as either cutaneous or systemic disease. There are two known subtypes of ALCL of the breast: (a) in situ, in which disease proliferation is confined to the capsule and is often associated with seroma, as in our patient, and (b) infiltrative, mostly associated with a tumor mass with cells infiltrating the capsule and adjacent tissues [2]. Of the reported cases of BIA-ALCL, approximately 50% have seroma involvement, which is thought to be associated with a better prognosis, although some have argued a histogenetic relationship with the inflammatory subtype of BIA-ALCL [2]. The most common presenting symptom is unilateral swelling related to periprosthetic fluid collection more than a year after implantation [12,13]. Other symptoms include pain, rash, pruritus, and capsular contracture [13,14]. Patients rarely present with a mass that protrudes from the fibrous capsule, leading to an implant with an irregular texture [13]. BIA-ALCL has been reported in both silicone and saline (either textured or smooth) implants; for instance, out of the 359 cases of BIA-ALCL reported to the US FDA in 2017, 28 of the cancers were in women who received breast implants with smooth surfaces, whereas 203 were in women who had breast implants with a textured surface [15]. Inflammatory TH17 T-cells are found in greater numbers in textured compared to smooth breast implants [9]; however, no causal link between the type of implant and ALCL has been established. Chronic inflammation within the capsule is believed to be the cause of ALCL [16].
Histologically, ALCL may present as epithelioid-like, mimicking poorly differentiated breast carcinoma [12,13]; thus, accurate immunohistochemical and histopathological evaluation is necessary. Usually, BIA-ALCL has uniform expression of CD30 with atypical cytology and is cytokeratin-negative [8]. Cells are large and pleomorphic with dispersed chromatin and multiple or single prominent nucleoli, and have eosinophilic to amphophilic cytoplasm [9,13]. Hallmark cells can occasionally be observed, with horseshoe- or kidney-shaped nuclei and a paranuclear eosinophilic region [5,13]. Cytogenetically, about two-thirds of BIA-ALCL present with clonal rearrangement of the T-cell receptor gene [17]. Imaging of affected breasts often shows an effusion surrounding the implant with or without a mass [13]. Overall, the lymphoma cells in BIA-ALCL histologically and morphologically resemble those of ALK-negative systemic ALCL. Despite these similarities, the clinical outcome of BIA-ALCL can differ greatly from that of systemic ALCL. A recent report by Laurent et al. [2] indicates that systemic ALCL has an aggressive clinical course closer to the infiltrative subtype of BIA-ALCL than to the in situ subtype. For instance, 2-year overall survival of systemic ALCL and BIA-ALCL are 48% and 52.5%, respectively, whereas the in situ subtype has >95% survival at 2 years [12,18].
Proposed therapeutic approaches for patients with BIA-ALCL have ranged from surgery with or without standard chemotherapy and with or without radiation to more recent targeted therapy. Gidengil et al. elaborated the use of all these treatment options in a review of 54 cases of BIA-ALCL in which 57% were treated with standard chemotherapy treatment for non-Hodgkin lymphoma including cyclophosphamide, hydroxydaunorubicin, vincristine, and prednisone (CHOP) with or without other chemotherapy agents, 48% received radiation therapy mostly to the chest wall, and 11% received stem cell transplants [14]. Agents such as etoposide have also been reported in treatment therapies for BIA-ALCL [19].
Due to the indolent course of in situ disease, capsulectomy alone without aggressive chemotherapy has been suggested as a more appropriate approach in those patients with disease confined to the capsule [9,16]. In these patients, removal of implants and capsulectomy treatment alone have favorable outcomes, with the mean duration of remission approximately 16 months [12]. Disease recurrence has also been reported and may present as either localized or metastatic [14]. A more aggressive approach is recommended for patients with the infiltrative subtype of BIA-ALCL. Patients with positive regional lymph node involvement at diagnosis have a higher rate of recurrence, and nodal and/or systemic involvement is often the cause of death [12]. In cases of lymph node involvement with cytogenetic abnormalities, capsulectomy followed by CHOP plus etoposide upon relapse has been suggested [20]. Patients who present with a distinctive mass may have a worse prognosis, as this oftentimes indicates the infiltrative subtype of ALCL, and require aggressive treatment including chemotherapy and radiation therapy [8,13]. Other factors such as staging at the time of diagnosis should also be considered. Brody et al. reported nine deaths in patients with BIA-ALCL even after repeated therapies and noted that four out of the nine deaths presented with a mass [9]. Recently, targeted therapy has shown encouraging results. More than 90% of ALCL overexpress the CD30 antigen, thus making this a favorable target for future drug designs. The approval of brentuximab-vedotin (BV), a CD30-specific monoclonal antibody conjugated to the tubulin toxin monomethyl auristatin E (MMAE) [21], provides a promising therapy for patients who do not respond to conventional chemotherapy or salvage high-dose chemotherapy and stem cell transplantation [19,22]. The proposed mechanism of action of BV involves MMAE binding to the CD30 receptor and internalization into the cell, where it induces growth arrest and apoptosis [21]. BV was approved for treatment of Hodgkin lymphoma (HL) and ALCL unresponsive to previous treatment. Peripheral sensory neuropathy is the most common side effect, which has been shown to be dose dependent and partially reversible following dose reduction or treatment cessation. Other side effects include nausea, fatigue, pyrexia, diarrhea, rash, constipation, and neutropenia [21]. Recent phase II trials reported overall response rates of 75% in patients with HL and 86% in patients with systemic ALCL that have relapsed or were unresponsive to previous treatments. The complete response rates were 34% and 57% for patients with HL and ALCL, respectively [22]. BV has also been shown to be effective as a first-line treatment. Oregel et al. reported successful treatment of a critically ill patient with ALK-negative ALCL involving the axillary lymph nodes [23]. The lack of complete response in some patients could be due to development of resistance to BV. Loss of CD30 expression following treatment with BV has been noted in 2 cases of ALK-negative ALCL [3,4]. Downregulation of CD30 has also been observed in resistant ALCL cell lines [22], supporting this mechanism in the development of resistance to BV. Currently, clinical trials assessing the use of BV with CHOP or CHP (CHOP without vincristine) have shown promising efficacy with tolerable toxicities in CD30+ PTCL [24].
Thus, BV could potentially be an option for treatment of aggressive BIA-ALCL refractory to chemotherapy, or even as first-line treatment.
Apart from anti-CD30 immunotherapy, emerging studies are revealing other possible targets in patients with ALCL, especially those with ALK positivity. For instance, a recent study by Laimer et al. showed that high expression of platelet-derived growth factor receptor (PDGFR a/b) was observed in a mouse model affected by human large T-cell lymphoma. Their study revealed that a combination of standard chemo/immunotherapy plus anti-PDGFR therapy such as imatinib resulted in complete remission in patients with relapsed ALCL after autologous transplantation [25]. Although still in the early stage of research, these results offer great potential for patients with the more aggressive variant of ALCL, especially those with ALK expression. Currently, a clinical trial (a window-of-opportunity trial) investigating the therapeutic impact of combination treatment with anti-CD30 plus imatinib is ongoing [26].
Conclusion
BIA-ALCL is a rare breast lymphoma that has both indolent (in situ) and aggressive (infiltrative) subtypes. A tumor mass at presentation could be used as a marker of the more aggressive type requiring standard chemotherapy with or without radiation plus capsulectomy. Although the prognosis for patients with the in situ disease subtype is excellent, the infiltrative subtype has a prognosis similar to systemic ALCL. Early histopathologic diagnosis is crucial to initiating the right treatment course. Patients should continue close surveillance following completion of treatment to monitor for disease recurrence.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.
Authors' Contributions
Daniel E. Ezekwudo, Tolulope Ifabiyi, Bolanle Gbadamosi, and Zhou Yu prepared the literature review and manuscript drafts. Kristle Haberichter and Mitual Amin obtained and analyzed the pathology slides. Kenneth Shaheen is the operating surgeon. Michael Stender and Ishmael Jaiyesimi provided critical review of the manuscript. All authors read and approved the final manuscript.
|
2018-04-03T02:59:09.313Z
|
2017-10-31T00:00:00.000
|
{
"year": 2017,
"sha1": "2406fa27421250b5f7c257b6a336464630a44091",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/crionm/2017/6478467.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fcf61ef91cebab165d26e67c4475ac0aed0a8b3f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
18591849
|
pes2o/s2orc
|
v3-fos-license
|
On the well-posedness of the nonlocal boundary value problem for elliptic-parabolic equations
The abstract nonlocal boundary value problem
−d²u(t)/dt² + sign(t) Au(t) = g(t), 0 ≤ t ≤ 1,
du(t)/dt + sign(t) Au(t) = f(t), −1 ≤ t ≤ 0,
u(1) = u(−1) + µ
for the differential equation in a Hilbert space H with the self-adjoint positive definite operator A is considered. The well-posedness of this problem in Hölder spaces without a weight is established. The coercivity inequalities for solutions of the boundary value problem for elliptic-parabolic equations are obtained.
First of all, let us give some estimates that will be needed below.
Lemma 1.2 [37]. For 0 < α < 1 the norms of the spaces E_α(A) satisfy the estimates given in [37].

Lemma 1.3. For 0 < α < 1 the following estimates hold for e^{−A} in the H → E_α(A) norm. Here C([a, b], H) stands for the Banach space of all continuous functions ϕ(t) defined on [a, b] with values in H, equipped with the norm ||ϕ||_{C([a,b],H)} = max_{a ≤ t ≤ b} ||ϕ(t)||_H. Then the following estimates hold, where M does not depend on α, f(t) and g(t).
Proof. Using estimates (1.2)-(1.3), we obtain the required bounds on Ae^{−(s+z)A} for all z > 0 and g(t) ∈ C^α([0, 1], H), and the analogous bounds for f(t) for all z > 0 and all s > 0.

A solution of problem (1.1) defined in this manner will from now on be referred to as a solution of problem (1.1) in the space C(H). We say that the problem (1.1) is well-posed in C(H) if there exists a unique solution and the following coercivity inequality is satisfied, where M does not depend on µ, f(t) and g(t).
In fact, inequality (1.23) does not, generally speaking, hold in an arbitrary Hilbert space H and for the general unbounded self-adjoint positive definite operator A. Therefore, the problem (1.1) is not well-posed in C(H) [8]. The well-posedness of the boundary value problem (1.1) can be established if one considers this problem in certain spaces F(H) of smooth H-valued functions on [−1, 1]. As in the case of the space C(H), we say that the problem (1.1) is well-posed in F(H) if the following coercivity inequality is satisfied, where M does not depend on µ, f(t) and g(t).
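For orientation, a coercivity inequality of the kind referred to above typically takes a form such as

\left\| \frac{d^{2}u}{dt^{2}} \right\|_{F([0,1],H)} + \left\| \frac{du}{dt} \right\|_{F([-1,0],H)} + \left\| Au \right\|_{F([-1,1],H)} \le M \left( \| g \|_{F([0,1],H)} + \| f \|_{F([-1,0],H)} + \| \mu \|_{H} \right),

where the norms on each subinterval correspond to the elliptic and parabolic parts of the equation. This display is only an illustrative form; in particular, the term involving µ may appear in a different norm in the authors' precise statement.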
In paper [41] the well-posedness of problem (1.1) in Hölder spaces C^{α,α}([−1, 1], H), 0 < α < 1, with a weight was established. The coercivity inequalities for the solution of boundary value problems for elliptic-parabolic equations were obtained. The first order of accuracy difference scheme for the approximate solution of the nonlocal boundary value problem (1.1) was presented. The well-posedness of this difference scheme in Hölder spaces with a weight was established. In applications, the coercivity inequalities for the solution of the difference scheme for elliptic-parabolic equations were obtained.
Then the boundary value problem (1.1) is well-posed in a Hölder space C^α(H) and the following coercivity inequality holds, where M does not depend on α, f(t), g(t) and µ.
Since the operator has an inverse, the required representation follows. Second, we will establish estimate (1.25). It is based on the estimates for the solution of an inverse Cauchy problem (1.27), on the E_α(A) estimates for the solution of the boundary value problem (1.26), and on the E_α(A) estimates for Au(1) − g(1) for the solution of the boundary value problem (1.1). Estimates (1.33) and (1.34) were established in [9] and [10]. Now, the first step is to establish (1.35); using (1.32) and the estimates above, we obtain it. The second step is to establish (1.36); using (1.32), we obtain the corresponding bound in terms of f(0) + g(0).
First, the mixed boundary value problem for the elliptic-parabolic equation generated by the investigation of the motion of gas on the nonhomogeneous space is considered (see [6] and [40]). Problem (2.1) has a unique smooth solution u(t, x) for smooth a(x) ≥ a > 0 (x ∈ (0, 1)). Here M does not depend on α, f(t, x) and g(t, x).
The proof of Theorem 2.1 is based on the abstract Theorem 1.5 and the symmetry properties of the space operator generated by the problem (2.1).
The proof of Theorem 2.2 is based on the abstract Theorem 1.5, the symmetry properties of the space operator A generated by the problem (2.2), and the following theorem on the coercivity inequality for the solution of the elliptic differential problem in L2(Ω).
Here the data are smooth functions and δ = const > 0. This allows us to reduce the mixed problem (2.1) to the nonlocal boundary value problem (1.1) in a Hilbert space H = L2[0, 1] with a self-adjoint positive definite operator A defined by (2.1).

Theorem 2.1. The solutions of the nonlocal boundary value problem (2.1) satisfy the coercivity inequality.
Theorem 2.3. For the solutions of the elliptic differential problem
Σ_{r=1}^{n} (a_r(x) u_{x_r})_{x_r} = ω(x), x ∈ Ω, (2.3)
u(x) = 0, x ∈ S,
the following coercivity inequality holds [36]:
Σ_{r=1}^{n} ||u_{x_r x_r}||_{L2(Ω)} ≤ M ||ω||_{L2(Ω)}.
A function u(t) is said to be a solution of problem (1.1) in F(H) if it is a solution of this problem in C(H) and the functions u′′, u′ and Au belong to F(H).
|
2019-04-17T15:40:14.422Z
|
2011-01-01T00:00:00.000
|
{
"year": 2011,
"sha1": "9880ac5e63339fc8ca8627c683faed4e032f4a0e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.14232/ejqtde.2011.1.49",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "d79ad51d72f2953896017b5eba898cfc28e5319a",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
209180317
|
pes2o/s2orc
|
v3-fos-license
|
Irbesartan in Marfan syndrome (AIMS): a double-blind, placebo-controlled randomised trial
Summary Background Irbesartan, a long acting selective angiotensin-1 receptor inhibitor, in Marfan syndrome might reduce aortic dilatation, which is associated with dissection and rupture. We aimed to determine the effects of irbesartan on the rate of aortic dilatation in children and adults with Marfan syndrome. Methods We did a placebo-controlled, double-blind randomised trial at 22 centres in the UK. Individuals aged 6–40 years with clinically confirmed Marfan syndrome were eligible for inclusion. Study participants were all given 75 mg open label irbesartan once daily, then randomly assigned to 150 mg of irbesartan (increased to 300 mg as tolerated) or matching placebo. Aortic diameter was measured by echocardiography at baseline and then annually. All images were analysed by a core laboratory blinded to treatment allocation. The primary endpoint was the rate of aortic root dilatation. This trial is registered with ISRCTN, number ISRCTN90011794. Findings Between March 14, 2012, and May 1, 2015, 192 participants were recruited and randomly assigned to irbesartan (n=104) or placebo (n=88), and all were followed for up to 5 years. Median age at recruitment was 18 years (IQR 12–28), 99 (52%) were female, mean blood pressure was 110/65 mm Hg (SDs 16 and 12), and 108 (56%) were taking β blockers. Mean baseline aortic root diameter was 34·4 mm in the irbesartan group (SD 5·8) and placebo group (5·5). The mean rate of aortic root dilatation was 0·53 mm per year (95% CI 0·39 to 0·67) in the irbesartan group compared with 0·74 mm per year (0·60 to 0·89) in the placebo group, with a difference in means of −0·22 mm per year (−0·41 to −0·02, p=0·030). The rate of change in aortic Z score was also reduced by irbesartan (difference in means −0·10 per year, 95% CI −0·19 to −0·01, p=0·035). Irbesartan was well tolerated with no observed differences in rates of serious adverse events. Interpretation Irbesartan is associated with a reduction in the rate of aortic dilatation in children and young adults with Marfan syndrome and could reduce the incidence of aortic complications. Funding British Heart Foundation, the UK Marfan Trust, the UK Marfan Association.
Introduction
Marfan syndrome is a dominantly inherited disorder of connective tissue caused by mutations in the gene that encodes fibrillin-1. 1 Cardiovascular complications, including aortic root dilatation, dissection, and rupture, are the leading cause of morbidity and mortality. 2 β blockers have been advocated to reduce the rate of aortic root dilatation in people with Marfan syndrome. 3,4 Experimental models of Marfan syndrome suggest that angiotensin-II type 1 receptor blockers (ARBs) can alter biological pathways, including excessive TGF-β signalling, that might contribute to the pathogenesis of aortic complications, [5][6][7][8] a finding that is supported by observational data in clinical studies. 9 Randomised trials in Marfan syndrome have compared the effects of the ARB losartan with either β blockers or control (where standard medical therapy could include β blockers) on aortic dilatation [10][11][12][13][14][15] without clear evidence of benefit. Other ARBs, such as irbesartan, might have greater bioavailability and a longer half-life than losartan with more potent antihypertensive effects. We aimed to determine the effects of the ARB irbesartan on the rate of aortic dilatation in children and adults with Marfan syndrome.
Study design and participants
The design and methods for the Aortic Irbesartan Marfan Study (AIMS) study have previously been reported. 16 Briefly, AIMS was an investigator-led, placebo-controlled, double-blind randomised trial done at 22 centres with experience of managing Marfan Syndrome in the UK. The study protocol was approved by the UK National Research Ethics Committee, participating institutions, and relevant regulatory authorities. All participants, or their legal guardians in the case of children, provided written informed consent. The study complies with the principles of the Declaration of Helsinki.
Individuals were eligible for inclusion if they were aged between 6 and 40 years and had clinically confirmed Marfan syndrome using the revised Ghent diagnostic criteria 17 and an aortic Z score of more than zero on baseline echocardiography. Individuals were excluded if they had undergone cardiac or aortic surgery or if this was planned, an aortic diameter of at least 4·5 cm, haemodynamically severe valve disease, a clear therapeutic indication or contraindication for ARB, or heart failure or they were pregnant. Individuals with potential for pregnancy could be enrolled if they were using a reliable means of contraception. Participants continued all their routinely indicated treatments. β-blocker use was not mandated by this protocol and was used at the discretion of the treating physician.
Randomisation and masking
To ensure tolerability, all participants initially received open-label irbesartan 75 mg once daily for 4 weeks before randomisation. Participants were then randomly assigned, using a web-based system, 1:1 to irbesartan 150 mg once daily for 4 weeks, titrated up to 300 mg once daily if tolerated and weight was more than 50 kg, or matching placebo for up to 5 years. The randomisation sequence was generated with randomly varying block sizes of 2 or 4 and stratified by centre, participant's age and concurrent β-blocker use. Irbesartan and matching placebo were provided, in bulk, by Sanofi (Reading, UK) and drug packaging, storage and supply by Brecon Pharmaceuticals (Hay-on-Wye, UK).
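Although the trial used a dedicated web-based system, the allocation scheme described above (1:1, permuted blocks of randomly varying size 2 or 4 within each stratum) can be sketched in a few lines of Python for illustration; all names and the seed are hypothetical.

import random

def allocation_sequence(n_participants, seed=0):
    # Allocation list for one stratum (e.g. one centre / age band / beta-blocker cell).
    # Balanced blocks whose size varies randomly between 2 and 4 are concatenated;
    # truncating to n_participants may leave the final block incomplete.
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block_size = rng.choice([2, 4])
        block = ["irbesartan", "placebo"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

print(allocation_sequence(10, seed=42))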
Procedures
At entry to the trial, before the open-label run-in phase, each participant had height, weight, blood pressure, heart rate, baseline electrocardiograph, current medication, and renal function studies recorded. If patients tolerated study medication and were willing to proceed with the study, full blood count and renal function were measured at baseline (at the time of randomisation), 1 month after entering the trial, and annually thereafter. In patients who provided consent, samples for fibrillin-1 mutation analysis were obtained if not taken already.
Transthoracic echocardiograms were acquired on an annual basis by experienced echocardiographers, according to a standardised research protocol and training
Research in context
Evidence before this study
The routine use of angiotensin receptor blockers to reduce aortic dilatation in Marfan syndrome has been controversial. Literature searches using PubMed (key words "Marfan syndrome", "randomised", and "angiotensin receptor blocker") with no language or date restrictions (last search June 20, 2018) and discussion with other researchers were undertaken to identify all randomised controlled trials of angiotensin receptor blockers in Marfan syndrome. The largest randomised trial tested losartan against β blockers with similar effects on aortic dilatation, and the other two larger trials tested losartan against control with conflicting results. No trials of irbesartan in Marfan syndrome have been done.
Added value of this study
This AIMS trial shows that routine use of the angiotensin receptor blocker irbesartan was well tolerated and is associated with lower rates of aortic dilatation in children and young adults with Marfan syndrome compared with placebo. These results help to inform clinicians and patients about the use of irbesartan in Marfan syndrome.
Implications of all the available evidence
The results of the AIMS trial add to our knowledge of the effects of angiotensin receptor blockers in reducing the rate of aortic dilatation compared to placebo. Evidence suggests that this is a class effect and an individual patient data meta-analysis is in progress.

Figure 1: Study profile. *These adverse events were not serious. †Four did not attend, four withdrew consent, three non-compliance to medication, and two investigator decisions. ‡Nine did not attend, five withdrew consent, five non-compliance to medication, three investigator decisions, and three were ineligible.

provided by the core echocardiography laboratory, including assessment of inter-observer and intra-observer variability. Each echocardiogram was transferred in DICOM format to the core echocardiography laboratory at the John Radcliffe Hospital, University of Oxford, Oxford, where a single experienced investigator (XYJ) supervised the overall image analysis process and interpretation for the primary outcome data. The core echocardiography laboratory was blinded to study drug allocation to eliminate reading bias in echo measurement. Strict quality control processes were applied during the analysis according to the guidelines from the American Society of Echocardiography. 18 From the parasternal long-axis view, aortic root diameter was measured using the inner-edge to inner-edge technique during peak systole at the level of the sinus of Valsalva with the tip of the open cusps at ninety degrees to the direction of flow (primary endpoint; appendix p 9) and also at end diastole. Additional aortic diameter measurements were made and will be reported separately. To adjust for somatic growth, aortic Z score was calculated based on aortic sinus diameter and body surface area as previously described by Devereux and colleagues, 19 and the Pettersen method 20 was used as a sensitivity analysis.
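As an illustration of the Z-score adjustment described above, the generic calculation has the shape sketched below; the regression coefficients and residual SD are hypothetical placeholders, not the published Devereux or Pettersen values.

def predicted_sinus_diameter_cm(bsa_m2, intercept=1.5, slope=1.0):
    # Placeholder linear model: predicted sinus of Valsalva diameter (cm) from BSA (m^2).
    return intercept + slope * bsa_m2

def aortic_z_score(observed_cm, bsa_m2, residual_sd_cm=0.3):
    # Z score = (observed - predicted) / residual SD of the reference model.
    return (observed_cm - predicted_sinus_diameter_cm(bsa_m2)) / residual_sd_cm

# Example: a 3.44 cm aortic root in a participant with BSA 1.7 m^2
print(round(aortic_z_score(3.44, 1.7), 2))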
Outcomes
The primary outcome measure was the absolute change in aortic root diameter per year, measured by transthoracic echocardiography. Secondary outcomes reported here were the annual rate of change in the Z score of aortic root diameter, the occurrence of clinical events including aortic dissection, surgery for aortic dilatation, and death, and the incidence of adverse and serious adverse events. 16 Additionally, effects of irbesartan on systolic and diastolic blood pressure over the follow-up period are reported. TGF-β samples were obtained from a subset of patients who provided consent at baseline and 1 year. Samples were analysed by use of ELISA and full methods are provided in the appendix. Serious adverse events were reported by investigators on specific forms and reviewed by adjudicators blinded to treatment allocation. Reasons for study withdrawals and treatment discontinuation and possible side-effects of treatment were also documented.
Statistical analysis
On the basis of existing information on aortic root dilatation in Marfan syndrome, the original sample size was set up to detect a 0·5 mm reduction in the aortic root diameter on irbesartan compared with placebo with an SD of 1·8 mm. 490 participants were anticipated to detect this difference, assuming up to a 20% drop-out rate and 80% power. Recruitment was slower than expected and trial recruitment was terminated at 192 participants. The loss of power was mitigated by extending follow-up to a maximum of 5 years and statistical analyses for the rate of aortic dilatation that accounted for repeated measurements and missing outcome data.
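The arithmetic behind such a calculation can be reproduced with the standard two-sample formula for a continuous outcome; the sketch below is illustrative only, and gives a total close to (though not exactly) the 490 reported, the remainder presumably reflecting additional design assumptions.

from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    # Two-sided two-sample comparison of means with equal variances.
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2

n = n_per_group(delta=0.5, sd=1.8)      # roughly 204 per group
total = 2 * n / (1 - 0.20)              # inflate for 20% drop-out, roughly 509 in total
print(round(n), round(total))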
The primary analysis was the comparison of the change in aortic root diameter per year between the irbesartan and placebo groups. The population in the intention-to-treat analysis for the primary and secondary outcomes included all randomly assigned patients according to their original treatment allocation. An additional analysis of the primary outcome was repeated including patients only up to the point that they were known to have stopped taking the trial treatment (per-protocol analysis). The mean annual rate of aortic root dilatation in each treatment group and the absolute difference in these rates was estimated using a linear mixed effects model for repeated measures. 21 This model accounts for the baseline measure of aortic root diameter, while also incorporating all follow-up measurements, and enables the inclusion of participants with missing measurements. The model used in the primary analysis assumes that aortic root diameter changes linearly over time, and a sensitivity analysis that relaxes this assumption, estimating the treatment difference at each timepoint (from 1 to 5 years), was also done. 22 The model includes a continuous variable for time (in years), random intercepts and slopes, a linear interaction between time and treatment group, and assumes no effect of treatment at baseline. An unstructured variance-covariance matrix was used to allow for correlations between the random intercepts and slopes. The model was fitted using restricted maximum likelihood. Patients were followed for between 2 and 5 years and, like all linear mixed models, the model assumes that when data on aortic root diameter were missing, they were missing at random. Further details of the model are provided in the appendix (p 3). Similar models were used to examine the effect of irbesartan compared with placebo on the annual rates of change in aortic diastolic diameter, aortic Z score, and systolic and diastolic blood pressure, and differences in TGF-β at 1 year. A small number of prespecified subgroup analyses were done for age, gender, blood pressure, β-blocker use, and aortic Z score by incorporating appropriate interaction terms. TGF-β was analysed in a subgroup of patients and was an exploratory post-hoc analysis. All statistical analyses were done on the intention-to-treat principle using Stata IC version 15.1.
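A minimal sketch of this type of model, using the statsmodels mixed-effects interface, is shown below; the data file and column names are hypothetical, and the specification (random intercepts and slopes per participant, a linear time term, and a time-by-treatment interaction with no treatment main effect) is only an approximation of the analysis described above.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aims_long_format.csv")   # hypothetical long-format data set

model = smf.mixedlm(
    "aortic_root_mm ~ years + years:treatment",   # no treatment effect at baseline
    data=df,
    groups=df["participant_id"],
    re_formula="~years",                          # random intercepts and slopes
)
fit = model.fit(reml=True)                        # restricted maximum likelihood
print(fit.summary())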
This trial is registered with ISRCTN, number ISRCTN90011794.
Role of the funding source
The funder of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.
Results
Between March 14, 2012, and May 1, 2015, 192 participants were recruited; 104 were randomly assigned to irbesartan and 88 to placebo (figure 1). Participants were followed for a median of 4 years (IQR 3-5), with the final patient visit on March 12, 2018. Baseline characteristics were well balanced between the groups (table 1); participants had a median age of 18 years (12-28), 99 (52%) were female, mean blood pressure was 110/65 mm Hg (SDs 16 and 12), and 108 (56%) participants were taking β-blocker treatment at baseline. Of the 149 patients who agreed to fibrillin-1 gene mutation analysis, 138 (93%) were confirmed as having positive mutations. More withdrawals occurred in the irbesartan group than in the placebo group, although they did not appear to be related to any side-effects of treatment. Among the participants remaining in follow-up, discontinuation of study medication did not differ between the two groups (appendix p 5). A dose of 300 mg of irbesartan was achieved in 80% of participants with no apparent difference related to age or weight (appendix pp 5-6).
There was no evidence for an interaction in any of the prespecified subgroup analyses. Specifically, there was no evidence of interaction with concurrent β blocker use. Irbesartan could have greater effects in younger participants and those with a higher aortic Z score at baseline, but no statistical evidence supported this (figure 3). TGF-β concentrations from baseline to 1 year for 99 patients did not differ between groups (appendix p 7).
In the placebo group, both systolic and diastolic blood pressure consistently increased during the study, whereas blood pressure was reduced in the irbesartan group. The difference at 1 year (irbesartan compared with placebo) was -6·3 mm Hg (95% CI -9·8 to -2·9) in systolic blood pressure and -3·6 mm Hg (-6·5 to -0·8) in diastolic blood pressure which was maintained throughout the trial (appendix pp 8, 10).
The numbers of patients reporting at least one serious adverse event were 21 (24%) in the irbesartan group
Discussion
This study provides evidence of a reduction in the rate of aortic root dilatation in children and young adults with Marfan syndrome treated with irbesartan compared with placebo, over a 5-year observation period. The treatment was well tolerated, and no differences in adverse events due to suspected side-effects were observed between irbesartan and placebo groups.
The effects of irbesartan on reduction of aortic dilatation appear to occur early and to be maintained over time compared with placebo. These effects are mirrored by effects on blood pressure, suggesting a possible association. The apparent increase in aortic diameter after year 4 in both groups might be an artifact due to the reduced numbers available for follow-up during that period, although the observed reduction in aortic root diameter in the irbesartan group remained.
Our study had a robust double-blind design with use of a placebo control, an independent core laboratory blinded to treatment to evaluate echocardiographic endpoints, high proportions of children, more than 50% treatment with β blockers and good compliance to trial treatment. We provide evidence of a reduction in the rate of aortic dilatation in participants receiving irbesartan that was apparent at 1 year and maintained for the duration of the study. In our cohort of both children and adults, we used the rate of change in the absolute diameter as the primary endpoint, because it was a more direct measure. Our findings were similar for measurements of diastolic aortic root diameter and Z score, which take somatic growth over time into account. There has been discussion about the most appropriate method to estimate Z score in the Marfan population. 23 These methods for estimating aortic Z score have been primarily developed and validated in a population that shows somatic growth and might not be reliable in adults. We used the Devereux method for the primary Z score analysis and the Pettersen method for sensitivity analysis, which yielded very similar results for differences in rate of change between irbesartan and placebo.
Previous randomised studies of ARBs in Marfan syndrome have all assessed the impact of losartan against either β blocker, placebo-control or open control. [10][11][12][13][14][15] Groenink and colleagues 13 reported a beneficial effect of open label losartan over 3 years of about 0·2 mm per year which is similar to our finding. The study only included adults with established aortic root dilatation, in contrast to our study, in which half of all participants were younger than 18 years. Milleron and colleagues 12 showed a modest, but not statistically significant, reduction in aortic dilatation with losartan compared with placebo in 303 patients with Marfan syndrome followed for 3·5 years.
β blockers might not be tolerated in some Marfan patients with asthmatic symptoms and might paradoxically worsen vascular stiffness. 24 β blockers and ARBs are therefore not competitive and if effective, ARBs might be synergistic or additive to standard β-blocker therapy. In the US Pediatric Network Trial of losartan versus atenolol among 608 participants with Marfan syndrome aged 6 months to 25 years, 10 the rate of aortic dilatation did not differ between the two groups. The Pediatric Heart Network Trial is the largest randomised controlled trial in Marfan syndrome and suggests that ARBs and β blockers might have similar effects on aortic dilatation. 10 In the AIMS trial, β blockers were provided according to clinical need and about half the patients in each group were on β blockers at baseline. We used a placebo control rather than a β blocker, although β blockers were used according to clinical indication in more than half the patients. We found no evidence of interaction of irbesartan effect with β blockers or in any prespecified subgroup, although the study was underpowered in this regard.
Unlike other studies, AIMS assessed the effects of irbesartan in Marfan syndrome. Irbesartan is a selective angiotensin type-1 receptor blocker with greater bioavailability and a longer half-life than losartan (11-15 h for irbesartan vs 6-9 h for losartan), with more powerful antihypertensive effects. 25,26 Irbesartan might also have effects on the pathophysiology of aortic disease, including TGF-β pathways. 7 Another possibility is that effects on aortic dilatation reflect reduced blood pressure. Milleron and colleagues 12 showed that losartan resulted in a similar reduction in systolic blood pressure compared with the control, as we observed in the AIMS trial, without a clear effect on the rate of aortic dilatation. This suggests other pathways might be involved in the clinical effect, although patients were also on average 10 years older than those in this study. We also found a reduced rate of change of diastolic aortic diameter, which is likely to be less dependent on ambient blood pressure. Subgroup analyses showed numerically greater reductions with irbesartan in younger participants and those with established aortic dilatation at baseline, but they were not statistically significant and will require confirmation in other studies. About 40% of patients contributed to TGF-β analysis, and the groups did not differ from baseline to 1 year. Effects on reducing aortic dilatation in Marfan syndrome might be a class effect among ARBs, and any differences observed across the trials might be related to patient selection, doses achieved, duration of treatment, and precision of measurement method, which will be further assessed in a planned individual patient data meta-analysis. 27
|
2019-12-11T15:41:47.795Z
|
2019-12-01T00:00:00.000
|
{
"year": 2019,
"sha1": "280cb04fd570fc483d129f521747ad4f655a865d",
"oa_license": "CCBY",
"oa_url": "http://www.thelancet.com/article/S0140673619325188/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "280cb04fd570fc483d129f521747ad4f655a865d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
31997120
|
pes2o/s2orc
|
v3-fos-license
|
Cardiovascular response and backward, upward, right push maneuver during laryngoscopy: comparison between CMAC® video laryngoscopy and conventional Macintosh
Copyright © 2017 Authors. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original author and source are properly cited. Arif H.M. Marsaban, Aldy Heriwardito, I G.N.A.D. Yundha, Department of Anesthesiology and Intensive Therapy, Faculty of Medicine, Cipto Mangunkusumo Hospital, Universitas Indonesia, Jakarta, Indonesia. Clinical Research
Laryngoscopy creates painful stimuli, resulting in a cardiovascular response in the form of increased blood pressure and heart rate. Although these responses are transient, they could be problematic for patients with cardiac diseases and intracranial lesions. 1 Higher-dose intravenous opioids and intravenous lidocaine cannot repress the cardiovascular response totally. 2 Inhalation agents used to deepen anesthesia cause a drop in blood pressure, resulting in poor coronary and brain perfusion. Neural blockade in the airway needs special skills and experience due to the difficulty level and high risk of local anesthetic injection into the vessels. Topical anesthesia use in the airway is still under research. 3 Cardiovascular blocker drugs such as bisoprolol or esmolol can be given, but sympathetic activity would be repressed, leading to decreased coronary perfusion. 4 Modification of the laryngoscopy technique and tools lessens the nociceptive stimulation, thus preventing the hemodynamic response. 5,6 The backward, upward, right push (BURP) maneuver is a maneuver to push the larynx from outside, to get a better view of the larynx during laryngoscopy. Laryngoscopy will push the larynx downward (caudal), upward (anterior), and to the left. 7-9 The role of video laryngoscopy has been increasing over the last 10 years, especially for difficult airway management. 10-12 The advantages of CMAC®, such as an increased rate of successful first-attempt intubation, shorter laryngoscopy duration, and lower Cormack-Lehane grade, interest the researchers to compare the cardiovascular responses and BURP maneuver necessity during laryngoscopy between CMAC® video laryngoscopy and the conventional Macintosh.
METHODS
After obtaining approval from the ethics committee of the Faculty of Medicine Universitas Indonesia, Cipto Mangunkusumo Hospital, and consent from patients (No. 878/UN2.F1/ETIK/2015), a randomised, single-blinded, controlled trial was conducted in Cipto Mangunkusumo Hospital, Jakarta, from October to December 2015. The population comprised surgical patients who underwent general anesthesia with endotracheal intubation in Cipto Mangunkusumo Hospital.
The inclusion criteria were adults aged 18-65 years, body mass index (BMI) 18.5-30 kg/m², physical status American Society of Anesthesiologists (ASA) 1-2, and consent to participate in this study. The exclusion criteria were pregnancy, history of cardiac diseases, history of cerebrovascular disease, hypertension, hypotension, tachycardia, bradycardia, use of cardiovascular drugs, suspicion of a difficult airway, increased intracranial pressure, and general anesthesia converted from regional anesthesia. Furthermore, the dropout criteria were patients who moved during laryngoscopy, patients with desaturation or other emergencies, a train-of-four (TOF) score that did not reach 0 with the induction and relaxation doses specified in the study protocol, and a Cormack-Lehane grade other than 1 or 2 at the first laryngoscopy attempt with the BURP maneuver (60 seconds maximum).
Samples were obtained through non-probability consecutive sampling. Subjects were randomized using block randomization with tables. The sample size was calculated using the analytic categorical sample-size formula for two unpaired groups, with the proportion taken from previous research. The subjects were divided into two laryngoscopy groups: the Macintosh blade group and the CMAC® video laryngoscopy group.
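For orientation, the usual unpaired two-proportion formula has the shape sketched below; the proportions shown are hypothetical placeholders, because the values taken from the previous research are not given in the text.

from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    # Two-sided comparison of two independent proportions.
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

print(n_per_group(0.70, 0.30))   # hypothetical proportions of BURP-maneuver need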
The recorded data were name, age, sex, medical record number, height, weight, and ASA status. A vital-signs monitor was placed on the patients on the operating table. Systolic and diastolic blood pressure, mean arterial pressure, and heart rate were recorded at time-1 (T1). Both groups received midazolam 0.05 mg/kgBW and fentanyl 2 mcg/kgBW intravenously, oxygen 80 via face mask, tidal volume 6-8 mL/kgBW, and respiratory rate 12-14 x/minute. Two minutes after midazolam and fentanyl administration, induction with propofol 10 mg/kgBW was done, followed by continued infusion of propofol 10 mg/kg/hour. Atracurium 0.8-1 mg/kgBW was given after the eyelash reflex was lost. After the TOF score was 0, systolic and diastolic blood pressure, mean arterial pressure, and heart rate were recorded at time-2 (T2). Laryngoscopy was performed until Cormack-Lehane grade 1 or 2 visualization of the larynx was achieved, for a maximum of 60 seconds, according to the designated laryngoscopy group. Systolic and diastolic blood pressure, mean arterial pressure, heart rate, and the BURP maneuver were recorded at time-3 (T3).
If laryngeal visualization failed, patients were ventilated by face mask, managed according to the ASA algorithm, and excluded. In cases of emergency, the advanced life support (ALS) and basic life support (BLS) algorithms were followed, and the patients were excluded.
T1-T2 differences were analyzed with the unpaired t-test, and the result was p>0.05 (no significant difference) for every cardiovascular parameter. This result showed that the two groups experienced the same effect of induction and that there was no confounding factor between the two groups. Data were analysed using the Statistical Product and Service Solutions (SPSS) software, with the unpaired t-test and the Mann-Whitney U test as the alternative method. The BURP maneuver was analyzed with the Chi-squared test, with the Fisher exact test as the alternative method.
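The named tests map directly onto SciPy routines; the arrays and counts below are hypothetical placeholders used only to show the calls.

import numpy as np
from scipy import stats

# Hypothetical T2-T3 changes in mean arterial pressure (mm Hg) per group
delta_map_macintosh = np.array([14.2, 11.5, 16.8, 13.0])
delta_map_cmac = np.array([7.1, 6.4, 9.0, 5.8])

t_stat, p_t = stats.ttest_ind(delta_map_macintosh, delta_map_cmac)      # unpaired t-test
u_stat, p_u = stats.mannwhitneyu(delta_map_macintosh, delta_map_cmac)   # non-parametric alternative

# Hypothetical 2x2 counts: BURP needed / not needed by laryngoscope group
burp_table = np.array([[18, 7],
                       [6, 19]])
chi2, p_chi, dof, _ = stats.chi2_contingency(burp_table)                 # Chi-squared test
odds_ratio, p_fisher = stats.fisher_exact(burp_table)                    # Fisher's exact test
print(p_t, p_u, p_chi, p_fisher)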
RESULTS
The research flowchart is presented in Figure 1. Table 1 shows the subjects' characteristics. Cardiovascular responses are shown in Figure 2, which presents the differences between T1-T2 and T2-T3. The T1-T2 difference showed a decreasing trend, meaning that the cardiovascular parameters declined after induction, whereas the T2-T3 difference showed an increasing trend.
Table 2 shows the need for the BURP maneuver in the Macintosh and CMAC® groups. Figure 4 illustrates the conventional Macintosh and CMAC® equipment and their effects on the larynx.
DISCUSSION
This study evaluated cardiovascular parameters during laryngoscopy. The cardiovascular parameters at T1 did not differ between the groups (Table 1), meaning that confounding factors affecting the cardiovascular parameters before and during induction were eliminated through the inclusion and exclusion criteria. Biometric factors (age, height, weight, BMI) could affect the airway anatomy and the difficulty of airway management; thus, the inclusion criteria included age and BMI. ASA physical status was limited to 1 and 2 so that the methods could be applied to all subjects and to lessen preoperative confounding factors (e.g., hypovolemia, cardiovascular drugs, arrhythmia).
Both groups were given the same drug types and doses to eliminate confounding factors. To avoid hypercarbia as a confounding factor of the cardiovascular response, laryngoscopy duration was limited to 60 seconds. 13 The Macintosh group showed a significantly larger change of the parameters in T2-T3 compared to the CMAC® group. This change was induced by pain during the procedure. Laryngoscopy using the Macintosh blade required a laryngeal lift maneuver so that the laryngeal axis was in line with the operator's visual axis. 8,14 Mechanical pain stimulation caused by the blade was sensed by nociceptors in the tongue-base mucosa, vallecula, and anterior epiglottis surface. The neural stimulus traveled to the suprarenal gland, inducing catecholamine release, and thus increased sympathetic activity such as blood pressure and heart rate. 15,16 The difference between Macintosh and CMAC® reflects the different pain stimuli produced by each method. CMAC® has a camera at the end of its blade, and the camera lies almost directly in front of the larynx during laryngoscopy.
Cormack-Lehane grade 1 or 2 can be achieved by only slightly lifting the larynx, in accordance with Noppens' study, 17 which stated that CMAC® provides better laryngeal visualization; therefore, the pain stimulus and catecholamine release were decreased. 17 CMAC® also has a wider field of view than Macintosh (60° vs 15°), 18 helping to achieve better visualization of the larynx and to minimize the pain stimulus, the time for intubation, and the need for the BURP maneuver. 19 Heart rate showed the smallest change compared to the other parameters: the T3 heart rate change in the Macintosh group was 10.29±10.41 bpm and in the CMAC® group was 10.29±10.41 bpm. This might be the result of fentanyl administration 5-6 minutes before laryngoscopy, by which time fentanyl had reached its peak plasma concentration. Fentanyl 2 mcg/kgBW and propofol induction can decrease sinoatrial node frequency. 20 Studies showed that heart rate during induction was 11-27% lower than in the pre-induction state, while it decreased 7-15% as a response to laryngoscopy.
The BURP maneuver was needed less in the CMAC® group than in the Macintosh group, since the camera on the CMAC® blade was almost directly in front of the larynx, facilitating Cormack-Lehane I or II visualization. 17 The BURP maneuver, on the other hand, could create an additional painful stimulus: it exerted pressure on the larynx from the outside at the same time as the blade pressed on the tongue base, vallecula, and anterior epiglottis surface, resulting in simultaneous painful stimuli.
Blood pressure and mean arterial pressure were measured with a non-invasive blood pressure monitor, which requires 20-40 seconds per measurement. This method is a standard monitoring procedure recommended by the ASA; intra-arterial monitoring is more invasive and more expensive. Potential confounding of the cardiovascular parameters arises from the association of the BURP maneuver with these parameters and from the time required to achieve Cormack-Lehane grade 1 or 2 (from the start of laryngoscopy). Since these were not the aims of the study, the corresponding data were not analyzed.
We suggest that further studies measure the blood catecholamine level during laryngoscopy and the BURP maneuver in association with the cardiovascular parameters. Blood catecholamine level increases with painful stimuli, so the pain responses produced by Macintosh and CMAC® could then be compared more directly. A limitation of this study is that the catecholamine level was not measured as a response to laryngoscopy because of the high cost of the laboratory examination; the cardiovascular changes were assumed to be due to the painful stimuli of laryngoscopy.
In conclusion, the cardiovascular response and the need for the BURP maneuver during laryngoscopy were significantly lower with CMAC® video laryngoscopy than with the conventional Macintosh.
Figure 3. Statistical tests for the cardiovascular parameter differences between T2 and T3, representing the cardiovascular response due to laryngoscopy. T1-T2 (not shown) was analyzed with the unpaired t-test, and the result was p>0.05 (no significant difference) for every cardiovascular parameter. This result showed that the two groups experienced the same effect of induction and that there was no confounding factor between the two groups. *p<0.001
Figure 4. Illustration of the conventional Macintosh and CMAC® equipment and their effects on the larynx. 21 The facilitation of laryngeal visualization by CMAC® video laryngoscopy produced a lesser painful stimulus during the laryngoscopy procedure compared with the conventional Macintosh, which resulted in significantly lower cardiovascular responses in the CMAC® group than in the Macintosh group.
|
2017-09-05T02:34:42.670Z
|
2017-08-18T00:00:00.000
|
{
"year": 2017,
"sha1": "f77a99ca1302182d59e659cde5d71d700a71561a",
"oa_license": "CCBYNC",
"oa_url": "https://mji.ui.ac.id/journal/index.php/mji/article/download/1505/1178",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f77a99ca1302182d59e659cde5d71d700a71561a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
127571399
|
pes2o/s2orc
|
v3-fos-license
|
If the Wave Function Collapses Absolutely in the Interaction, How Can the Weird Nature of Particles Be Born in the Interaction? —A Discussion on Quantum Entanglement Experiments
Objectives: Particles can re-exhibit their wave nature after their wave function (or wave packet) collapses (as shown by experiments such as secondary electron diffraction). Is the wave nature recovered, does the collapse process not destroy it, or does the collapse process not exist at all? After the wave function (or wave packet) of a particle collapses, the superposition state should not exist. Methods: If the wave function can be collapsed by measurement and the quantum characteristics cannot be recovered after the collapse, then microscopic particles that reach the free-motion state through interaction can no longer have the quantum characteristics of quantum parallelism, quantum entanglement (QEM), or quantum-state superposition. Findings: Restricted by the conditions in this conditional clause, it is logically impossible to find the quantum entanglement state (QEMS) by experiment (recognizing that the quantum superposition state is a non-real state amounts to recognizing that the state is unobservable). Under the assumption that entangled states (or superposition states) exist and that measurement can destroy entangled states, the related experimental phenomena are interpreted as QEMS. Application: This is clearly a logical circle. Experiments show that non-projective measurement cannot eliminate the diffraction effect of electron rays. *Author for correspondence
Introduction
The authors of 1 published an article in Nature, saying that they used diamond color-center entanglement to complete a loophole-free Bell-inequality verification experiment. The data for this experiment were limited and the statistical confidence was modest (2.1 standard deviations). In addition, there is a rather contradictory statement: the title says there are no loopholes, but the text mentions that no Bell experiment can rule out all local realism. This is actually a true conclusion that is difficult to refute. For the verification of Bell's inequality, it is technically difficult to close all loopholes, and perhaps this is due to irremovable loopholes in the theoretical logic. It is still necessary to discuss the verification of Bell's inequality and the logical thinking behind it. I just want to point out the logical knot in Bell's inequality (and the logical knot in the closely related concept of QEM). This work has not been done before.
The measurement process is a process of interaction. The generation and liberation of micro-particles are inseparable from interaction. On the premise that measurement leads to the disappearance of quantum properties, the quantum properties of microscopic particles could not be formed at the time of their birth. To account for the experimental facts, existing orthodox quantum mechanics needs this view: as long as microscopic particles can move away from their original owner and move freely, they can resume their wave nature (or, as long as a particle moves freely away from its original owner, it has wave-particle duality) 2,3. Along this line of thought, a new question appears: when microscopic particles change from non-free motion to free motion, can quantum characteristics such as quantum parallelism and QEM be restored? I will also discuss this issue in this article.
Under the premise that measurement must lead to the disappearance of quantum weirdness, if all processes of interaction belong to the measurement process, then the strange quantum properties cannot be formed and can only be restored when the particles become freely moving particles. This recovery process is the reverse of wave function (or wave packet) collapse. In other words, if measurement (observation) must cause the superposition state (wave packet or wave function) to collapse, then experimental methods (i.e., observation) cannot be used to prove that the Schrödinger cat state exists. Quantum mechanics scientists believe that quantum superposition states are non-real states, and a non-real state also cannot be found by experiment. These are all logical difficulties in proving the existence of the Schrödinger cat state in quantum mechanics by experimental methods. This is also an insurmountable contradiction in the existing quantum theory. The experimental fact is that particles that are detached from the instrument by interaction have a range of quantum characteristics. If quantum characteristics that have collapsed cannot be recovered, then the strange quantum properties (including the quantum superposition state) should not exist.
The existence of the above problems indicates that, for any claim that the QEMS has been observed, it is highly probable that the claim is fraudulent (unless it is always recognized that non-real states such as the quantum superposition state do not exist). Unfortunately, however, the experimental interpretation of the existence of QEM must rely on two hypotheses: 1. the quantum states of conjugated particles must be superimposed (i.e., entangled; the quantum description is non-local); 2. the superimposed quantum state (wave function) will collapse when it is observed (measured) (i.e., any measurement will change the superimposed quantum state). The interpretation of QEM experimental phenomena is therefore a circular argument: the starting point, or premise, is that the superposition state and the entangled state exist, and the end point, or conclusion, is still the existence of the entangled state.
Bell's inequality (or the CHSH inequality) is derived from the assumption that local-realistic hidden parameters exist and play a role; the notion that hidden parameters are at work is precisely the notion of local realism. It can be seen that the assumption used to derive Bell's or the CHSH inequality contradicts the two hypotheses above (both superposition states and QEMS are non-local, non-real states) [4]. Yet the interpretation of the experimental phenomena used to verify Bell's or the CHSH inequality is based precisely on those two hypotheses being true. Therefore, the experiments that verify Bell's inequality merely confirm that the premise used in establishing Bell's inequality contradicts the criterion used to interpret the experiments. Using experimental methods to verify QEM requires the application of Bell's inequality; however, judging whether Bell's inequality holds requires assuming beforehand that QEM exists (the phenomena need to be interpreted as non-local correlations). This is also a circular argument. It is easy to see that the Bell inequality and its verification experiments therefore have no positive significance: the experiments verifying Bell's inequality can neither confirm that the predictions of quantum mechanics are correct nor prove that Einstein's principle of locality is wrong. In this paper, the weird (strange, or singular) properties of particles mean the quantum characteristics (wave nature, quantum parallelism, quantum state superposition, QEM, non-reality and non-locality) or one or several of them.
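For reference only (this block is added here and is not part of the original argument), the following sketch reproduces the standard quantities that CHSH-type experiments compare: the textbook singlet correlation E(a, b) = −cos(a − b), which reaches |S| = 2√2 at the optimal settings, and a simple local hidden-variable model, which stays at the local-realistic bound of 2.

```python
import numpy as np

# Reference sketch: the CHSH quantity S compared for the quantum singlet
# correlation and for a simple local hidden-variable (LHV) model.

def chsh(E, a, ap, b, bp):
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

E_qm = lambda x, y: -np.cos(x - y)          # textbook singlet correlation

# Standard optimal CHSH settings for the singlet state
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
print("quantum singlet:", abs(chsh(E_qm, a, ap, b, bp)))   # -> 2*sqrt(2) ~ 2.83

# A simple LHV model: each pair carries a hidden angle L; each side outputs
# sign(cos(setting - L)), with the two sides anti-correlated.
rng = np.random.default_rng(0)
L = rng.uniform(0, 2 * np.pi, 200_000)
out = lambda s: np.sign(np.cos(s - L))
E_lhv = lambda x, y: np.mean(out(x) * -out(y))
print("hidden-variable model:", abs(chsh(E_lhv, a, ap, b, bp)))
# -> ~2, the local-realistic bound (up to sampling noise)
```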
Methods and Materials
The method used in this paper is based on the analysis of phenomena and on logical analysis, supplemented by one experiment. The experimental materials were an electron diffractometer, a permanent magnet and an alternator.
Microscopic-Particle Diffraction Experiments Show that the Inference that Measurement Necessarily Causes the Wave Function to Collapse is Incorrect
The wave function has a wavelength, frequency and amplitude. The observed diffraction phenomena of microscopic particles are determined by the properties of the wave function. If the wave function (or wave packet) collapses, no diffraction pattern will be observed. If we believe that any measurement makes the wave function collapse, we should also believe that a particle beam whose wave function (or wave packet) has collapsed can no longer form a diffraction pattern; that is, once the wave function of the microscopic particles has collapsed, diffraction cannot occur. That diffraction can still occur after the particles have been measured indicates that the measurement has not caused the wave function (or wave packet) to collapse. Note that letting the particle beam pass through a slit already amounts to measuring the particles (the instrument exerts an influence on the beam as the particles pass through the slit), and the effect of the slit on the particle beam is much greater than the effect of mere measurement consciousness on the microscopic particles. Many orthodox quantum physicists believe that the consciousness of measurement can also affect the behaviour of microscopic particles and are thus forced to invoke the concept of subjective intervention. The fact of repeated electron diffraction shows even more clearly than the single-slit electron diffraction experiment that measurement does not cause the wave function to collapse. If passing a particle through a slit is a measurement, then the slit should cause the particle's wave function to collapse and a second diffraction could not occur. Yet double-slit diffraction can still be produced by particles coming out of a cyclotron. This fact indicates that, if measurement must cause the wave function to collapse, the weird characteristics of the particles must be restored when they return to free motion (otherwise, the weird properties of particles could only be born in interactions). If the collapse can only happen at the screen, the results of the electron diffraction experiment in a magnetic field described in Section 2.3 indicate that measurement using the magnetic field does not cause the wave function of the electron to collapse. In that case, hypothesis 2 does not hold.
The wave function cannot be observed; it is non-objective and non-real. A superposition of wave functions is likewise non-objective, unobservable and non-real, and it can only return to a real state after the wave function collapses. This seriously deviates from the requirements of reality and objectivity in philosophy.
Considering the nature of the wave function in the existing orthodox quantum theory, it is not difficult to draw the following conclusion: only if the wave function (or wave packet) collapse process does not exist is quantum theory more reasonable (the nature of the wave function can then be re-examined). The photo-junction particle structure model proposed in [5] considers that the real wave is inside the particle; therefore, when measuring the overall behaviour of the particle, the wave and the wave function cannot be detected. Under the premise that measurement can cause the wave function to collapse, it is not logical to think that the weird quantum features are born in the measurement. The experimental facts also prove that the wave function generally does not disappear when passing through slits and regions of magnetic field.
If Measurement Can Lead to the Disappearance of Quantum Characteristics, Quantum Characteristics Cannot Be Produced in Interactions
Existing orthodox quantum physicists believe that the wave function (or wave packet) collapses as long as there is an interaction. However, few people have noticed that this interpretation contains a serious contradiction caused by interpreting out of context (considering only the local experience of the particle and not its full history). In an electron microscope, the acceleration of electrons is achieved by a strong electric field, and the collimation and focusing of the electron beam are achieved by magnetic fields. If a measurement (including the action of an electromagnetic field) must cause the wave function to collapse so that the wave nature no longer appears, then images could not be magnified by exploiting the wave nature of the electrons. Consider how microscopic particles are formed: microscopic particles are generally formed in interactions. Free particles that are not newly generated are generally obtained through the influence of electromagnetic fields and are thereby separated from their original owners. It can be said that all free particles originate from interactions. This fact can be used to argue that the quantum coherence of microscopic particles also comes from interactions: the generation of the quantum characteristics of free particles is synchronized with the formation of the free particles themselves.
All measurements are made through interaction, and any process in which interaction exists is a measurement process. Therefore, it can be said that the weird properties of microscopic particles (this paper focuses on non-locality) are also produced in the measurement process. At the same time, a view that has become very popular holds that any measurement will cause the weirdness of microscopic particles to disappear. In this way, the weirdness of microscopic particles is both produced in measurement (interaction) and destroyed in measurement (interaction). It really leaves one at a loss: does measurement result in the disappearance of quantum coherence or in its generation? In short, as long as microscopic particles and their quantum characteristics are generated in interactions (or once the previous history of free particles is considered), the existing interpretation system of quantum mechanics assigns contradictory effects, on the generation and on the disappearance of quantum characteristics, to the same measurement conditions. To avoid this contradiction, we must at least deny the existence of some quantum characteristics. The path-integral quantization method cannot do anything to solve this problem. This contradiction has a very significant impact on the interpretation of QEM experiments and on the verification of Bell's inequality.
As long as the Wave Function (or Wave Packet) Collapses, the Weird Characteristics of Microscopic Particles should Disappear
Wave function (or wave packet) collapse (or quantum decoherence) includes the disappearance of quantum non-local features. Therefore, the title of this section also implies that, after the wave function (or wave packet) has collapsed, quantum non-locality should no longer be demonstrable by experimental methods.
In 2017, I spent ¥30,000 on a good electron diffractometer and a small generator. Using this generator as the power source, I carried out an electron diffraction experiment with an applied magnetic field entirely on my own. In one run, the instrument worked continuously for two hours and the diffraction of the electron beam lasted for two hours. The conduction electrons that participated in the diffraction cycled many times around the current loop, so the same electron usually participates in diffraction more than once (see Figures 1 and 2). The experiments also confirmed that the diffraction pattern is produced by electrons and not by light. This experiment directly proves that electrons flowing in a small current loop can still be diffracted after being affected by metal atoms, by the strong electric field that accelerates the electrons, and by the magnetic field used for collimation. The external magnetic field can deflect and deform the diffraction pattern; this action is exactly the same as the deflection of a classical electron beam in a magnetic field, and the deformation of the diffraction pattern can be explained by the deflection of the electron beam. This indicates that the electrons arriving at the screen after passing through the slit are still localized, classical particles rather than dispersed waves. During the diffraction process in this experiment, the electron beam is also deflected by the magnetic field, completely analogous to the deflection of the electron beam in a cathode-ray tube. If we insist that the collapse process exists, then the wave function (or wave packet) collapse does not happen at the screen but must occur before the electron reaches the screen; the fact that the diffraction characteristics are still maintained after such a collapse indicates that the diffraction is independent of the wave function of the particles. The experimental results also show that non-projection measurements do not affect the formation of diffraction fringes, and on this basis we can be confident that non-projection measurements will not affect the formation of double-slit diffraction fringes either. It follows that the claim that the diffraction fringes disappear (cannot form) as soon as one observes can, at most, refer to projection measurements. The blanket statement that the fringes cannot form as long as one observes may well be a fabrication (it is at least not comprehensive).
If the wave function (or wave packet) collapses when the electron hits the screen, it must be said that the conduction electrons in the wire recover their quantum non-locality when they are accelerated into a free electron beam; that is, the collapse of the wave function (or wave packet) is reversible. In other words, from the fact that the same electron can participate in diffraction many times around the current loop it follows that, if the diffraction of electrons is a manifestation of their quantum characteristics, then an electron that has collapsed on the screen after diffraction, has left the screen, and has travelled through the current loop formed by the wires back to the slit entrance must have had its singular characteristics restored. The detailed analysis is as follows. In the small series current loop of the electron diffractometer, the conduction electrons keep flowing as long as the diffractometer operates. They frequently collide with metal atoms in the wire and are affected by strong electric fields and strong magnetic fields (the former accelerate the electrons, the latter collimate the beam). After diffraction, an electron hitting the screen is, according to existing textbooks, collapsed to a point. When the electron leaves the screen, passes through the wires, is accelerated again into the electron beam and arrives at the slit entrance once more, has its wave function (or wave packet) remained collapsed, or has the reverse of the collapse occurred? The difficulty in answering this question is the following: if one holds that the wave function (or wave packet) remains collapsed (or keeps collapsing) during this process, one cannot admit that diffraction occurs afterwards; if one holds that the quantum characteristics are recovered in this process, one must acknowledge that quantum characteristics can be recovered despite measurement. The experimental fact is that electrons can still be diffracted after collisions, acceleration and collimation (i.e., the diffraction properties of the electron are independent of its previous experience). If wave function (or wave packet) collapse occurs whenever there is an interaction, collapse will inevitably occur in the loop; this fact then requires the electron to complete the inverse of the collapse process along the way, which contradicts the existing orthodox quantum theory. There are two ways to overcome this difficulty. One is to deny the non-locality of the particle, the collapse of the wave function (or wave packet) and the principle of state superposition. The other is to hold that the electron restores its quantum characteristics whenever it moves freely in vacuum (i.e., to accept that the collapse process is reversible), independently of the electron's experience before it resumes free motion. The second way conflicts with the existing orthodox quantum-mechanical tenets that wave function (or wave packet) collapse is irreversible and that collapse occurs whenever there is an interaction. Obviously, only the first way (holding that quantum characteristics such as quantum parallelism, quantum superposition states and quantum non-locality do not exist) can really resolve the above contradictions.
The above contradiction affects the interpretation of the experimental phenomena in QEM experiments, because that interpretation relies on the concept that measurement (interaction) leads to the collapse of superposition states, and this concept carries the contradiction described above: if any measurement causes the wave function (or wave packet) to collapse, the non-locality of the particle cannot be born in an interaction. It is then not difficult to see that no experiment claiming to demonstrate QEM can prove the existence of quantum non-locality (see the reasoning in the following section for details).
For an electron beam, as long as it can produce single-slit diffraction, it can produce double-slit diffraction. The interpretation of the double-slit diffraction experiment uses the characteristic of quantum parallelism. If one accepts that the wave nature of particles recovers in the process of being freed from the original owner and becoming relatively free, one thereby accepts that the quantum characteristic of quantum parallelism also recovers in that process. In summary, if it is acknowledged that the wave function (or wave packet) collapse of the incident electrons in an electron diffraction experiment occurs at the fluorescent screen, and that the electron diffraction pattern is determined by the characteristics of the electron's wave function, then the result of the electron diffraction experiment in an external magnetic field introduced in this section shows that an electron beam that has not yet undergone wave function (or wave packet) collapse can be deflected in the magnetic field, just as the electron beam in a cathode-ray tube is deflected. Measurements using magnetic fields therefore do not cause the wave function of the microscopic particles to collapse.
The Current Interpretation of QEM Experiments requires the Assumption that there is Quantum non-locality and that Measurement can lead to the Disappearance of Quantum non-locality
The existence of QEMS needs QEM experiments for its verification, while the interpretation of the phenomena in QEM experiments requires the assumption that QEMS exist. This is a logical circle, which indicates that the experimental verification of the QEM phenomenon is very weak, so weak that we may conclude the QEM phenomenon is not real; we can even say that QEM is hypothetical. The existence of QEMS is one explanation of those experimental phenomena (and the grounds for this interpretation are very insufficient). How, then, do we come to believe in the existence of QEMS, and why? It is the wish of theoretical workers that quantum non-locality exists and that the wave function (or wave packet) collapses during measurement; no one has proved that these are real effects. The reality, however, is that people believe in the existence of QEM (for decades, the non-locality of quantum mechanics has become so deeply rooted that it is very difficult to question). What an incredible thing this is! After being observed, the wave function (or wave packet) is said to collapse and the quantum non-locality to disappear; that is, the measurement causes the unobservable superposition state to disappear. Assumption 1 is the assumption that QEMS exist. It can be seen that the so-called QEM comes from a hypothesis. The reason is simple: without assumption 1, assumption 2 is not needed, and without either of these two assumptions the related QEM experimental phenomena cannot be explained as the existence of QEMS. To be sure, assumption 1 is obviously a hypothesis about the existence of QEMS; the premise is the existence of QEMS and the conclusion is again the existence of QEM. Is this not a typical circular argument (a logical loop)? (Yes, this is a logical loophole in the interpretation of QEM experiments.) Assumption 2 also means that measurement causes the entangled state to disappear. Since the measurement must cause the entangled state to disappear, we can never use experimental methods to find the entangled state; the existence of the entangled state before the measurement can only be inferred. This conjecture would need to exclude the possibility that the phenomena and states found by the measurements already existed before the measurements, but we cannot rule this out experimentally. It is difficult for a rigorously minded person to believe in the existence of QEM.
In interpreting QEM and quantum non-locality experiments (linking Bell's inequality and the Leggett-Garg inequality with experimental phenomena, i.e., obtaining the data entering these inequalities from the experimental phenomena), we must use the concepts that all particles are in a quantum superposition state before being observed and that observation changes the quantum state of the measured particle. Because these two concepts cannot be verified experimentally, they can only be assumptions or speculations, and the experimental interpretation of QEM and quantum non-locality can therefore only rest on assumption. If the particles really are waves, superposition is only a mathematical possibility, not a necessity; it is also an absurd speculation that a particle has two different quantum states simultaneously. There is no solid mathematical foundation for the claim that quantum state superposition must occur. This indicates that, in the interpretation of QEM experiments, speculation outweighs empirical evidence. The possibility described in the next paragraph cannot be excluded.
An emission source emits a pair of electrons. To ensure conservation of spin angular momentum, the spin directions of the two emitted electrons must be opposite. When the opposite spin directions of this pair of electrons are detected, the result cannot show that their spin directions were formed at the time of measurement rather than before the measurement. A light source emits a pair of conjugate photons. The electric vectors of this pair of photons should also be conserved: at the same moment, the electric vector of one photon points up and that of the other photon must point down. That is, the polarization direction of these two photons is the same (they vibrate up and down rather than left and right). It can be seen that the polarization direction of a pair of conjugate photons is likewise not formed when measured but is formed before being measured (the definite polarization directions are formed when the photons are born). Real and complete electrons must have a definite spin state. As long as we believe that any substance is real, we will not believe that the spin state of the electron is formed by the measurement at the moment of measurement. The same is true for the polarization of conjugate photon pairs. Only if we negate the reality of matter, and accept the contradictions pointed out above, can we believe that QEM exists. A more intuitive analysis is given in Table 1. Comparing premise b in the first cell of Table 1 with the interpretation of the QEM experimental phenomena, it can be clearly seen that the premise is the same as the result. Without premise b, the experiments verifying the Bell inequality cannot prove the existence of QEMS. Entries a and b in Table 1 correspond to hypothesis 1 in the introduction.
Table 1. The essential analysis of QEM: the resulting (or existing) problems are that conjugated particles must not be mature or incomplete particles, that the superposition mode is unknown, and that the collapse process is unknown (the collapse process is also assumed in order to eliminate the adverse effects caused by superposition).
Conclusion
The conclusions of this paper are as follows:
• Non-projection measurements cannot affect the formation of diffraction fringes. The collapse of the wave function (or wave packet) either does not exist or does not occur at the screen.
• Experimental and theoretical analysis shows that either the weird features of microscopic particles can be recovered after the wave function (or wave packet) collapses, or the non-real quantum superposition states and QEMS do not exist (i.e., the collapse process does not exist, or measurement cannot cause the collapse of the wave function or wave packet).
• Only if QEMS exist before the experiment and measurement destroys them can the experimental phenomena concerning QEM be interpreted as evidence for the existence of QEM. Therefore, QEM is hypothetical.
• Verifying the existence of QEM requires Bell's inequality; however, to judge that Bell's inequality does not hold, we must first acknowledge the existence of QEM. When interpreting experiments that verify Bell's inequality, it cannot be ruled out that the measured polarization correlation is an original, objective existence and distribution (that is, we cannot be certain of the existence of non-local correlation or QEM). Therefore, the experiments verifying Bell's inequality are meaningless.
Discussion
Whether the QEM phenomenon exists and whether it has been verified has long been controversial. The proposal and verification of Bell's inequality conceal the fact that the explanation of QEM is a circular argument and thereby mislead the public, making people's understanding vaguer and preventing the debate from ending in time.
Liangshan Liu has argued that the concept incompatible with Einstein's locality principle is the interpretation of wave function (or wave packet) collapse. However, the predictions of quantum mechanics are obtained from Born's probability interpretation of the wave function, which is independent of the collapse interpretation (not to mention that the experimental interpretation of Bell's inequality contains a logical loop). Therefore, even if an experiment testing Bell's inequality confirms that the predictions of quantum mechanics are correct, it still cannot prove that Einstein's principle of locality is wrong [5]. This compromise can neither deny nor affirm the viewpoint of this study; generally speaking, a compromise is not as useful as a definite conclusion. The various explanations in the interpretation system of quantum mechanics are not isolated: QEM is related to the principle of state superposition, and the principle of state superposition is related to the interpretation of double-slit diffraction experiments. If we deny the QEM phenomenon, how can the phenomena of the double-slit diffraction experiment be explained? This requires further research; some attempts were made in [6].
Charge correlations in heavy ion collisions
When hot quark gluon plasma expands and cools down after a heavy ion collision, charge conservation leads to non-trivial correlations between the charge densities at different rapidities. If these correlations can be measured, they will provide information about the dynamical properties of the quark gluon plasma.
Introduction
The view to the quark gluon plasma phase (QGP) in heavy ion collisions is obscured by events taking place later on. One way around this is to focus on conserved charges, such as electric charge, baryon number or strangeness. 1,2 Later evolution can only change these charges locally, through pair creation or annihilation, but charge fluctuations with long enough wavelengths will remain unchanged.
One possibility is to consider the fluctuation ∆Q² of the charge Q in a given rapidity window ∆η. 1,2 Because of charge conservation, this quantity can only change when charges move in and out of the window, but if the window is wide enough, this effect should be small. Therefore, the measured charge fluctuation should reflect the initial value and be different for the QGP and hadronic phases.
Ignoring long-range Coulomb forces, one can estimate that in the QGP phase, where the electric charges of the quarks are either 2/3 or −1/3, the fluctuation is ∆Q² ≈ 0.2 N_ch. In the hadronic phase, elementary charges are ±1, and as a consequence the fluctuation is much larger, ∆Q² ≈ 0.7 N_ch. 2 In principle, this difference could be used to find out whether QGP formed in the early stages of the collision. However, current experimental results are consistent with the hadronic value. 3,4,5 They are also very close to ∆Q² = N_ch, which would correspond to a purely random distribution of ±1 charges.
In this talk, I will further explore the information obtainable from charge fluctuations. In particular, I will consider correlations between charge densities at different rapidity values, and show that they carry quantitative information about different stages of the collision. It is hopefully easier to subtract the contributions due to phenomena taking place in the later stages of the collision, such as decays of hadronic resonances, from these correlations than from the charge fluctuation signal ∆Q 2 .
Diffusion in an expanding system
In the first stages of a heavy ion collision, the expansion in the direction of the beam is much faster than that in the orthogonal directions. The effects I will be discussing are all due to this expansion, and therefore I will not consider the orthogonal directions. I will also assume that the two nuclei are moving at the speed of light so that the collision event is boost invariant. In that case, it is convenient to use the Bjorken coordinates τ and η defined by t = τ cosh η, z = τ sinh η. In these coordinates, the Minkowski metric becomes ds² = dτ² − τ² dη², which is the metric of a 1+1 dimensional FRW universe with scale factor a(τ) = τ. This means that very similar considerations apply to charge correlations in the early universe as well.
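For completeness, a short derivation (added here; it only spells out what the coordinate definitions above already imply) of the metric in Bjorken coordinates:

```latex
% Bjorken coordinates: t = \tau\cosh\eta, \quad z = \tau\sinh\eta
\begin{aligned}
dt &= \cosh\eta\, d\tau + \tau\sinh\eta\, d\eta, \qquad
dz  = \sinh\eta\, d\tau + \tau\cosh\eta\, d\eta, \\
ds^2 &= dt^2 - dz^2
      = \left(\cosh^2\eta - \sinh^2\eta\right) d\tau^2
        - \tau^2\left(\cosh^2\eta - \sinh^2\eta\right) d\eta^2
      = d\tau^2 - \tau^2\, d\eta^2 ,
\end{aligned}
% the cross terms 2\tau\sinh\eta\cosh\eta\, d\tau\, d\eta cancel between dt^2 and dz^2.
```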
I will assume that the evolution of the charge density is purely diffusive. Charge annihilation in this setting has been studied for a long time, 6 but often without coupling the system to a heat bath. In the presence of a thermal bath, the comoving charge density ρ̃ = dQ/dη obeys a stochastic diffusion equation in Bjorken coordinates, in which ξ_η is a stochastic variable describing thermal noise. It has a symmetric Gaussian distribution, and the amplitude of its two-point function is given by G_eq(τ), which depends on the temperature and can be written as G_eq(τ) = q² τ n_eq(τ), where q is the elementary charge and n_eq(τ) is the equilibrium particle density. The stochastic term in Eq. (1) can be eliminated by considering the two-point function G(τ, η − η′) = ⟨ρ̃(τ, η) ρ̃(τ, η′)⟩, whose equation of motion is most conveniently written for the Fourier modes G(k_η) = ∫ dη e^{i k_η η} G(η). If we assume that initially, at τ = τ_ini, the system is in equilibrium, i.e., G(τ_ini) = G_eq(τ_ini), the solution can be written explicitly in terms of G_eq and its time derivative Ġ_eq. Assuming that the charged particles are ultrarelativistic, their equilibrium density depends on the temperature as n_eq ∝ T³. The entropy density scales in the same way, and therefore the temperature decreases with the expansion as T ∝ τ^(−1/3), meaning that G_eq remains constant. This means that, to a good approximation, Ġ_eq is simply a sum of delta functions located at phase transitions and other non-adiabatic events. Consequently, G(τ, η) becomes a sum of Gaussians with different heights and widths.
As an example, let us assume that the system is initially in thermal equilibrium in the QGP phase, so that G_eq(τ_ini) = G_QGP. When the system enters the hadronic phase at τ = τ_tr, G_eq jumps discontinuously to G_had, which is higher because the elementary charges are ±1 instead of 2/3 or −1/3. Thus Ġ_eq(τ) = (G_had − G_QGP) δ(τ − τ_tr), and the final two-point function consists of a delta-function term of strength G_had minus a Gaussian of strength (G_had − G_QGP) and width ∆(τ_tr). The charge fluctuation signal ∆Q² can be written as a double integral of G(τ, η − η′) over the rapidity window, and it is easy to see that this indeed gives the QGP result if ∆η ≫ ∆(τ_tr) and the hadronic result otherwise.
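The window dependence just described can be checked with a small numerical sketch (added here for illustration; the correlator width and plateau values are assumed, illustrative numbers rather than results of the original analysis):

```python
import numpy as np

# Sketch: charge fluctuation Delta Q^2 in a rapidity window of width d_eta,
# assuming the correlator form described above,
#   G(eta) = G_HAD * delta(eta) - (G_HAD - G_QGP) * Gaussian(eta; sigma).

G_QGP, G_HAD = 0.2, 0.7   # illustrative plateau values (per unit charged multiplicity)
WIDTH = 0.5               # assumed diffusion width Delta(tau_tr), hypothetical

def gaussian(eta, sigma):
    return np.exp(-eta**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def delta_q2(d_eta, sigma=WIDTH, n=400):
    """Double integral of G(eta1 - eta2) over the window [-d_eta/2, d_eta/2]^2."""
    eta = np.linspace(-d_eta / 2, d_eta / 2, n)
    e1, e2 = np.meshgrid(eta, eta)
    smooth = (G_HAD - G_QGP) * gaussian(e1 - e2, sigma)
    d = eta[1] - eta[0]
    # The delta-function part integrates trivially to G_HAD * d_eta.
    return G_HAD * d_eta - smooth.sum() * d * d

for d_eta in (0.1, 0.5, 2.0, 8.0):
    print(f"d_eta = {d_eta:4.1f}:  Delta Q^2 / d_eta = {delta_q2(d_eta) / d_eta:.3f}")
# Narrow windows stay near the hadronic value ~0.7; wide windows approach the QGP value ~0.2.
```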
If we then assume that the hadrons become non-relativistic and start to annihilate at a later time τ_nr, the delta function peak spreads into a Gaussian of width ∆(τ_nr). It is interesting to note that at short enough distances (η ≲ ∆(τ_nr)) the correlator is positive. 6,7 This also means that for small ∆η the charge fluctuation should behave as ∆Q² ∝ ∆η².
Long-range forces
The discussion in Section 2 applies to global charges such as baryon number or strangeness, but for electric charge one has to take into account the long-range Coulomb interaction, and the motion of the charges is not purely diffusive. Combining Ohm's law j = σE with Gauss's law ∇·E = ρ modifies the diffusion equation. It is instructive to note that when all the coefficients are time-independent, the equilibrium two-point function shows that the Debye screening mass m_D is given by m_D² = σ/D. Analogously to Eq. (3), one can derive an equation of motion for the two-point function. Assuming that initially G(k_η) vanishes, the solution involves Σ(τ′) = ∫_{τ′}^{τ} dτ″ σ(τ″). Again, this is simply a superposition of Gaussians, and can therefore be easily Fourier transformed back to coordinate space.
As an example, let us consider a simple case in which G_eq jumps from 0 to 1 at τ_tr = 1. Furthermore, we assume that D(τ) = βτ, where β is constant, and σ(τ) = m_D² D(τ) with constant m_D². The correlator G(τ, k_η) is plotted in the left panel of Fig. 1 for m_D² = 0 [global charge, from Eq. (7)] and for m_D² = 0.01 [local charge with long-range forces]. One can see that for a local charge the correlator is more strongly peaked around zero.
We then imagine that G eq drops instantaneously to zero at τ nr = 10 as the charged particles become non-relativistic. The correlators for global and local charges at τ = 13 are shown in the right panel of Fig. 1. Because of the long-range forces, annihilation is faster and
Conclusions
We have seen that the charge correlators measured at late times carry detailed quantitative information about the properties of the system at different stages of the collision. Particle-antiparticle pairs produced later on by neutral resonances etc. will add an extra contribution to these correlations, but if it is properly understood it can be subtracted, at least in principle. The signals will be much stronger in global charges, i.e., baryon number and strangeness than in the electric charge, but unfortunately they are also much more difficult to measure. It is possible, though, that some of the interesting features survive even in the electric charge correlators, but a more detailed calculation is needed to find out if that is actually the case.
Investigations on thermal contact resistance between filled polymer composites and solids using micro thermography
This article reports the use of a new measurement technique based on micro thermography for determining the thermal contact resistances (TCRs) between filled polymers and solids. The thermal conductivity of polymers can be significantly increased by using thermally conductive fillers. For numerous applications, not only is a high intrinsic thermal conductivity required but also a good thermal transfer between the filled polymer and an adjacent solid surface. The physical principles of thermal transport when considering this type of contact have not yet been investigated in detail, and only a few experimental results are available. The most common measurement techniques determine a macroscopic resistance and project it onto the contacting surface. However, the heterogeneous microstructure of a filled polymer causes the TCR to be a volumetric phenomenon in the overall boundary region. The utilized IR camera system takes thermal images with a spatial resolution of less than 15μm per pixel. The new method resolves the TCRs spatially and gives new insights into the microscale effects on the particle level. In addition to the common zero-gap extrapolation for the extraction of TCRs, we propose another evaluation method that considers all microscale effects of the boundary layers and evaluates TCR as a volumetric phenomenon. For the first systematic study, samples consisting of two aluminum substrates and a filled epoxy polymer were prepared and investigated. We studied the effects of filler size, filler material, filler volume fraction, and surface structure, focusing on monomodally filled polymers with filler amounts between 30 and 60v% . The obtained results and the uncertainties of the new method are discussed within this paper.
Keywords: thermal contact resistance, filled polymers, TIM junctions, micro thermography, microscale TCR
Introduction
Thermally conductive filled polymers are used in a wide range of applications. As thermal interface materials (TIMs) they improve the thermal transition between solid surfaces, as potting compounds they protect electronic components and conduct excess heat to the environment, and as cases for electronic devices they help to minimize operating temperatures. Various theoretical and experimental studies have investigated the effective thermal conductivity of filled polymers and the effects of filler loading [1][2][3][4], filler materials [1,[4][5][6][7][8], particle shapes [1,9], and sizes [10,11]. Thermal conductivity measurements were performed using laser flash analysis (LFA) (calculated from thermal diffusivity, heat capacity and density) [1,3,10], transient hot-bridge, -wire or -disc methods [4,6,7,9], or the steady-state cylinder method [2,8,11]. The latter can be regarded as the industrial standard for TIMs and is described in ASTM D5470-17. All of these methods are suitable for specific fields of application, depending on the temperature range, the ambient conditions, and the sample consistency. In general, the investigated samples must be sufficiently large to prevent the heterogeneous microstructure of the filled polymer from affecting the results. In applications where the filled polymers are used only in thin layers (<2 mm) and touch solid surfaces, the effective thermal conductivity as well as the thermal contact resistance (TCR) between the filled polymer layer and the contacting solid must be considered. Figure 1 illustrates the typical use of a TIM between an electronic chip and a heat sink.
The filled polymer is used to displace air between the microscopic rough surfaces of the chip and heat sink and to improve thermal transfer. The lower the thermal resistance R th of the transition between the chip and heat sink, the lower the operating temperature of the chip, when considering a constant power loss. This thermal resistance can be described as a serial connection of three single resistances: the thermal resistance of the TIMs bulk R th,TIM and the two contact resistances R th,C to the solid surfaces. The TCRs often significantly increase the total thermal resistance of a TIM-filled gap.
In addition to the absolute thermal resistances R_th, it is common to specify and analyze the thermal resistance of a unit area, R_th × A, called the thermal insulance. Without the area dependency, comparisons between different geometries with different heat transfer areas are much easier. In the heat transfer literature, thermal insulance is also called the thermal impedance or the specific thermal resistance.
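As a small illustration of how the bulk and contact terms of such a junction combine (a hedged sketch with hypothetical values, not measurement data from this study):

```python
# Minimal sketch: total thermal insulance of a TIM junction as a series
# connection of the bulk term and two contact terms (illustrative values only).

def total_insulance(bond_line_thickness_m, k_tim_w_mk, contact_insulances_km2_w):
    """Return (R_th x A) of the junction in K*m^2/W."""
    bulk = bond_line_thickness_m / k_tim_w_mk          # (R_th x A)_TIM = t / lambda
    return bulk + sum(contact_insulances_km2_w)        # add both contact insulances

# Hypothetical numbers: 1 mm filled-polymer layer, lambda = 2 W/(m K),
# and two contact insulances of 2e-5 K*m^2/W each.
rA = total_insulance(1e-3, 2.0, [2e-5, 2e-5])
area = 20e-3 * 20e-3                                   # 20 x 20 mm^2 contact area
print(f"(R_th x A) = {rA:.2e} K*m^2/W, R_th = {rA / area:.3f} K/W")
```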
However, the physical principles of thermal transport for this type of contact have not yet been investigated in detail, and only a few experimental results are available. In 2018, Xian et al [12] published an extensive summary of transient and steady-state measurement methods for TCR. However, these techniques are used for solid-solid contacts. Typically, solid-solid contacts are described by a thermal contact conductance h, with the resulting contact resistance R_th,C = 1/(h·A) for a contacting area A, or by the thermal contact insulance (R_th × A)_C. The TCRs are projected onto the transition surface and considered to be a surface phenomenon. When using the steady-state cylinder method according to ASTM D5470-17, information about the TCR can be obtained in two different ways. In the steady-state cylinder method, samples are typically placed between two metallic cylinders; one cylinder is heated and the other is cooled, so that an approximately one-dimensional heat flow is generated from the heated cylinder through the sample into the cooled cylinder. Based on the temperature gradient in the cylinders, the heat flux q̇ through the measuring section is determined. From this heat flux and the measured temperature drop across the sample ∆T, the thermal insulance can be calculated as (R_th × A) = ∆T/q̇. With the model assumption that the thermal insulance (R_th × A) increases linearly with thickness for a homogeneous sample, the TCR between the sample surface and the contacting metallic surfaces can be determined by a zero-gap extrapolation. A direct measurement is possible when no sample is inserted between the metallic cylinders; in this case, the result is directly the TCR between the two surfaces of the metallic cylinders. Two exemplary studies using this technique for various solid-solid contacts were published by Rao et al [13] and Mo and Segawa [14]. Rao et al measured the TCR between oxygen-free copper samples at different surface pressure levels (direct method). Mo and Segawa started with direct measurements and extended the method to measure the contact resistances between thin solid layers.
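A minimal sketch of the zero-gap extrapolation described above, using hypothetical data points; the intercept of the linear fit at zero thickness gives the sum of the two contact insulances, and the inverse slope gives the effective bulk conductivity:

```python
import numpy as np

# Zero-gap extrapolation (indirect method): fit the measured thermal insulance
# (R_th x A) against sample thickness. The data points below are hypothetical.

thickness_mm = np.array([0.5, 1.0, 1.5, 2.0])
insulance_km2_w = np.array([2.9e-4, 5.4e-4, 7.9e-4, 10.4e-4])  # (R_th x A) = dT / qdot

slope, intercept = np.polyfit(thickness_mm * 1e-3, insulance_km2_w, 1)
print(f"effective conductivity    ~ {1 / slope:.2f} W/(m K)")    # 1/slope = lambda_eff
print(f"sum of contact insulances ~ {intercept:.1e} K*m^2/W")    # zero-gap intercept
```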
In 2007, Teertstra [15] published experimental studies on the thermal resistances of bonded joints with thermally conductive filled adhesives using the steady-state cylinder method. They extracted TCR by zero-gap extrapolation (indirect method) and found that the TCR are of a similar magnitude to the bulk resistances. Schacht et al [16] and Prasher and Matayabas [17] also reported zero-gap extrapolations to predict the TCRs between filled polymers and substrate surfaces.
However, as shown in figure 1, contact resistance can be affected by the overall transition region, when considering microscopic heterogeneous materials such as filled polymers. With the experimental methods described in [12], they are not considered to be a volumetric phenomenon, affected by a layer of a certain thickness. In particular, zero-gap extrapolation using the steady-state cylinder method neglects the fact that TCR can be affected by several microscopic effects in the boundary region. To analyze the effects of the surface structure, filler amount, filler size, and filler material, the thermal resistances in the boundary region need to be investigated on the microscale and resolved spatially. In general, local resistances can be calculated when heat flux and local temperature difference are known. The finer the local resolution of temperature, the finer the local resolution of the thermal resistance. Contacting temperature measurements always disturb the temperature field in the sample and the local resolution depends on the dimensions of the sensor used.
In the present study, we used micro thermography to achieve high spatial resolution of temperature measurements without disturbing the temperature field of the sample. From the single-pixel information, we were able to resolve thermal resistances at the pixel level and thus study the phenomenon of TCR between filled polymers and adjacent solid surfaces, including local inhomogeneities, on the microscale.
In 2015, Burghold et al [18] measured the TCR between two steel samples using infrared (IR) thermography. Two cylindrical samples with direct face-to-face contact were simultaneously heated to two different temperatures during the measurement. An IR camera was used to record the axial temperature profile of both samples during heating. The timedependent temperature field in the samples was used to evaluate the contact resistance between the specimens by solving the corresponding inverse heat conduction problem.
Ishizaki et al [19] used lock-in thermography to measure the TCR between bonded graphite layers on the microscale. The bonded graphite layers were periodically heated with a laser. The temperature response of a cross-section through the two-layer system was resolved spatially, using a lock-in thermography device with an IR microscope. The measured temperature field across the interface was used to calculate local TCR.
Warzoha and Donovan [20] already used micro thermography to measure thermal resistances of TIM junctions. The setup was similar to the steady-state cylinder method. However, the temperature gradient in the two metallic cylinders was not determined with individual temperature sensors but was finely resolved with a microscopic IR camera.
Since only the temperatures in the metallic cylinders were measured, a separation of bulk and contact resistances could only be made with the zero gap extrapolation already described. Warzoha and Donovan reported a high sensitivity of TCR evaluation for the gap width determination during measurement. A local resolution in the contacting zones was not achieved. In 2016, Smith et al [21] reported a similar technique, but without the extraction of contact resistances.
We extended the micro thermography method to obtain spatially resolved thermal information, also within the sample, and analyze the effects of the surface structure, filler amount, filler material, and particle size on the TCRs.
Sample preparation
For our investigations, we prepared multi-layer samples consisting of two aluminum substrates (EN AW-5754) and a filled polymer layer in between. We selected the two-component epoxy polymer SikaBiresin ® TD150 + TD165 and several fillers from different materials with different particle sizes and shapes. Table 1 shows the basic properties of the epoxy system used. Further details can be found in the product data sheet [22]. Table 2 lists the filler properties. All the fillers showed a monomodal size distribution. The median particle sizes (volumetric) range from 7.86 µm to 76.29 µm and were measured using a laser particle sizer Fritsch Analysette 22 NanoTec. The Fraunhofer diffraction theory was applied for evaluation (see e.g. [23].). For the spherical fillers, the median particle size corresponds to the median particle diameter. In the case of irregularly shaped aluminum hydroxide, the size refers to the laser diffraction equivalent diameter. The filler volume fraction ϕ = V filler /V total was varied between 0.3 and 0.6. After the liquid polymer and the granular fillers were mixed, the open sample containers with a maximum filling of 70 ml were degassed for 10 min in a vacuum chamber at room temperature and at ≈0.8 Pa. Aluminum substrates were prepared with different surface structures by sandblasting with different grain sizes (blasting abrasive: glass spheres) and cleaning with isopropanol. The filled polymer was placed and cured between the two aluminum substrates, which were kept at a constant distance using spacers of 0.9-1.2 mm. No surface pressure was applied and the samples were cured for 200 h. The resulting multi-layer samples had dimensions of 100 × 100 mm 2 . For the micro thermography investigations, we cut smaller specimens out of the middle using a waterjet cutter. The cut surfaces were ground and polished. As a result, we obtained precise multi-layer specimens of 20 × 20 × 5 mm 3 with a filled epoxy layer in the middle. Finally, the front surfaces of the samples were coated with an acrylic resin-based graphite spray to obtain a defined and uniform emissivity for the thermography measurements. We initially determined the emissivity ε achieved with this method using an aluminum sample with high thermal conductivity. We coated it with graphite spray, placed it in the measuring section, and inserted a thermocouple directly behind the coated surface. The upper and lower aluminum bars were controlled to the same absolute temperature and the emissivity was calibrated using the measured temperature of the aluminum sample. The result was ε = 0.98 ± 0.01. Since all samples were coated with the same graphite spray, this value can be expected to be constant. Uncertainties in the determination of the emissivity have only a minor influence on our measurement result, since the influence of the emissivity is compensated for by the calibration of the measurement setup, see section 4.
Micro thermography setup
For the micro thermography measurements of the prepared samples, a thermography device was designed and built (figures 2 and 3). The samples are placed between an upper and lower aluminum bar with evenly milled contacting surfaces and the same cross-sectional size as the samples. By heating the upper bar and cooling the lower bar, a onedimensional heat flow through the sample is achieved. Using a side-mounted IR camera, the temperature profile through the multilayer sample is captured. Based on the measured temperature profile and the heat flow through the sample, the local resistance profile of the sample can be calculated.
The front surfaces of the aluminum bars were kept matte to avoid excessive reflection. The upper bar is mechanically fixed with a constant clamping force. To avoid high TCRs between the aluminum bars and the aluminum substrates of the samples, a water-glycol mixture is used as contact agent. The lower bar is mounted on a cold plate with cooling channels, connected to the coolant circuit of a lab thermostat and kept at T_cold = const. The upper bar is electrically heated to T_hot = const. How T_cold and T_hot must be set to avoid high losses to the environment and thereby reduce uncertainties is discussed in section 6.1. For the micro thermography measurements, steady-state temperatures must be reached. The temperature field on the front surface of the samples is captured using the IR camera VarioCAM® HD head 980 from Infratec with an additional 0.5× close-up lens.
The thermal images have a pixel size of 14.836 µm, when the internal optomechanical MicroScan mode is activated. Using the MicroScan mode, the spatial resolution is increased by the superimposition of single thermal images with a slight local offset. Therefore, the heat radiation is deflected by a mechanically oscillating optical unit between the lens and detector. The resulting temperature field, T (x, z) is saved and evaluated in the second step. Figure 4 shows an example of the captured thermal image of a sample with a filled polymer layer. The graph below shows the extracted average temperature increase along the z-axis.
To obtain information about the thermal insulance (R_th × A) of the different layers and the transition insulances in between, not only temperature data but also information about the heat flow Q̇ through the sample is necessary. Therefore, two thermocouples (type K, calibrated to an absolute accuracy of ±0.02 K in the desired temperature range using a certified PT100 reference sensor) were placed in the lower aluminum bar and used to evaluate the heat flow via Q̇ = ∆T_TC / R_th,ref. The thermal reference resistance R_th,ref of the aluminum between the two thermocouples is determined in a calibration procedure performed in advance, see section 4.
Calibration procedure
Temperature measurements are surrounded by high uncertainty, particularly when using an IR camera. In our setup, the local assignment of data using the pixel information of the thermal image causes further high uncertainty.
We decided to perform relative measurements based on a reference sample as opposed to absolute measurements.
We have defined the thermal reference resistance R_th,ref of the lower aluminum bar as the calibration variable. Thus, the heat flow Q̇ is no longer an absolute value, but only a relative measurement value for comparisons between a reference sample and the actual sample. Therefore, absolute temperature measurements are no longer necessary.
Prior to each series of measurements, the complete setup is calibrated with a multi-point calibration using a reference sample, as shown in figure 5. Both calibration and measurement were performed under controlled environmental conditions with ambient temperature changes of less than 1 K; nevertheless, recalibrations were performed at least every 3 h. To ensure that the thermal conditions in the measuring section are the same for the calibration measurement and the actual measurement, the thermal resistance of the reference sample must be similar to that of our multi-layer samples. To obtain an overall thermal resistance of the reference sample of the same magnitude as that of the typical samples to be investigated, stainless steel 304 was selected. Stainless steel 304 was proposed as a reference material by the National Physical Laboratory, UK [24], and its thermal properties have been widely investigated (e.g. [25][26][27][28]). However, the thermal conductivities of reference materials are often reported only at 300 K or higher. As no suitable certified material was available for our use, we purchased non-certified stainless steel 304 and determined its thermal conductivity as a function of temperature by LFA (ASTM E1461-13). The thermal diffusivity a and the specific heat capacity c were measured separately, and the material density ρ was measured using the buoyancy method. The thermal conductivity was then calculated as λ = aρc, which yielded the conductivity of the reference material over our relevant temperature range of T = 15 °C to 30 °C. Even though the uncertainty of our LFA is typically ≈8%, we set ∆λ_ref = 4.8%, following [24], after also comparing our results with the previously published values in [25][26][27][28] and finding very good agreement in the overlapping temperature ranges.
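A minimal sketch of the λ = aρc evaluation with placeholder values (the diffusivity and heat capacity below are assumed for illustration; only the density is a nominal literature value for stainless steel 304):

```python
# Thermal conductivity from LFA quantities, lambda = a * rho * c.
# The numbers are placeholders, not the values measured in this study.

a_mm2_s = 3.9          # thermal diffusivity in mm^2/s (hypothetical)
rho_kg_m3 = 7900.0     # density of stainless steel 304 (nominal literature value)
c_j_kgk = 480.0        # specific heat capacity in J/(kg K) (hypothetical)

lam = (a_mm2_s * 1e-6) * rho_kg_m3 * c_j_kgk   # W/(m K); note the mm^2 -> m^2 conversion
print(f"lambda_ref ~ {lam:.1f} W/(m K)")
```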
The reference sample was produced with the same cross-section A_ref = A as the multi-layer samples, and a flat groove of width ∆z_M = 3.0 mm and a depth of 0.2 mm was milled into the outer surfaces, see figure 5. The bottom of the groove was coated with a graphite spray, while the remaining rib surfaces above and below were polished. To calibrate the local resolution of the IR camera measurements simultaneously, the evaluation window of the thermography software (IRBIS® 3) is adjusted to the clearly visible edges of the groove in the reference sample surface. The settings are saved and used for all subsequent measurements.
During the calibration procedure, the cooling temperature T_cold = 12 °C is kept constant and the heating temperature T_hot is increased stepwise from 30 °C to 70 °C. At each temperature step, steady-state conditions are reached, the temperature field on the reference sample surface is measured, and the temperature difference ∆T_TC = (T_TC,1 − T_TC,2) in the lower aluminum bar is recorded. Steady-state conditions were usually reached within five to six minutes, so that changes in ambient conditions during a measurement can be neglected. Using the captured temperature field T(x, z), we determine the mean temperature gradient (dT/dz)_calib and, with it, the heat flow Q̇_calib during the calibration measurement, as shown in figure 5. Finally, the thermal reference resistance R_th,ref is calculated from Q̇_calib and the respective temperature differences ∆T_TC.
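A small sketch of one calibration point, under the assumptions that the reference conductivity and cross-section are known and that the heat flow follows from the mean gradient in the reference sample; all numbers are hypothetical:

```python
import numpy as np

# Calibration step: R_th,ref of the lower aluminum bar from the heat flow
# through a reference sample of known conductivity (hypothetical values).

lam_ref = 14.8             # W/(m K), conductivity of the stainless-steel reference
area_ref = 20e-3 * 20e-3   # m^2, cross-section of the reference sample

def r_th_ref(dT_dz_calib, dT_thermocouples):
    """R_th,ref from one calibration point."""
    q_calib = lam_ref * area_ref * dT_dz_calib   # heat flow through the reference, W
    return dT_thermocouples / q_calib            # K/W

# Example points: mean gradient in the reference (K/m) and the temperature
# difference between the two thermocouples (K).
gradients = np.array([800.0, 1600.0, 2400.0])
dT_tc = np.array([0.30, 0.61, 0.93])
print([round(r_th_ref(g, d), 4) for g, d in zip(gradients, dT_tc)])
```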
In the calibration procedure, the inaccuracies of the temperature measurement and of the local assignment of the IR camera affect the calculated temperature gradient and thus the reference resistance R_th,ref. However, as our actual measurements are performed with an identical setup and in the same temperature range, these inaccuracies are compensated when evaluating the thermal resistance of the sample. If we assume an equal sample area A and reference sample area A_ref and a temperature-independent reference resistance R_th,ref, the increase in thermal insulance of a sample layer to be measured at height z can be expressed through the ratio of the measured temperature gradients and thermocouple temperature differences to their calibration counterparts. If the calibration and the measurements are performed at a very similar absolute temperature, systematic uncertainties in the measurement of the temperature gradients (dT/dz) and in the temperature-difference measurement ∆T_TC are compensated, as they enter the evaluation only as ratios. Additional uncertainties of the thermography due to an inaccurate emissivity or to emitted and reflected radiation from adjacent components of the measurement setup are included and compensated as well; the only requirement is that these occur equally in calibration and measurement, which is achieved by the unmodified setup and the same absolute temperature level. With this procedure, we avoid the high uncertainties of the IR camera system affecting our results and calibrate the temperature measurement and the local resolution simultaneously. A detailed description of the evaluation strategy considering the temperature dependencies can be found in section 5. In addition to systematic uncertainties in the temperature-difference and local temperature-gradient measurements, random variations occurring between calibration and measurement need to be considered; their treatment in the uncertainty calculation can be found in section 6.2. Figure 6 shows the results of an exemplary calibration run.
The calculated thermal reference resistance R th,ref of the lower aluminum bar between the two thermocouples is shown as a function of the mean temperature T TC,mean = 1/2 (T TC,1 + T TC,2 ). The solid error bars show the associated uncertainty of the linear regression of the measured temperature field used to determine the temperature gradient. The dashed error bars additionally contain the uncertainty components coming from the reference sample. Notably, more than 80% of the total uncertainty is caused by the reference sample (thermal conductivity λ ref and cross-sectional area A ref ). If an even better-known reference sample were available, the micro thermography method could be implemented with even higher accuracy. Only the uncertainty in the temperature gradient is method-related and cannot be reduced. Further details regarding the error propagation calculation can be found in section 6.2. The resulting calibration correlation of R th,ref as a function of T TC,mean is used for the evaluation of all subsequent measurements.
Evaluation strategy
For each micro thermography measurement, we capture the temperature field on the sample's surface T (x, z) and record the temperature difference at the thermocouples ∆T TC = (T TC,1 − T TC,2 ). The heat flow Q̇ through the sample during the measurement is calculated via Q̇ = ∆T TC / R th,ref (T TC,mean ).
As thermal images are always blurry, we numerically sharpen the captured temperature fields T (x, z) using a reverse Gaussian low-pass filter, considering a point spread function with a standard deviation of 2.3 pixels and taking into account the T 4 -dependency of the radiative heat detector signal. The standard deviation of the respective point spread function was determined by capturing the temperature field on a specimen with a local temperature step function set using different local emissivities. To avoid amplification of the high-frequency noise of the temperature fields, we use a standard Wiener filter to remove the noise in advance. As the same geometrical and optical setup is used for all specimens, no individual adjustment of the sharpening parameters is necessary. After sharpening, the temperature field T (x, z) is first averaged along the x-axis in total to T 0 (z) and section-wise to T i (z), see figure 4. The section-wise evaluation allows us to analyze local deviations and to quantify the effects of random filler and surface structures, see section 7.1. The thermal insulance (R th × A) across the sample is calculated pixel-wise with the constant heat flow Q̇, the cross-sectional area A of the sample, and the pixel-to-pixel temperature gradient (dT/dz)(z), i.e. d(R th × A)/dz (z) = A · (dT/dz)(z) / Q̇. Finally, the thermal insulance is accumulated across the sample height starting from (R th × A) (z = 0) = 0. Figure 7 shows the typical course of the cumulative thermal insulance across a multi-layer sample with two aluminum substrates and a filled epoxy composite in between.
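A rough sketch of this image-processing and accumulation step is shown below. Only the point spread standard deviation of 2.3 pixels is taken from the text; the Wiener window size, the regularization constant and the overall implementation are assumptions made for illustration and are not the authors' exact procedure.

```python
import numpy as np
from scipy.signal import wiener

def sharpen_and_accumulate(T_field, dz, A, Q, sigma_px=2.3, eps=1e-3):
    """Denoise, deconvolve a Gaussian blur and accumulate R_th*A along z.

    T_field : blurred IR temperature field (z, x) [K]
    dz, A, Q: pixel height [m], sample cross-section [m^2], heat flow [W]
    """
    # 1) Remove high-frequency noise first, so deconvolution does not amplify it.
    S = wiener(T_field**4, mysize=5)                  # detector signal ~ T^4

    # 2) Deconvolve the Gaussian point spread function in Fourier space.
    ny, nx = S.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    H = np.exp(-2.0 * (np.pi * sigma_px) ** 2 * (fy**2 + fx**2))   # Gaussian OTF
    S_sharp = np.real(np.fft.ifft2(np.fft.fft2(S) * H / (H**2 + eps)))
    T_sharp = np.clip(S_sharp, 1.0, None) ** 0.25     # back to temperature

    # 3) Average along x and accumulate the thermal insulance pixel by pixel.
    T_z = T_sharp.mean(axis=1)
    dT_dz = np.gradient(T_z, dz)
    RthA = np.cumsum(np.abs(dT_dz) * dz) * A / Q      # cumulative R_th * A (z)
    return T_sharp, RthA
```

The regularized division by the optical transfer function plays the role of the "reverse Gaussian low-pass filter" described above; a production implementation would tune eps and the Wiener window to the actual noise level.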
The increase in thermal insulance in the aluminum substrate zones is negligible. However, for further evaluations, we fit a linear function to the measured data in these zones with a slope of 1/λ Al . The thermal conductivity of the aluminum substrates is λ Al = 135 W m −1 K −1 (EN AW-5754). To separate the bulk and contacting zones of the filled epoxy, the z-positions of the transitions z C,1 and z C,2 between the filled polymer and the substrates are measured externally using a digital light microscope VHX with a dual-light high-magnification zoom lens VH-Z250R and an xy-measurement system VH-M100E from Keyence. The surface roughness and the particle arrangement close to the surface cause the thermal transition between the filled polymer and substrate to be smooth rather than abrupt. The smooth transition, typically a zone of 20-400 µm, can clearly be seen in the course of the thermal insulance R th × A along the z-axis and is marked with gray bars in figure 7. The determined z-levels of the transitions are used to perform zero-gap extrapolations and to calculate macroscopically meaningful thermal contact insulances by surface projection. For the z-level measurements, we aligned the rough substrate surfaces using the mean roughness value. Thus, the measured substrate thicknesses are slightly lower than the material thicknesses measured macroscopically using a caliper gauge or micrometer gauge. However, this method provides more representative results than evaluations projected onto the minimum or maximum roughness values.
From a macroscopic perspective, one would expect a linear increase in thermal insulance with layer thickness for a homogeneous material. The investigated filled polymers have heterogeneous microstructures. However, as the layer thickness is typically at least ten times the maximum particle size, it seems reasonable to assume a linear increase and fit a linear function (R th × A) (z) to the measured data. The thermal conductivity of the bulk zone can then be evaluated as the reciprocal of the slope of the fitted linear function (R th × A) (z). By extrapolating the linear function to the measured z-levels z C,1 and z C,2 , the share of thermal insulance (R th × A) C solely caused by the contact zones can be determined. The thermal contact conductance can be calculated as the reciprocal of the thermal insulance, h = (R th × A) −1 C . In addition to this surface-projected evaluation of contact resistances, the spatially resolved micro thermography results allow us to evaluate the overall boundary region and summarize all the effects on the thermal contact insulance (R th × A) * C . We identify the z-level where the course of the cumulated thermal insulance diverges from the linear fit as the starting point of the boundary effect. In addition to the extrapolated thermal contact insulances, we evaluate the thickness of these boundary layers and the proportion of thermal insulance caused by those layers. Figure 8 shows an exemplary evaluation of a specimen with an unfilled polymer layer, and a microscopic image of the investigated cross section before coating with graphite spray.
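The zero-gap extrapolation can be sketched as follows. The function name, the default margin kept away from the transitions and the way the substrate zones are fitted are assumptions for illustration; the authors' implementation may differ in detail.

```python
import numpy as np

def split_bulk_and_contact(z, RthA, z_c1, z_c2, margin=2e-4):
    """Zero-gap extrapolation on a cumulative R_th*A(z) profile.

    Fits straight lines to the lower substrate, the bulk of the filled polymer
    and the upper substrate, extrapolates them to the measured transition
    levels z_c1 / z_c2 and reads the contact insulances off as the offsets.
    """
    def line(mask):
        return np.polyfit(z[mask], RthA[mask], 1)        # [slope, intercept]

    sub_lo = line(z < z_c1 - margin)
    bulk   = line((z > z_c1 + margin) & (z < z_c2 - margin))
    sub_hi = line(z > z_c2 + margin)

    lam_bulk = 1.0 / bulk[0]                             # reciprocal of bulk slope

    # Contact insulances as offsets between the extrapolated lines.
    RthA_c1 = np.polyval(bulk, z_c1) - np.polyval(sub_lo, z_c1)
    RthA_c2 = np.polyval(sub_hi, z_c2) - np.polyval(bulk, z_c2)

    h1, h2 = 1.0 / RthA_c1, 1.0 / RthA_c2                # contact conductances
    return lam_bulk, (RthA_c1, RthA_c2), (h1, h2)
```

The margin is a deliberate simplification; the overall-boundary evaluation described in the text additionally searches for the z-level where the measured curve leaves the fitted bulk line, which this sketch does not reproduce.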
For the unfilled polymer, a smooth and small transition to the substrate surfaces was observed, and extrapolation to the zero-gap width seems reasonable for the evaluation of contact insulances. For the specimen shown in figure 8, we measured 2 × (R th × A) C = 157.95 mm 2 K W −1 , for both contacts in total. We did not perform separate evaluations because both transitions were produced equally, and the total evaluation compensates for uncertainties in the z-level-alignment. The obtained value corresponds to a pure polymer layer of approximately 36 µm, which is of the same magnitude as the two roughness depths of the substrates (14 µm per side) and is therefore very sensitive to inaccuracies in z-level alignment. Even though the zero-gap-extrapolation when using the steady-state cylinder method is much easier to perform, this example clearly shows the possible issues when spatially resolved data are not available. The smooth transition between the polymer and substrate could be explained by the rough surface structure, resulting in a thin transition layer, where heat is partially conducted in the substrate and polymer. In this transition layer, the heat flow concentrates on the more conductive substrates' roughness peaks, causing an unequal distribution of heat flux. As a result, we observe a constriction resistance in the first substrate layers, similar to the electrical constriction resistance when considering the electrical contact resistances. However, the smooth transitions at the substrate surfaces are not necessarily a purely thermal phenomenon. The blurriness of the thermal image remaining after image sharpening may also contribute, at least in part, to this smooth transition. In this case, the local resolution limit of the thermography system is reached, and a more detailed evaluation of the transition is not possible. The limit of the local resolution is given on the one hand by the point spread function of the optical system and on the other hand by the physical resolution limit in the used spectral range of 7.5-14 µm (Abbe limit). However, the upcoming evaluations with filled polymers show significantly larger transition areas that are far from this resolution limit. Figure 8 also shows the results of evaluating the overall transition region. We observed boundary layers of ∆z C,1 = 127 µm and ∆z C,2 = 57 µm which cause thermal contact insulances of 2 × (R th × A) * C = 347.81 mm 2 K W −1 in total. Figure 9 shows an exemplary evaluation of a specimen with a filled polymer layer, and a microscopic image of the investigated cross section before coating with graphite spray.
The differences in the courses of the thermal insulance in the boundary region between figures 8 and 9 can be clearly seen. We observed thicker boundary layers of ∆z C,1 = 212 µm and ∆z C,2 = 184 µm, mainly caused by the filled polymer, which did not exhibit typical bulk behavior. The boundary layer thickness is of the same magnitude as three to four times the median particle size of the filler used (Alox-07: D 50 = 64.78 µm). The boundary layers are again marked gray and cause thermal insulances of 2 × (R th × A) * C = 411.23 mm 2 K W −1 . For comparison: extrapolation of the linear function (R th × A) (z) of the bulk zone to the measured z-levels z C,1 and z C,2 results in contact insulances of 2 × (R th × A) C = 232.12 mm 2 K W −1 . Our evaluation results in much higher contact insulances because we consider the overall boundary region to be part of the transition. It is obvious that with such an evaluation the layer thickness of the polymer must be reduced for the calculation of the bulk and substrate resistances. The total thermal resistance of the multi-layer samples remains the same. However, a separate description of the transition region leads to a completely new understanding of the term contact resistance at transitions between filled polymers and solid surfaces. Only through this proposed evaluation is it possible to include the entire transition area and to examine all microscopic effects acting there. The significance of this differentiation is shown in section 7.6, where we present the results for thin-layer samples.
Discussion of uncertainties
To quantify the measurement uncertainty of micro thermography studies, we investigated the effects of environmental losses using a numerical simulation model and performed an error propagation calculation with maximum error estimations (type B uncertainties according to GUM terminology [29]), see section 6.2. Additionally, studies on reproducibility and random variations have been carried out to determine type A uncertainties, see section 6.3. For quantitative validation, we performed additional measurements using the steady-state cylinder method and LFA and compared the results with those of the new micro thermography method.
Dealing with environmental heat losses
First, the effects of the environmental heat losses must be discussed. They can distort the results in two ways:
• Environmental heat losses can lead to significant differences between the heat flow through the sample and the heat flow at the reference temperature measurement position.
• Environmental heat losses cause the heat conduction within the measurement setup to differ from the assumed one-dimensionality. The measured temperature field on the sample's surface may then not be representative of the inner temperature conditions of the sample.
Madhusudana [30] published an extensive investigation on the effects of environmental losses on thermal contact conductance measurements in 2000. He analyzed radiative and convective losses and showed their effects at different measurement temperatures and surface pressures for solid-solid contacts. The effects on our micro thermography measurements were investigated using a thermal simulation model. We found that the expected deviations between the measured surface temperature and the mean temperature of the respective cross-sectional areas are <0.2 K for our samples and temperature conditions, and therefore negligible. Smith et al [21] reported similar calculations for their comparable setup and also found that environmental heat losses did not significantly affect the measurement results when heating and cooling temperatures were maintained near ambient temperature. Using a quasi-one-dimensional thermal RC-model with analytical heat radiation and convection modeling, we additionally quantified the environmental losses on the lateral surface of the measuring section and checked the deviations in the heat flow determination using the two thermocouples in the lower aluminum bar. The charts in figure 10 show the progression of the environmental-loss-affected heat flow in the z-direction for different heating and cooling temperature combinations. The sample is placed at z = 0 and illustrated with gray bars in the background of the charts in figure 10. The thermocouple positions are indicated by the dashed lines. Environmental heat losses cause the heat flow through the measuring section to change significantly along the z-axis. Depending on the temperature combination set, the course of the heat flow curve in the lower aluminum bar (negative z-positions) varies. For the temperature combination of T cold = 12 °C and T hot = 50 °C, the difference between the heat flow in the middle of the sample and the heat flow between the two thermocouples for heat flow measurement is the lowest. However, for all relevant temperature combinations (30 °C − 12 °C and 70 °C − 12 °C), also shown in figure 10, the absolute deviation is less than 1% and is therefore neglected in the further course. The remaining deviations and the effects of further error sources not considered, such as reflected ambient radiation, are compensated for during the calibration procedure. The deviations would be more important if the heat flow measurement were performed absolutely and not calibrated. The results depend on the ambient temperature T amb , which was set to 20 °C in this simulation, and on the total thermal resistance of the investigated sample. The optimum is reached when the temperature course along the z-axis crosses the ambient temperature between the thermocouple positions, as shown in figure 10 (vertical black line).
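A minimal sketch of such a quasi-one-dimensional model with lateral losses is given below. It is not the authors' implementation; the convection coefficient, emissivity, geometry values and the node-wise assembly are illustrative assumptions, and radiation is simply linearized around the ambient temperature.

```python
import numpy as np

def stack_with_lateral_losses(segments, T_hot, T_cold, T_amb,
                              h_conv=8.0, emissivity=0.9, n=400):
    """Steady-state quasi-1D model of a bar/sample/bar stack with lateral losses.

    segments : list of (length [m], conductivity [W/mK], area [m^2], perimeter [m]),
               ordered from the hot end (z = 0) to the cold end. Temperatures in degC.
    Returns node positions z, temperatures T and the local axial heat flow Q(z).
    """
    sigma = 5.670e-8
    h_rad = 4.0 * emissivity * sigma * (T_amb + 273.15) ** 3   # linearized radiation
    h = h_conv + h_rad

    L_tot = sum(s[0] for s in segments)
    z = np.linspace(0.0, L_tot, n)
    lam, A, P = np.empty(n), np.empty(n), np.empty(n)
    z0 = 0.0
    for (L, k, a, p) in segments:            # assign local material properties
        m = (z >= z0 - 1e-12) & (z <= z0 + L + 1e-12)
        lam[m], A[m], P[m] = k, a, p
        z0 += L

    dz = z[1] - z[0]
    M, b = np.zeros((n, n)), np.zeros(n)
    M[0, 0], b[0] = 1.0, T_hot               # Dirichlet boundary conditions
    M[-1, -1], b[-1] = 1.0, T_cold
    for i in range(1, n - 1):                # conduction + lateral loss balance
        kA = lam[i] * A[i] / dz ** 2
        M[i, i - 1] = kA
        M[i, i + 1] = kA
        M[i, i] = -2.0 * kA - h * P[i]
        b[i] = -h * P[i] * T_amb
    T = np.linalg.solve(M, b)

    Q = -lam[:-1] * A[:-1] * np.diff(T) / dz  # axial heat flow between nodes
    return z, T, Q

# Illustrative stack: aluminum bar, 2 mm filled polymer, aluminum bar (20x20 mm).
bar = (60e-3, 200.0, 4.0e-4, 0.08)
sample = (2.0e-3, 1.5, 4.0e-4, 0.08)
z, T, Q = stack_with_lateral_losses([bar, sample, bar],
                                    T_hot=50.0, T_cold=12.0, T_amb=20.0)
print(f"heat flow varies between {Q.min():.2f} W and {Q.max():.2f} W")
```

Plotting Q against z for different heating temperatures reproduces the qualitative behavior discussed above: the deviation between the sample position and the thermocouple positions is smallest when the temperature profile crosses the ambient temperature between the two thermocouples.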
For our measurements, we performed the presented simulations for the current ambient temperature and the actual thermal conductivity of the samples, and adjusted the heating temperature accordingly. It varied between T hot = 45 °C and T hot = 55 °C.
Estimation of systematic measurement uncertainty
To estimate systematic measurement errors, we considered all steps and components, beginning with the reference sample and the described calibration procedure. Within the calibration procedure, the thermal reference resistance R th,ref of the lower aluminum bar between the two mounted thermocouples is determined as a function of the mean temperature T TC,mean .
The associated uncertainty ∆R th,ref includes the potentially incorrect determination of the reference sample area ∆A ref , the uncertainty of the linear regression for the temperature gradient calculation ∆ (dT/dz), the uncertainty of the reference material's thermal conductivity ∆λ ref and, for the sake of completeness, the expected uncertainty in the temperature difference measurement ∆ (T TC,1 − T TC,2 ). To determine ∆A ref we considered an uncertainty of 0.01 mm for the manual measurements of sample length and width using a caliper gauge. The uncertainty of the linear regression is calculated for each single measurement based on the pixel data of the thermal images with a height of n pixels and the corresponding z-coordinates z i and temperatures T i . The uncertainty of the thermal conductivity of the reference sample was constantly set to ∆λ ref = 4.8%, as described previously. We set the uncertainty in the temperature difference measurement, ∆ (T TC,1 − T TC,2 ), to zero. The calibration procedure makes absolute temperature difference measurements unnecessary. An uncertainty must be considered only when it occurs while performing the actual measurement and setting the temperature differences into a relation. If the inaccuracy remains constant and allows for comparable measurements, it will not affect the final results. Figure 11 shows the calculated uncertainties and their components corresponding to the calibrated reference resistance, as shown in figure 6.
It can clearly be seen that almost 70% of the total uncertainty is caused by the inaccuracy of the thermal conductivity of the reference sample. This contribution does not change with the temperature level, nor does the ≈ 1 % uncertainty caused by ∆A ref . Only the uncertainty of the linear regression for the temperature gradient calculation ∆ (dT/dz) changes slightly with the temperature and amounts to a little more than 1 %. For the actual sample measurement, we estimated the total uncertainty of the thermal insulance by an analogous error propagation. The uncertainty of the temperature difference measurement with respect to the previously performed measurement with the same magnitude and without any changes in the measurement chain was estimated as ∆ (T TC,1 − T TC,2 ) = 0.05 K. While all other uncertainty components were determined as type B uncertainties, we consider a statistically determined type A uncertainty for the temperature difference measurement.
Several test measurements at constant conditions have shown that the temperature difference measurement shows stochastic fluctuations of maximum 0.05 K. This value is considered as the maximum deviation to be assumed between calibration and measurement. An additional uncertainty in the IR temperature measurements was not considered since only the differences in comparison to the calibration procedure were evaluated.
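For illustration, a worst-case combination of the contributions listed above could look like the following sketch. The relative input values are placeholders chosen only to show the mechanism; they are not the values evaluated in this work, and the simple additive combination reflects a maximum-error estimate rather than a formal GUM propagation.

```python
def max_error_insulance(RthA, rel_dR_ref, rel_dA, rel_dgrad, dT_tc, dT_err=0.05):
    """Worst-case relative uncertainty of a measured thermal insulance R_th*A.

    In a maximum-error estimation the relative contributions of the calibrated
    reference resistance, the sample area, the temperature-gradient regression
    and the thermocouple temperature difference simply add up.
    """
    rel_total = rel_dR_ref + rel_dA + rel_dgrad + dT_err / dT_tc
    return rel_total, rel_total * RthA

# Illustrative numbers: a highly insulating sample produces a smaller temperature
# difference at the thermocouples, so the fixed 0.05 K term weighs more heavily.
print(max_error_insulance(RthA=1500.0, rel_dR_ref=0.07, rel_dA=0.01,
                          rel_dgrad=0.015, dT_tc=4.0))
print(max_error_insulance(RthA=4500.0, rel_dR_ref=0.07, rel_dA=0.01,
                          rel_dgrad=0.015, dT_tc=1.5))
```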
The total uncertainty of the thermal insulance is divided proportionally between the bulk and contact insulances. Figure 12 shows the calculated systematic uncertainties and their components for the representative selection of eight samples.
The total uncertainty of the measured thermal insulance ∆(R th × A) increases significantly with the thermal insulance of the sample. The uncertainty components of ∆R th,ref and ∆A ref remain constant with increasing thermal insulance. However, the expected uncertainty in temperature difference measurement ∆ (T TC,1 − T TC,2 ) becomes more important for highly insulating samples, as they cause the temperature difference (T TC,1 − T TC,2 ) to decrease and therefore the considered uncertainty ∆ (T TC,1 − T TC,2 ) = 0.05 K to become more decisive. Over the entire range of investigated samples with total thermal insulances between (R th × A) tot = 500 mm 2 K W −1 and (R th × A) tot = 4500 mm 2 K W −1 , the uncertainty is estimated between 10% and 17%.
For the separation of bulk and contact insulances, also the uncertainty in the linear regression in the bulk zone must be considered. Depending on the quality of the thermal image and the remaining high frequency noise after filtering, this additional uncertainty was evaluated between 1% and 2% of total contact insulances.
Random measurement uncertainty
In addition to the previously described systematic error propagation calculations, studies on reproducibility and random variations have been carried out (type A uncertainties according to GUM terminology [29]). We analyzed a representative selection of specimens, performed several micro thermography measurements on each single lateral surface, and compared the results on each surface and between the surfaces. Within all the samples, a good reproducibility with deviations lower than the variations within one surface was observed. However, we regularly observed significant random variations within one of the four lateral surfaces of the specimen and between the lateral surfaces of the square shaped specimen. We attribute this to the high sensitivity of the TCR to small random variations at the microscopic scale. The local contact resistances are affected by several statistical phenomena, such as local particle-substrate contacts, local particle agglomerations, and different polymer layer thicknesses between the substrate surface and the first particle layer. As a result, the contact resistances of similarly prepared test samples exhibited statistical fluctuations in the range of 25%. Details of the local variations in TCR are discussed in section 7.1.
Method comparison
To validate the obtained results, we performed additional measurements using the LFA (ASTM E1461-13) and the steady-state cylinder method (ASTM D5470-17), and compared the results with our values. As there is no suitable method available that can be used to reproduce the spatially resolved results of TCR, we focused on the thermal conductivity of the filled polymers bulk zone. Systematic errors of the micro thermography method affect the thermal conductivities in the same way as they affect the measurements of the TCR. For this validation study, we prepared additional single-layer samples without aluminum substrates and sequentially used the same specimens for all measurement methods. All specimens have a thickness between 2.0 mm and 2.5 mm. For the measurements according to ASTM D5470-17, discs with a diameter of 30 mm were used. For micro thermography, squares with 20 × 20 mm 2 and for LFA discs with a diameter of 12.7 mm were prepared. Figure 13 shows the measured thermal conductivities of 12 different filled epoxy samples.
Aluminum fillers (samples no. 1-6) and alumina fillers (samples no. 8-12) with different filler contents were used. Sample no. 7 is an unfilled epoxy sample. The study showed an overall good agreement among the three different methods. The measurements agreed within a range of 12.9 % on average. The error bars indicate the respective systematic measurement uncertainties. Based on the described calibration procedure and the uncertainties of the reference sample, the expected systematic uncertainties of micro thermography (15.8 % on average) were mostly higher than those of the other two methods (11.8 % on average for ASTM D5470-17 and 8 % for ASTM E1461-13). This comparison shows that the calibration procedure described in section 4 has been successful and has eliminated all sources of error that cannot be influenced or quantified in any other way.
Results
To identify and study the most crucial effects on TCR, several samples with different combinations of substrate surface structure, filler material, filler size, and filler content were prepared and measured using the introduced micro thermography method.
This section describes the acquired results and their interpretations, divided into several single studies with individual objectives. We present the majority of our results based on the zero-gap extrapolation results, as these surface projected values are more meaningful from a macroscopic perspective. However, as shown in section 5, the overall contact resistances and thicknesses of the transition zones must be considered when analyzing the phenomena from a microscopic perspective.
Section-wise evaluation and variations
As addressed in section 5, we not only evaluated the mean contact resistances on the surfaces of the specimens, but also used the spatially resolved data to investigate local variations along the x-axis. Using the section-wise averaged temperature profile along the z-axis T i (z), the local course of the thermal insulance (R th × A) i (z) was determined using the same algorithm as described in section 5, see figure 14.
Typically, 20 individual sections (intervals) were defined to be considered separately. Figure 14 shows the parallel course of the sections and the proportions of thermal contact insulance for the lower and upper transitions. Visually, only slight differences can be observed. For detailed investigation of local variations, we calculated the standard deviation of the thermal contact insulance and thermal conductivity of the filled polymers bulk and plotted the results for the overall sample width, as shown in figure 15.
We observed significant variations in the local thermal contact insulance along the x-axis. However, no significant variations were observed in the thermal conductivity of the filled polymer's bulk. Both thermal conductivity and contact conductance can be affected by random filler structures and therefore show local variations. The fact that these variations were not detected for the thermal conductivity indicates that the random surface structures of the aluminum substrates, the close-to-surface particle arrangement, and local particle-surface contacts play an important role. For the example shown in figure 15, we determined a standard deviation of 3.2% for the bulk thermal conductivity λ bulk and 9.1% for the thermal contact insulance 2 × (R th × A) C . By performing an equal evaluation for all measured samples (95), we found the following dependencies (a numerical sketch of this section-wise evaluation is given after the list):
• The lower the filler volume fraction, the higher the standard deviation of the evaluated intervals. With fewer particles in the transition zone, direct particle-surface contacts become more unlikely, and thus the local variations increase when evaluating section-wise with a section width of approx. 0.8 mm.
• The smaller the filler particles, the higher the evaluated standard deviation of the thermal contact insulance along the x-axis. Larger particles are more likely than smaller particles to form uniform particle layers close to the surface. For smaller particles, we expect more irregularities and agglomerations in the filler packing, causing higher local variations.
• The lower the thermal conductivity of the filler, the lower the local variations. Local particle-surface contacts become less effective as the established heat path is less conductive.
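The section-wise scatter evaluation could be sketched as follows. Function and variable names are hypothetical, the contact share is estimated in a simplified way from the extrapolated bulk line, and the number of sections (20) is the only parameter taken from the text.

```python
import numpy as np

def sectionwise_scatter(T_field, dz, A, Q, z_c1, z_c2, n_sections=20, margin=2e-4):
    """Relative scatter of bulk conductivity and contact insulance along x.

    Splits the temperature field into n_sections intervals along x and repeats
    the cumulative-insulance evaluation on each interval separately.
    """
    ny, nx = T_field.shape
    z = np.arange(ny) * dz
    bulk_mask = (z > z_c1 + margin) & (z < z_c2 - margin)

    lam_bulk, contact = [], []
    for cols in np.array_split(np.arange(nx), n_sections):
        T_z = T_field[:, cols].mean(axis=1)
        RthA = np.cumsum(np.abs(np.gradient(T_z, dz)) * dz) * A / Q

        slope, intercept = np.polyfit(z[bulk_mask], RthA[bulk_mask], 1)
        lam_bulk.append(1.0 / slope)

        # Contact share: measured total minus what the extrapolated bulk line
        # (and the nearly insulance-free substrates) would contribute.
        contact.append(RthA[-1] - slope * (z_c2 - z_c1))

    lam_bulk, contact = np.asarray(lam_bulk), np.asarray(contact)
    return lam_bulk.std() / lam_bulk.mean(), contact.std() / contact.mean()
```

Applied to the example above, such an evaluation would return relative scatters on the order of a few percent for the bulk conductivity and noticeably larger values for the contact insulance.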
Effects of filler content
In the first study on filler properties, we prepared samples with different filler loadings of three spherical alumina fillers of different sizes. We combined them with two different surface structures (R0 and R1) and measured the thermal contact insulances, as shown in figure 16. There were no clearly observable differences between the two substrate surfaces. However, it can be clearly seen that the different-sized fillers show significantly different effects depending on the filler volume fraction ϕ. For the smallest filler, an overall increase in the contact insulance with increasing filler volume fraction by a factor of ≈ 10 was observed. The medium-sized filler Alox-05 shows a similar behavior for filler volume fractions of ϕ = 0.4 and higher. Only at the lowest filler volume fraction of ϕ = 0.3 were significantly higher contact insulances of up to (R th × A) C = 121 mm 2 K W −1 measured. The contact insulance level with the largest filler, Alox-07, is higher over the entire range of filler volume fractions and measures (R th × A) C = 132 mm 2 K W −1 on average. The curves do not show a clear trend.
We expect three different phenomena to affect the thermal contact insulance, superimposing on each other and causing the shown results and dependencies:
• Particles close to the surface tend to form uniform layers, which are only superimposed by random structures at a certain distance from the surface. The closer the first particle layer is to the surface, the thinner the remaining polymer layer between the particles and the surface, and the lower the contact resistance. Larger particles cause a larger distance between the substrate surface and the center of the first particle layer, and thus higher contact insulances. Additionally, the number of local particle-surface contacts per area decreases with increasing particle size.
• For very low filler volume fractions, direct particle-surface contacts become more unlikely, and thus the contact insulance tends to increase.
• At very high filler volume fractions, the thermal conductivity of the bulk zone increases significantly. The slope of the (R th × A) (z) course decreases, and the zero-gap extrapolation across the first particle layers of the transition zone leads to higher evaluated contact resistances than those for lower filler volume fractions and higher slopes. When evaluating the contact insulance (R th × A) * C of the overall transition zone, no increasing values for higher filler volume fractions were observed.
Effects of filler size
To study the effects of the filler size in detail, we carried out further evaluations using several samples with varying sizes of spherical alumina fillers with different filler loadings combined with different substrate surface structures.
The measured thermal contact insulances clearly increase with filler size. For the smallest filler particles (Alox-03), we measured (R th × A) C = 14 mm 2 K W −1 on average. For the medium-sized filler (Alox-05), we obtained (R th × A) C = 53 mm 2 K W −1 , and for the largest filler particles (Alox-07), we obtained (R th × A) C = 139 mm 2 K W −1 , which is almost an increase by a factor of 10. The course for the lowest filler volume fraction ϕ = 0.3 differs from those at higher filler volume fractions, as shown in figure 17.
However, these results support our theory as described in the previous section. Larger particles cause a larger distance between the substrate surface and the center of the first particle layer. The mean polymer layer thickness increases and with it the thermal contact insulance. Additionally, the number of local particle-surface contacts per area is decreasing with increasing particle size.
Effects of filler material
To study the effects of the filler material, focusing on the fillers' thermal conductivity, we selected three fillers with significantly different thermal conductivities. In addition to the alumina Alox-07 already used in the previous studies, with an expected thermal conductivity of ≈ 35 W m −1 K −1 , we selected an aluminum hydroxide (ATH) with a lower thermal conductivity of ≈ 10 W m −1 K −1 and an aluminum (Al) filler with a higher thermal conductivity of ≈ 150 W m −1 K −1 . The exact thermal conductivities of the granular filler materials are unknown and cannot be easily measured. Therefore, the results do not refer to a specific thermal conductivity, but show the qualitative effects of different filler materials, see figure 18.
All fillers had a similar median particle size. It should be considered, however, that only the alumina and the aluminum fillers have spherical particle shapes, whereas the ATH filler has irregularly shaped particles. Nevertheless, a clear trend was observed: the use of fillers with higher thermal conductivity leads to lower thermal contact insulances. For the most conductive filler particles (Al), we measured (R th × A) C = 80 mm 2 K W −1 on average. For the least conductive filler particles (ATH), we obtained (R th × A) C = 150 mm 2 K W −1 , and with Alox-07 we obtained (R th × A) C = 122 mm 2 K W −1 at the filler volume fraction applied.
All samples investigated had a filler volume fraction of ϕ = 0.5. The filled polymers were combined with different surface structures (R0, R1, and R2), but no significant differences were observed.
Effects of surface roughness
Variation in surface roughness was included in the studies presented in sections 7.2-7.4. No clear trends were observed. However, the standard deviation of the local contact resistances decreases with increasing surface roughness. The higher the roughness peaks, the more likely it is that local particle-surface contacts are formed. In the measurements shown in figure 18, the standard deviation of the local contact resistances decreased from 11% for surface roughness R0 to 8% for surface roughness R2.
Thin layer phenomena
It is obvious that the effects of the thermal contact insulance become more important with a decreasing thickness of the filled polymer layer between the substrates. TIMs are often used in gaps that are as thin as possible. Our experimental studies showed that the thermal transition zone between the substrate surface and the filled polymer extends over several particle diameters. In the presented studies it was always possible to separate the transition zones from the bulk zone of the filled polymer. Additional samples were produced without using spacers between the substrates and thus allowing the gap to become as thin as possible. Again, no pressure was applied. The designation 'thin' must always be related to the size of the filler particles. To gain a good spatial resolution, we selected the largest spherical alumina Alox-07 with D 50 = 64.78 µm for this study. Figure 19 shows an exemplary evaluation for a specimen with a filled polymer layer of 596 µm. On the left side of figure 19, a microscopic image of the investigated cross section before coating with graphite spray is shown.
The behavior of the filled polymer layer is clearly distinct from the typical bulk behavior. We were able to measure the total thermal insulance of the specimen, (R th × A) tot = 499.32 mm 2 K W −1 , but we were not able to separate the contributions of the thermal bulk and contact insulances, as there is a gradual transition from one material to the other.
For comparison, a specimen with the same filled polymer composition but a layer thickness of 1.08 mm was investigated. We determined transition zones of approx. 180 µm, a thermal conductivity of 1.48 W m −1 K −1 and, by zero-gap extrapolation, 2 × (R th × A) C = 233.92 mm 2 K W −1 .
Considering these values, one would expect a total thermal insulance (R th × A) calc tot = 2 × (R th × A) C + (R th × A) bulk = 636.93 mm 2 K W −1 for the total layer thickness of the filled polymer of 596 µm. This value is 28 % higher than that obtained in the thin layer measurement. Considering the transition zones of approximately 180 µm in the thicker sample, it appears comprehensible that there was not enough space for the filler packing to develop a microstructure independent of the structure of the adjacent surfaces, as shown in figure 19. Because of the size of the particles compared to the separation of the two boundary surfaces, only a few particle layers in the middle of the sample relaxed to a random orientation and thus exhibited distinctive bulk behavior. This example clearly demonstrates the importance of spatially resolving the transition zone. Macroscopic measurements of different layered samples with zero-gap extrapolation would yield misleading results. However, the spatially resolved micro thermography data contains all the relevant data to analyze the thermal transport phenomena in this multi-layer application.
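The expected value quoted above can be reproduced directly from the numbers given for the thicker sample; the small remaining difference to the reported 636.93 mm 2 K W −1 comes from rounding of the quoted bulk conductivity.

```python
# Expected total insulance of the thin sample from the thick-sample values.
RthA_contact = 233.92          # 2 x extrapolated contact insulance [mm^2 K/W]
lam_bulk = 1.48                # bulk conductivity of the filled polymer [W/(m K)]
t_polymer = 0.596e-3           # polymer layer thickness of the thin sample [m]

RthA_bulk = t_polymer / lam_bulk * 1e6          # in mm^2 K/W
RthA_expected = RthA_contact + RthA_bulk        # ~637 mm^2 K/W
print(RthA_expected, RthA_expected / 499.32)    # measured value: 499.32 (+~28%)
```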
Conclusion and outlook
In this work, we used micro thermography to investigate the TCRs between filled polymer composites and solid surfaces. This new method resolves the thermal insulance spatially across the investigated transitions and provides new insights into microscale effects at the particle level. We designed and set up a micro thermography device, based on an Infratec VarioCAM ® HD head 980 with a close-up lens 0.5x, and performed measurements with a local resolution of 14.836 µm. Thermal measurements were performed in steady state and were based on a calibrated heat flow measurement, using temperature difference measurements. We extracted the thermal contact insulances using zero-gap extrapolation and proposed a new method to extract and evaluate the thermal insulances caused by the overall boundary regions. The contact insulances were measured with a potential systematic uncertainty between 10% and 17%, depending on the total thermal resistance of the sample.
For the first study, we prepared multi-layer samples with aluminum substrates and a filled epoxy layer in between. Using this simplified reference system, we were able to study the basic effects on TCR.
We found that the TCR varies much more with the particle arrangement and the material's microstructure than the thermal conductivity of the filled polymer does, and that it shows high random variations. We conclude that the TCR can be reduced significantly when direct particle-surface contacts are achieved. Within a random mixture and arrangement of particles, single surface contacts have a decisive effect on the contact resistance, while the randomness affects the thermal conductivity of the filled polymer's bulk to a smaller extent. The properties of the filler and of the surface both modify the thermal contact insulances. We were able to isolate several of these effects, such as particle size, filler volume fraction, and filler thermal conductivity, and to investigate their impact on the contact insulances. In general, small particles with high intrinsic thermal conductivity and medium filler volume fractions cause the lowest TCRs. High local variability is expected for smooth substrate surfaces, small particles, and low filler volume fractions.
Numerical simulations have shown that environmental heat losses during measurement are negligible if the setup is kept at ambient temperature. In future studies, we will investigate the effects of multi-modal particle size distributions and filler mixtures. Furthermore, we plan to use this method for commercially relevant filled polymer composites. Adjustments will be made to be able to measure on specimens with elastic or even viscoelastic filled polymer layers. For such samples we also plan to control the surface pressure during the measurement and study the effects of different surface pressures.
Additionally, we will set up a microscale simulation model to obtain further insight into microscale heat transport phenomena at the investigated transitions between the filled polymers and solid surfaces. Experimental approaches are always limited to a certain scale. Using a numerical simulation model, we can overcome these restrictions and investigate the effects of smaller particles and thinner layers. Additionally, numerical studies can help to support, extend, or disprove our interpretation of the different effects presented.
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors.
Ocular involvement of coronavirus disease (COVID-19): A systematic review of conjunctival swab results
Coronavirus disease (COVID-19) is a pandemic disease caused by the most recently discovered coronavirus. Conjunctivitis is allegedly the first presentation of COVID-19, since the virus can spread by aerosol contact with the conjunctiva. The present study aimed to systematically review the employment of conjunctival swabs with real-time polymerase chain reaction in detecting SARS-CoV-2. The research is a systematic review of the published scientific literature on conjunctival swab findings in COVID-19 from the PUBMED database and other additional sources (i.e., Google Scholar). The search was performed using “COVID-19 OR coronavirus OR SARS-COV2, AND conjunctivitis, AND ocular manifestations, AND conjunctival swab” as keywords. Inclusion criteria were any papers related to the entered keywords that reported conjunctival swab results as an outcome. Letters, reviews, and editorials describing other studies reporting COVID-19 and conjunctival swabs were excluded. Only four research papers were found and included in the literature review. In these four research papers, positive SARS-CoV-2 results were yielded from 0-5.26% of conjunctival swab specimens. In conclusion, although the presence of the SARS-CoV-2 virus on the ocular surface remains unclear, the prevention of infection transmission through the ocular surface is mandatory by wearing goggles (or a shield), a mask (N-95 recommended) and a gown.
Introduction
Coronavirus disease (COVID-19) is a communicable disease caused by the novel coronavirus; it can cause mild to moderate respiratory symptoms and can be more serious in the elderly and in people with underlying diseases (chronic respiratory disease, cardiovascular disorders, diabetes mellitus, and cancer). 1 The COVID-19 pandemic began on December 31, 2019 with an outbreak of pneumonia-like illness in Wuhan, China. 2 COVID-19 has affected 210 countries and territories around the world. On April 13, 2020, the number of coronavirus cases reached 1,920,250 with 119,413 deaths. The largest number of cases occurred in the USA, amounting to 584,862 cases with 23,555 deaths. 3 On January 7, 2020, the disease was recognized as a novel coronavirus (nCoV); on 11 February 2020, the WHO officially named it Coronavirus Disease 2019 (COVID-19), and the virus, previously named 2019-nCoV, was designated SARS-CoV-2 (severe acute respiratory syndrome-related coronavirus 2) by the Coronavirus Study Group of the International Committee on Taxonomy of Viruses. 4 A few reports have evaluated the presence of SARS-CoV-2 in tear fluid. 5 Experience from health workers in Wuhan revealed that, despite being fully dressed in N95 protective equipment, viral infections still occurred, with unilateral conjunctivitis as the first symptom followed by the development of fever a few hours later. 2 An ophthalmologist (Dr. Li Wenliang) at Wuhan Central Hospital was infected by an asymptomatic glaucoma patient in early January,
which caused his death a month later. Without eye protection, the virus allegedly can be transmitted by aerosol contact with the conjunctiva and cause infection. 6 Infection caused by 2019-nCoV has been characterized as a lower respiratory syndrome manifesting as pneumonia and/or acute respiratory distress. However, there are still many gaps in our knowledge concerning the global epidemiology of 2019-nCoV, particularly the routes of transmission of COVID-19, especially through the ocular surface, which have not yet been fully explained. The present study aimed to systematically review the employment of conjunctival swabs with real-time polymerase chain reaction (RT-PCR) in detecting SARS-CoV-2.
Literature research
A systematic literature search was conducted during April 2020 using the PUBMED database and other additional sources (i.e., Google Scholar). Search strategies were performed to identify literature pertaining to the following search terms: COVID-19 OR coronavirus OR SARS-COV2, AND conjunctivitis, AND ocular manifestations, AND conjunctival swab.
No date nor language restrictions were applied.
Data extraction and synthesis
Papers were examined in terms of the instruments, patient selection, and COVID-19 diagnostic protocol used in the study. Inclusion criteria were any papers that related to the entered keywords and reported conjunctival swab results as an outcome. Letters, reviews, and editorials describing other studies reporting COVID-19 and conjunctival swabs were excluded (please see Figure 1). Only four articles were eligible for analysis after being identified through the database searches.
Statistical analysis
The total number of subjects, mean age, and sex proportion of each study were descriptively analyzed. A comparison of the percentage of positive results of nasopharyngeal swabs versus conjunctival swabs was analyzed using chi-square tests (a minimal numerical example of such a comparison is sketched at the end of this section). The nasopharyngeal swab collects specimens from the nasal midturbinate and anterior nares. The conjunctival swab technique is used to obtain conjunctival specimens (tears and conjunctival secretions from the lower eyelid fornix) from patients. Table 1 shows a summary of the current research findings related to conjunctival swabs used to detect SARS-CoV-2 on the ocular surface by RT-PCR. A study by Xia et al. (2020) aimed to evaluate the presence of SARS-CoV-2 in tears and conjunctival secretions. Positive SARS-CoV-2 results were found only in pneumonia patients with conjunctivitis and not in patients without conjunctivitis. This indicates that the ocular surface is not a common transmission route, although the risk of transmission cannot be eliminated. 4
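For illustration, a chi-square comparison of positivity rates between the two swab types could be run as in the sketch below. The counts are placeholder values, not data from the reviewed studies, which are summarized in Table 1.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (placeholder counts):
# rows = swab type (nasopharyngeal, conjunctival), columns = (positive, negative)
table = [[28, 2],
         [1, 29]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```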
Discussion
The present systematic review revealed that SARS-CoV-2 nucleotides were detected in only a very small proportion of conjunctival swabs. However, patients who come to the ophthalmology clinic or the emergency room with conjunctivitis and have associated risk factors (traveling to high-risk areas, contact with people who have returned from those areas, or contact with those known to be infected) can transmit 2019-nCoV infection even before they experience other signs and symptoms of infection. 7 A prospective interventional case series study revealed that the SARS-CoV virus was not found in the tear secretions of SARS patients at the Prince of Wales Hospital, Hong Kong. The study showed that 17 patients were confirmed positive after being tested with paired convalescent sera. Among these 17 patients, 5 nasopharyngeal aspirate and stool specimens tested positive using RT-PCR, but no tear swab or conjunctival scraping specimens were positive. The authors concluded that conjunctival swabs and conjunctival scrapings were not valuable for diagnosing SARS-CoV. 10 However, RT-PCR itself has a high specificity but low sensitivity, which can produce high false-negative rates despite the presence of the virus. One study compared the viral load on nasopharyngeal swabs with that in tears collected using Schirmer tear strips. It was found that patients with positive COVID-19 results on nasopharyngeal swabs showed negative results from tear specimens. 11 Thus, the conjunctival swab has no superiority compared to the nasopharyngeal swab. The exact route of transmission of SARS-CoV-2 therefore remains unclear, although in this pandemic condition high alertness is still mandatory regarding aerosol-to-mucosal virus transmission (through the conjunctiva). It is important to apply preventive measures, especially thorough hand washing, using personal protective equipment, eye protection (goggles) or a face shield, and a face mask, not touching the mucous membranes (eyes, nose, or mouth) and avoiding unnecessary direct contact. 2 The American Academy of Ophthalmology recommendations include the use of an N-95 mask and goggles. 7
Conclusion
The present systematic review revealed that SARS-CoV-2 nucleotides were detected in only a very small proportion of conjunctival swabs. This finding suggests that the virus might not be retained in or spread through the conjunctival tissue. However, the prevention of infection transmission is still mandatory, especially thorough hand washing and not touching the eyes, nose, and mouth when in a risky location. Spread to health workers can be controlled by using personal protective equipment for the prevention and control of COVID-19 infection, such as an N-95 mask and goggles or a shield, and by not touching the mucous membranes (eyes, nose or mouth), because spreading is associated with transmission through aerosol contact with the conjunctiva.
Genetic Redundancy in Rye Shows in a Variety of Ways
Fifty years ago Susumu Ohno formulated the famous C-value paradox, which states that there is no correlation between the physical sizes of the genome, i.e., the amount of DNA, and the complexity of the organism, and highlighted the problem of genome redundancy. DNA that does not have a positive effect on the fitness of organisms has been characterized as “junk or selfish DNA”. The controversial concept of junk DNA remains viable. Rye is a convenient subject for yet another test of the correctness and scientific significance of this concept. The genome of cultivated rye, Secale cereale L., is considered one of the largest among species of the tribe Triticeae and thus it tops the average angiosperm genome and the genomes of its closest evolutionary neighbors, such as species of barley, Hordeum (by approximately 30–35%), and diploid wheat species, Triticum (approximately 25%). The review provides an analysis of the structural organization of various regions of rye chromosomes with a description of the molecular mechanisms contributing to their size increase during evolution and the classes of DNA sequences involved in these processes. The history of the development of the concept of eukaryotic genome redundancy is traced and the current state of this problem is discussed.
Introduction: The Concept of Genetic Redundancy ("Selfish DNA") in Eukaryotes
Fifty years ago, in 1972, Susumu Ohno [1] formulated the well-known C-value paradox stating that there is no correlation between the physical size of an organism's genome, i.e., the amount of DNA, and the organism's complexity. The vision that the entire genome is functional and that the human genome should be the largest (humans are the most complex beings, are they not?) turned out to be an illusion. According to the new idea, not all DNA is important for the function and survival of an organism, and these "freeloaders" constitute rather large portions of the genomes of a very large number of species. Ohno gave it the name of junk DNA. Analyses of DNA reassociation kinetics in eukaryotic species have shown that the unique fraction of DNA consisting mainly of coding sequences makes up a very low percentage of the total DNA, while the biggest proportion of it is represented by repeated DNA sequences occurring in different copy numbers [2]. These experiments initiated the efforts to characterize junk DNA. Based on these genome size measurements involving a large number of species across diverse taxa and then revealing tremendously high variation, Ohno became confident in observing the phenomenon that he called the C-value paradox. For example, plants show 12,500-fold variation in genome size [3]; the Drosophila melanogaster genome is 1.2 × 10 5 kb in size, while the human genome is 3.3 × 10 9 kb in size, with there being little difference in the number of coding genes between them: 1.5 × 10 4 in the former and 2.0-2.3 × 10 4 in the latter [4]. There are many similar examples.
Further insights into junk DNA were expressed by Doolittle and Sapienza [5] and by Orgel and Crick [6], who claimed that the action of natural selection on genomes would inevitably lead to the emergence of DNA sequences that have no effect on organismal phenotype and that only have the "function" of surviving in the genome. This kind of DNA was later called, more diplomatically and accurately, selfish DNA, and Doolittle and Sapienza [5] proposed that many of them are transposable elements. The term junk DNA strongly implies a lack of a selectively beneficial function. The less derogatory and more to-the-point term selfish DNA is often used instead. Since the purpose of this paper is to emphasize that the issue is not settled, we default to the original name of junk DNA here.
In the subsequent discussions on the composition of selfish DNA as well as on its emergence and preservation in eukaryotic genomes, special attention has been given to the question as to what people mean when they say "a function" in relation to DNA. Optimistic supporters of the functionality of all genomic DNA have proposed that transcribable RNAs make up a huge interconnected regulatory network possessing huge evolutionary potential, while the genome is a continuum made up of genes alternating with cisand trans-regulatory regions [7]. It has been long established that many kinds of RNA (tRNA, rRNA) are transcribed from DNA but encode no proteins. ENCODE estimates that no less than 76% of the human genome is transcribed, with only~1.2% of the resulting RNA being involved in encoding proteins, providing strong support to the idea that the genome is functional in its entirety [8,9]. Functions have been identified for the recently discovered other non-coding RNAs (ncRNAs), such as short miRNAs, small interfering RNAs (siRNAs) and piwi-interacting RNAs (piRNAs) [10]. However, the most abundant class of ncRNAs is the one of long non-coding RNAs (lncRNAs), which vary in length, copy number and structure, and that is why their function is not so easy to name. It has been proposed that because most lncRNAs are derived from transposable elements, both lncRNAs and transposable elements should act together in evolution [4].
A wide spectrum of assumptions and opinions as to what a "proper function" of DNA is, and, accordingly, what DNA should be considered "functional" and what should be considered "junk", and what role natural selection has in the emergence and survival of functional and junk DNA, is typical of the ongoing dispute around selfish DNA. The considerations normally uttered rely on a limited number of facts and apply to a limited number of species. This looks natural, as there are far too many difficult-to-explain cases of apparently redundant DNA occurring in quite diverse organisms. Doolittle and Brunet [11] attempted to formulate some general criteria for identifying functional DNA. They proposed that a genome region or a DNA sequence is functional if (1) it is expressed and the phenotypic effect of this expression is detectable at the biochemical, developmental or behavioral level; (2) such expression improves organismal fitness and (3) this DNA sequence is present in past generations due to this effect of expression. The proposed criteria make the definition of "functional DNA" dependable on the selected effect (SE) [11]. Because rarely it is possible to quickly identify the function of a new DNA sequence experimentally, let alone to unravel its evolution, only a small part of large eukaryotic genomes can be seen as necessary for species survival and well-being. On the other hand, with an avalanche of incoming data about a plethora of processes-protein synthesis being one of them-which may involve DNA and transcribed RNA, such as the binding of transcription factors, DNA looping, chromatin composition, nuclear localization, etc., the meaning of the term "functional DNA" is becoming more and more elusive.
Tandem Repeats
Among the three most widespread, well-studied and commercially valuable Triticeae species (wheat, rye and barley), rye is deemed the most promising subject for analyzing and discussing the concept of genetic redundancy and for considering the presence of selfish DNA in genomes. The haploid genome of cultivated rye, Secale cereale L. (2n = 2x = 14), is about 8.0 Gbp/1C in size [12], being larger than the genome of an average angiosperm (5.6 Gbp) [13]. For comparison, the genomes of rye's closest relatives, barley and wheat, with the same haploid chromosome number (7) as in any Triticeae species, are 5.1-5.3 Gbp and 5.8-6.1 Gbp in size, respectively. Available whole-genome assemblies with annotated genes of diploid species (Triticum urartu and Aegilops tauschii, ancestors of the cultivated wheat, Triticum aestivum), barley, Hordeum vulgare, and rye showed very similar coding gene counts, about 40,000 [14][15][16][17][18]. Within the genus Secale, there is about a 15-percent variation in total genome size between the cultivated rye S. cereale (the largest) and the most ancient wild rye S. silvestre (the smallest) [12]. The differences in size between their respective genomes correlate with the differences in the sizes of the heterochromatic regions at the chromosome ends [19]. The facts listed above suggest that the cultivated rye's superiority in genome size amongst its closest relatives is due to a higher abundance of various classes of repeated DNA sequences. This assumption is supported by the results of the first cereal DNA reassociation kinetic experiments, which showed that more than 90% of the rye genome consists of repetitive DNA [20].
Large heterochromatic blocks in subtelomeric regions represent a chromosomal feature of rye that its most closely related genera, wheat and barley, do not possess. As recently as the end of the past century, these regions were analyzed for DNA composition. Some tandemly organized families were found to have extremely high monomer copy numbers, as indicated by strong in situ hybridization signals. All together, these families make up 8-12% of the entire rye genome [21,22]. The molecular structure, copy number and monomer lengths were furthermore determined for the three most abundant of them, pSc119.2, pSc200 and pSc250 [23,24]. They are made up of monomeric units of 118, 379 and 571 bp in length, respectively, with pSc200 contributing ~2.5% of the genome and pSc250 and pSc119.2 each contributing ~1%. Fluorescence in situ hybridization (FISH) experiments suggest that the pSc200 and pSc250 blocks coincide close to the telomere, while some pSc119.2 copies are confined to interstitial sites. The pSc119.2 sequence is also present in some other cereals, but pSc200 and pSc250 are largely rye-specific.
As far as shorter tandem repeats (minisatellites) are concerned, the most probable mechanism promoting the emergence of two or a few successive DNA monomers is a duplication event following replication slippage [25]. It is possible that the same mechanism applies to longer monomers, with sizes close to those of the tandem families in the rye genome. By sequencing an extended DNA region containing BAC119C15 [26], we revealed a consistent pattern of alternation of pSc200 monomers sharing an average of 93% homology, suggesting an initial duplication, the divergence of the primary sequences of the monomers within the dimer and the subsequent amplification of the dimers. It is likely that virtually any DNA sequence can serve as the initial monomer; several cases of tandem DNAs emerging from pieces of transposable elements have been described [27][28][29]. Dimer amplification and the preservation of tandem repeat arrays can take place in the course of replication or mitotic/meiotic recombination by means of multiple recombination events, such as unequal crossing over between sister chromatids, sequence conversion, translocations that exchange material between non-homologous chromosomes and transpositions [30][31][32]. FISH on meiotic chromosomes shows that each of the families, pSc200 and pSc250, is present on the arms of each chromosome as a separate domain, the size and varying stain intensity of which imply the presence of more than one array within each domain. This assumption is supported by blot-hybridization patterns after pulsed-field gel electrophoresis (PFGE) of DNA in BAC clones [26]. The maximum size of the pSc200 and pSc250 arrays is 550-600 kb [33]; however, the majority of the arrays in each rye chromosome are rather short, not longer than 80-100 kb.
The chromosomal domains consisting of pSc200 and pSc250 arrays are normally separated by non-tandem DNA sequences. Restriction analysis of DNA of the BAC clones in the BAC library of the short arm of rye chromosome 1 (1RS), followed by blot-hybridization, showed that the pSc200 and pSc250 arrays, as with human alpha satellite DNA, develop HORs (higher order repeats), each consisting of two to eight monomers. A single HOR is longer than 3 kb (379 bp × 8) in pSc200 and is nearly 3.5 kb (571 bp × 6) in pSc250. The order of HORs and the ratio of different HORs are specific for each array within a single arm [26]. This implies that multiple recombination events had been taking place. The tandem families pSc200 and pSc250 have different evolutionary histories. Some pSc200 copies have been found in hexaploid wheat, other Triticeae species [34,35] and more distantly related cereals, such as rice and oats. This gives grounds to assume that this family emerged about 45 MYA, when the rice and oat lineages split [36]. The pSc250 family is younger; its copies, very low in number, are found only in the Triticeae species [35], suggesting that its age is about 15 Myr. However, when scrutinizing the evolutionary history of the pSc200 and pSc250 families, one should remember that the amplification (expansion) of these families was not observed before the radiation of the genus Secale (1.7 MYA) and was running especially high in the cultivated rye (S. cereale) [19]. Thus, pSc200 and pSc250 did not take as long to expand as, for example, human alpha satellite DNA did, which is confirmed by the high (not less than 90%) homology between the monomers within each of these families [26]. The evolutionary histories of these two families are strikingly different from that of the third high copy number family of tandem repeats, pSc119.2. This family is much more common across the tribe Triticeae than pSc200 and pSc250 are. Not only rye, but also various Triticum and Aegilops species, as well as most wild barley species [37], including Hordeum bulbosum, which is closely related to the cultivated barley, contain this family in a high copy number. A large number of variant monomers with different lengths and highly heterogeneous primary structures indicate that sequence homogenization has never taken place in pSc119.2 [37]. By far the most surprising pSc119.2-related observation is that both H. vulgare ssp. spontaneum, which is an immediate ancestor of the cultivated barley (H. vulgare), and the cultivated barley itself are devoid of this family [38]. Thus, the beginning of the barley domestication process caused the mass deletion of thousands of copies of pSc119.2 genome-wide in the predecessor species.
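The sizes quoted above can be combined into a rough idea of how many monomers a single HOR and a single array hold. The snippet below is a simple arithmetic check using only the monomer lengths, monomer counts and array sizes given in the text; it is illustrative rather than part of any published analysis.

```python
# Sanity check of the pSc200/pSc250 HOR and array sizes quoted in the text.
MONOMER = {"pSc200": 379, "pSc250": 571}   # monomer lengths, bp

# Maximum HOR sizes: up to 8 monomers for pSc200, up to 6 for pSc250.
print("max pSc200 HOR:", 8 * MONOMER["pSc200"], "bp")   # 3032 bp, i.e. >3 kb
print("max pSc250 HOR:", 6 * MONOMER["pSc250"], "bp")   # 3426 bp, i.e. ~3.5 kb

# Monomers implied by the largest observed arrays (550-600 kb) and by the
# typical short arrays (80-100 kb).
for name, monomer in MONOMER.items():
    print(name,
          "monomers in a 600 kb array:", round(600_000 / monomer),
          "| in an 80 kb array:", round(80_000 / monomer))
```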
Transposable Elements
Transposable elements (TEs) are the predominant component of such large genomes as the cereal ones [39,40]. For this reason, we were not surprised to see that the sequencing of the DNA around the arrays of the pSc200 and pSc250 family tandems in BAC clones revealed various classes of these elements, basically LTR-containing retrotransposons. Are these two main classes of repeats in heterochromatic regions related in some way? Is the composition of the DNA surrounding the arrays of the tandem repeats specific to them, or is it unrelated to them? To answer these questions, we used 454 reads, which served as a basis for the rye genomic library [41]. The benefit of this approach is that no contigs are needed; assembling them would represent a major problem for duly profiling repeats, as the genome teems with them. Nevertheless, the average length of the 454 reads is sufficient for spotting junction regions between tandem monomers and the DNA that surrounds them. Three sets of 454 sequencing-based reads were produced (Figure 1): two consisted of reads, which were essentially DNA in the "non-tandem DNA-pSc200 (or pSc250) tandem" junction regions, and the third sample consisted of reads with DNA residing in the rest of the genome [26]. Our interest was to find out whether there was a correspondence between the copy number of separate TE families in the "non-tandem DNA-tandem DNA" junction regions and their copy number throughout the entire rye genome. The comparison was carried out using a database of eukaryotic repetitive elements, Repbase, and the Triticeae Repeat Sequence Database (TREP). Our analysis of the genome composition for rye showed that 73% of all TEs identified belonged to the gypsy-like superfamily of LTR-containing retrotransposons. Of all the TE families identified, the most abundant was the gypsy-like family Sabrina (Figure 2), which makes up about 15.5% of the total number of TEs identified genome-wide, greatly exceeding the abundance of the next two top TE families, the CACTA superfamily and the gypsy-like family WHAM. At the same time, the abundance of Sabrina dropped abruptly when this family was flanked by arrays of tandem repeats. Some TE families, common in the rye genome, virtually do not occur flanked by arrays of tandem repeats. For example, neither CACTA nor Cereba nor Derami nor Sumana were flanked by pSc250, and Fatima was never flanked by pSc200.
Thus, a surprising feature revealed by comparing the abundance of the DNA sequences in the entire rye genome and when these sequences were flanked by the arrays of the tandem monomers pSc200 and pSc250 was a nearly reciprocal replacement of TE families. In terms of genome-wide abundance, Sabrina, CACTA and WHAM yield precedence to Daniela and Olivia flanked by pSc200 and Laura and Gypsy-13_TA-I flanked by pSc250, but the abundance of solo-LTRs designated as Xalas or Xalax is just rocketing, especially when these are flanked by pSc250. As the ectopic exchange model suggests [42], this should be indicative of very frequent ectopic exchanges in the immediate neighborhood of the tandem arrays pSc200 and pSc250 while these arrays were in the making. The main sources of solo-LTRs are probably unequal cross-overs and the within-chromosome ectopic recombination between the LTRs of the same or different elements, provided that they share quite extended homologous regions, such as those shared by Xalas and Xalax, and Daniela and Olivia. Vicient et al. [43] distinguish four forms of retrotransposon recombination. One of these forms is the LTR-LTR recombination, which results in solo-LTRs or tandem arrays consisting of LTRs and internal domains. According to this mechanism, if an LTR borders with any other DNA sequence, for example, pSc200 or pSc250 monomers, then the LTR-LTR recombination will generate a tandem array of these monomers next to the LTR.
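The read-set comparison described above, profiling TE families in junction-region reads versus genome-wide reads, can be sketched in a few lines of code. The sketch below is not the pipeline used in [26,41]; the read-to-family assignments (e.g. from similarity searches against Repbase/TREP) are assumed to exist already, and the toy counts are invented purely for illustration.

```python
# A minimal sketch of comparing TE family abundance between a set of
# "junction" reads (flanking the tandem arrays) and genome-wide reads.
from collections import Counter

def family_profile(assignments):
    """assignments: iterable of TE family names, one per classified read."""
    counts = Counter(assignments)
    total = sum(counts.values())
    return {fam: n / total for fam, n in counts.items()}

def compare(junction_reads, genome_reads):
    """Return families ranked by enrichment next to the tandem arrays."""
    j, g = family_profile(junction_reads), family_profile(genome_reads)
    families = set(j) | set(g)
    # Ratio > 1: family over-represented at the junctions; < 1: depleted.
    return sorted(((fam, j.get(fam, 0.0) / max(g.get(fam, 0.0), 1e-9))
                   for fam in families), key=lambda x: -x[1])

# Toy example with made-up read assignments:
genome = ["Sabrina"] * 155 + ["CACTA"] * 90 + ["WHAM"] * 70 + ["Daniela"] * 20
junction_pSc200 = ["Daniela"] * 60 + ["Olivia"] * 40 + ["Sabrina"] * 10
print(compare(junction_pSc200, genome)[:3])
```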
While the mechanisms of recombination processes occurring within the arrays of tandem monomers and leading to the formation and genome-wide distribution of higher-order repeat units, varying in length and internal organization, have been known for decades, little is known about a potential impact that the immediate DNA neighborhood of the tandem monomer arrays might have on these processes. Our analysis of the junction regions between TEs and the monomers of the tandem arrays pSc200 and pSc250 revealed that most TE transpositions occurred either directly to the monomers of the tandem array or next to them following ligation, which added a very short spacer of DNA, 1-10 bp in length, a situation typical of non-homologous end joining (NHEJ). About 90% of these junction regions join pSc250 to the two groups, Laura and Xalas, its most abundant TE neighbors; the corresponding figures are about 70% for pSc200 with Daniela and 58% for pSc200 with Olivia.
In many organisms, recombination events occur at certain sites, the so-called "hot spots", which have a particular nucleotide context [44]. In the DNA of the transposable elements bordering the tandem arrays pSc200 and pSc250, motifs that could participate in recombination pathways other than HR (homologous recombination) or NHEJ (non-homologous end joining) and promote the propagation of the arrays of tandem monomers were revealed to occur with a probability much higher than could have been expected by random chance [26]. The lengths of these motifs are 8-12 bp, which is sufficient for recombinases to start aligning a single-strand DNA with a homologous duplex elsewhere in the genome, which can promote recombination. Thus, LTRs containing such microhomologous DNA, belonging to a TE inserted next to the pSc200 or pSc250 monomers, could recombine, together with the captured monomers, with other copies of the same TE occurring elsewhere in the genome and thus promote the distribution of the tandem monomer arrays.
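A simple null-model calculation illustrates why the repeated occurrence of a specific 8-12 bp motif in the junction regions is unlikely to be a chance effect. The sketch below assumes independent, equally frequent bases and an arbitrary 100 kb of surveyed junction sequence; it is not the statistical test used in [26].

```python
# Expected number of exact occurrences of one specific k-mer in a stretch
# of sequence, under an i.i.d. equal-base-frequency null model.
def expected_exact_matches(total_bp, k):
    """Expected occurrences of one specific k-mer in total_bp of sequence."""
    return (total_bp - k + 1) * 0.25 ** k

for k in (8, 10, 12):
    # Assume ~100 kb of junction sequence was surveyed (an arbitrary figure).
    e = expected_exact_matches(100_000, k)
    print(f"{k}-mer: ~{e:.3f} expected occurrences in 100 kb by chance")
```

For 10-12 bp motifs the expectation is well below one occurrence, so observing the same motif repeatedly points to something other than random sequence composition.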
Rye Genomic Libraries Detailed the Organization of Subtelomeric Heterochromatin
The rye genomic libraries generated in recent years using modern DNA sequencing techniques and contig assemblies [17,18] allow large genomic regions to be analyzed. As is known, the heterochromatic chromosome regions and, first of all, long arrays of tandemly organized monomers are especially difficult to read. Nevertheless, the contigs in the genome libraries mentioned are long enough to shed light on the structural organization of the subtelomeric heterochromatic regions of the rye chromosomes, including both classes of repeats: monomer tandems and copies of TE families. Some contigs begin from the chromosome ends, as the presence of the contig region containing telomeric monomers (TTTAGGG)n suggests. The telomere monomers are immediately followed by the array of the pSc119.2 monomers on one arm of chromosome 3R [18] and the pSc200 array on the other [17]. However, the arrays of telomeric and subtelomeric repeats are not immediate neighbors in all chromosomes. In the rye line 'Weining', the chromosome 7R telomere and the array of pSc250 monomers are separated by four pieces of the transposable elements Xalax and Gypsy13_TA-I, and the pSc250 array itself is interspersed with pieces of various transposable elements. The arrangement of the arrays of all of these tandem repeat families, pSc200, pSc250 and pSc119.2, relative to each other may vary considerably; they may (1) lie very close to each other, (2) be separated by variously sized tracks consisting mostly of pieces of diverse TE families or (3) occur in different combinations.
The schematic in Figure 3 shows one of the longest scaffolds, s291 (line 'Lo7'), as an example to illustrate the structural organization of a chromosome region and to list all of the above-described main characteristics of subtelomeric heterochromatin. The pSc200 cluster in it is 258.3 kb in size and consists of several long monomer arrays interrupted by three regions. One of these regions, which is about 17 kb in length and located at the beginning of the scaffold, is populated by several rearranged copies of the copia-like family WIS-2, while the other two, being rather short, contain some of the WIS-2 sequences and some of the gypsy-like family Olivia. From position 80 kb, the array of pSc200 monomers forms a run of 13 HORs (purple in the schematic), each 1897 bp long and consisting of five monomers. (Figure 3 legend: TEs; HORs, higher-order repeat units; monomers of pSc200; unidentified DNA; truncated monomers of pSc200.) It is possible that further improvements in sequencing techniques and the assembly of genomic libraries will allow us to reveal genes within the subtelomeric heterochromatin of the rye chromosomes. Some genes have been found and described within small regions of constitutive heterochromatin in Drosophila melanogaster [45]; however, these regions do not have powerful arrays of tandem repeats in them, whereas such regions in rye chromosomes do. The chicken pan-genome constructed from 20 de novo assembled genomes with high sequencing depth [46] brings hope that functional genes within the subtelomeric heterochromatic regions of rye chromosomes will eventually be found. In that work, the authors found 1335 previously unannotated protein-coding genes, the majority of which were located in subtelomeric chromosome and minichromosome regions and were surrounded by huge arrays of tandem repeats that made sequencing impossible.
Looking over the amount of knowledge that we currently have about the structural organization of subtelomeric heterochromatin in rye chromosomes, we must confess that it is difficult to make assumptions about and understand how the long arrays of tandem DNA repeats that are prevalent in it and occur genome-wide in thousands of copies, in which ever-running recombination processes affect the neighboring TEs, most of which appear in the form of relatively short pieces such as solo-LTRs, could participate in encoding, regulation or any other molecular process related to survival, reproduction or behavior, or any other process that is beneficial to the metabolism and well-being of living organisms. It is therefore logical and reasonable to assume that what DNA in such chromosome regions does is only ensure its survival, which is consistent with the criteria of "selfish DNA" and the concept of genetic redundancy in eukaryotic genomes.
Gene Duplications
Can we, based on our current level of understanding of the molecular processes unfolding in cells, answer the question as to whether DNA in the subtelomeric heterochromatic regions of the rye chromosomes is the only repository of the sequences, whose possible functions are not yet known? Gene duplications that occur due to polyploidization following whole genome doubling (WGD) or due to local gene duplications (small-scale duplication, SSD), are often considered to be possible sources of evolutionary novelty and adaptation [47,48]. This opinion is based on frequent cases of deletion of new copies that had not acquired new functions [48]. However, some studies show that neither acquisition of new functions (neofunctionalization) nor delegation of complementing parts of the original function to both copies (subfunctionalization) are indispensable for the survival of both copies in a genome. An analysis of 901 SSD-derived gene pairs in Brachypodium distachyon, Oryza sativa and Sorghum bicolor showed that only 23.8% of the resulting copies had acquired new functions, 0.4% underwent subfunctionalization and 15.2% underwent a rapid specialization followed by neofunctionalization to the effect that both copies acquired functions that were unlike each other's and unlike their ancestral genes' [49,50]. The highest percentage of the genes' second copies, 60.6%, performed the same function that their originals did, i.e., supported the existing function [49].
We have not been lucky enough to learn from the literature how many duplicated genes there are in the rye genome. For this reason, we will confine ourselves to the results that we have obtained from analyzing the structure and expression of genes for the centromere-specific histone H3 variant (CENH3 in plants). The CENH3 proteins encoded by these genes play a universal, important role in cells as they determine the position of the centromere in chromosomes. Any error in the transcription, translation, modification or transport of this protein can compromise the formation of active centromeric chromatin, leading to impairments in kinetochore assembly and cell division. We found that the rye genome, as with wheat and barley, contains two genes, each encoding a separate form of the protein, αCENH3 and βCENH3, differing (1) in size due to an extended deletion in the N-terminal domain of βCENH3 and (2) in intron-exon structure [51]. An intriguing aspect of CENH3 evolution is that some well-studied and very common cereal species, such as rice and maize, are quite comfortable with possessing only one form of the protein and, accordingly, with possessing only one copy of the gene to encode this protein [52,53].
An analysis of genome and transcriptome libraries for 23 cereal species made it possible to establish that the evolutionary process leading to the formation of the two-copy system of encoding CENH3 was confined to the CENH3 locus, which emerged about 50 MYA in a common ancestor of the subfamilies Bambusoideae, Oryzoideae and Pooideae [54]. An example of the initial organization of the CENH3 locus is rice (Oryza sativa), in which the locus consists of the syntenic genes CDPK2, CENH3 and bZIP and is small, at 15.3 kb (Figure 4). The βCENH3 gene was for the first time found as a part of the locus in the species of the tribes Stipeae and Brachypodieae and the subfamily Pooideae; it emerged about 35-40 MYA. The duplication was accompanied by changes in the intron-exon structure. Figure 4 and Table 1 present the results of the analysis of the structure of the CENH3 locus in some cereal species representing the most important stages in the evolution of the subfamily Pooideae. The main trend in the evolution of the locus is its growth due to the growth of the spacer between the αCENH3 and βCENH3 genes, with the spacer growing in parallel with the genomes. This trend is a feature shared by the Pooideae branch leading to species in the tribe Triticeae (barley, rye, wheat) and by the branch leading to species in the subtribe Aveninae (oats, Avena sativa, Figure 4). In Bromus tectorum (the tribe Bromeae, which shares a common direct ancestor with Triticeae [55]), IS2 is already nearly 55 kb in size. At the same time, in the common direct ancestor, an inversion happens to the βCENH3 gene. IS2 grows in size due to a mass introduction into it of various families of LTR-containing retrotransposons: gypsy and copia, and, albeit to a lesser extent, transposons of the CACTA superfamily. The TE families largely occur as short pieces of DNA, with the largest share of clusters consisting of the copies of diverse families embedded in the copies of other families. Some copies of TEs and tracks of simple repeats without signs of substantial rearrangements were noted, too, but they were few in number. The further evolution of the CENH3 locus within the tribe Triticeae is characterized by the preservation of the CDPK2 gene located to the left of βCENH3 in the subtribe Hordeinae species (barley) and by the replacement of CDPK2 with LHCB3-l in the subtribe Triticinae species (rye, wheat) (Figure 4). The locus and IS2 further grow in size, reaching their respective top values of 218 kb and nearly 190 kb. The domestication process, too, leads to an increase in IS2 size, which follows from a comparison of its sizes between (1) the cultivated barley (Hordeum vulgare) and its immediate ancestor, wild H. vulgare ssp. spontaneum, and between (2) the genome A of the cultivated wheat (Triticum aestivum) and the donor of wheat's A genome, T. urartu. Each intergene spacer in each species, including evolutionarily close species in the tribe Triticeae, is characterized by its specific set of TE families [54], suggesting that it is very unlikely for them to possess any evolutionarily fixed functions.
(Figure 4 caption, in part: IS1 is the intergene spacer between the left-hand marker of the locus, CDPK2, and the βCENH3 gene; in T. urartu, T. aestivum and S. cereale, the CDPK2 gene is replaced with LHCB3-l. IS2 is the intergene spacer between the βCENH3 and αCENH3 genes. IS3 is the intergene spacer between the αCENH3 gene and the right-hand marker of the locus, bZIP.)
The formation of the intergene spacers in the CENH3 locus and the process of IS2 expansion, which accelerated during Triticeae domestication, are, in a way, similar to the process of expansion of the heterochromatic regions of the rye subtelomeres. The clusters consisting of multiple degenerate copies of various TE families transposed into one another look like the monomers of tandem repeats put together in HORs. A mix of diverse domains belonging to diverse TE families, which alternate with short arrays of micro- and mini-satellite DNA as well as AT-rich regions, indicates that intense recombination events were very frequent within the locus in its past evolutionary history. Whether the effect of duplication triggered the above processes around the CENH3 paralogs during the evolution of the genomic region contained in the CENH3 locus remains unknown. If we looked into many more regions such as this (that is, ones possessing these paralogous genes), we would perhaps find the key.
It would be especially exciting to analyze the expression of the paralogous genes αCENH3 and βCENH3. The level of expression of both was directly correlated with the intensity of cell division at different stages of development of two phenotypically different rye cultivars, 'Imperial' and K69 [56]. The level of transcription of αCENH3 was much higher than that of βCENH3, especially during sprouting and in the reproductive tissues, when and where division is much more intensive than it is in leaves and stems. In stems, where the intensity of cell division is the lowest, the expression of these genes falls to its minimum basal level, and there the level of transcription of βCENH3 becomes higher than that of αCENH3. Next, the transcription products and the αCENH3 and βCENH3 proteins were transported to cell nuclei and were included in the nucleosomes of centromeric chromatin [56]. These results, however, are not sufficient for us to expect evidence of neofunctionalization, subfunctionalization or specialization in these paralogous genes. The pressure exerted on the coding part of the βCENH3 gene due to purifying selection [54] led to the preservation of its conserved function, and the benefit of possessing two genes perhaps consists of the gene dosage effect in the maintenance of the balanced total number of the CENH3 proteins, which serves to maintain the necessary level of cell division intensity at various stages of plant development. Another advantage is that each paralog encodes an N-terminal tail (NTT); so, the resulting NTTs have substantial differences in amino acid sequences [51], thus maintaining their stoichiometric relationship with partners in multicomponent interactions in changing external conditions [48,57]. As Fagundes et al. opine, if a genetic element (the βCENH3 gene in our case) supports the organism's adaptive level, it is qualified as functional and its "supporting function" is a proper biological function [58].
Conclusions: The Concept of Genetic Redundancy in Eukaryotes Revisited
Over the decades that have elapsed since the C-value paradox and junk DNA were proclaimed, many attempts have been made to propose a role, in a manner that was well substantiated at each particular point in time, for the excess DNA, or at least for some of it. The first attempt consisted in the division of total DNA into two categories: DNA in one of them encodes proteins; DNA in the other category controls the cell volume and cell nuclear volume [59]. The hypothesis about the latter function was put forward based on the correlation between these values and the total amount of DNA in many species. However, the molecular mechanism underlying the latter function is yet to be identified. We mention this attempt because, if the existence of long arrays of tandemly organized DNA monomers had been known of at that time, it would have been logical to give this class of sequences a starring role in the performance of said function. However, a correlation does not necessarily mean that one trait (a factor) defines the characteristics (the size) of another. We have searched the literature for reasonable assumptions about possible functions of the above-described long tandem DNA regions, which form part of the subtelomeric heterochromatin in rye, but to no avail. For this reason, we will next proceed to discuss other classes of DNA sequences, which may potentially be defined as junk DNA.
Until recently, transposable elements were at the center of discussions about whether they may possess a useful function as regulators of gene activity. The most frequently nominated candidates were SINEs (in particular, the Alu family) because of their wide occurrence in mammalian genomes and rather small sizes [60]. However, neither LINEs nor SINEs are very abundant in plants. In particular, their percentages are vanishingly low in the large genomes of Triticeae species, which abound with other TE classes. The other TE classes have normally been regarded as a potential reservoir of useful genetic material: novel genes might in the future arise in these TEs as a result of DNA rearrangements. Such assumptions have been cutely characterized as being "teleological" [11], for evolution has no foresight [61]. The assumptions about the possible participation of TEs in the regulation of gene expression were limited to those DNA regions and those TEs that occurred close to coding DNA sequences [62]. Not only did these assumptions ignore tandem monomer arrays, they likewise ignored extended TE clusters measuring dozens of kilobases in size, which appear as a mess of fragmented copies belonging to diverse LTR-retrotransposon families. For example, DNA regions similar to the spacers between the αCENH3 and βCENH3 genes in Triticeae species are characterized by species-specific sets of TEs without signs of conservatism or collinearity, the two properties being characteristic of the coding regions in these species.
After the publication of the ENCODE results [8], the focus of discussions on genetic redundancy shifted to the type of RNA known as long non-coding RNA (lncRNA). Their characteristics, such as occurring in a larger copy number than any other transcript and being diverse in length, amount and structure, make it somewhat difficult to single out a universal function and at the same time make it easy to produce more assumptions. For example, those who advocate for the total functionality of lncRNAs see these sequences as participating in a broad range of functions related to the chromosome architecture, the expression of genes in the three-dimensional structure of the nucleus, signal transduction and cell migration [63]. On the other hand, their opponents suggest that differences between functional, non-coding RNA and junk RNA be identified first, and as long as the line has not yet been drawn, researchers should abide by the null hypothesis: "An uncharacterized non-coding RNA likely has no function, unless proven otherwise" [64]. The estimates that less than 10% of the DNA in the human genome is under purifying selection [65,66], while nearly 80% of the entire genome is subject to transcription, led to the conclusion that at least 87% of transcribable DNA produces junk [67]: if at most 10% of the genome is under selection, then even if all of that 10% were transcribed, no more than 10/80 ≈ 13% of the transcribed fraction could be functional, leaving at least 87% as junk. This conclusion follows from the criterion proposed by Palazzo and Koonin: if an RNA molecule has a positive effect on adaptation under the pressure of purifying selection, no matter how weak the pressure, the molecule is functional [67]. The transcripts that do not meet this criterion deliver material for a large number of strongly diverse lncRNAs to evolve via the non-adaptive mechanism of neutral evolution, and so some of these transcripts may eventually end up as functional lncRNAs. Considering widespread transcription events (80% of the genome is subject to transcription), almost 90% of the junk RNAs represent intermediate long non-coding RNAs. "However abhorrent the concept of junk DNA might be to many biologists, this conclusion is inescapable", state Palazzo and Koonin [67].
With his C-value paradox, Ohno [1] disillusioned us about the functionality of the entire genome. Progress in understanding genetic redundancy makes us confess that natural selection is not as immensely powerful as it used to seem and that this power does not always work exclusively for the cause of adaptation. Progress in molecular biology makes Doolittle and Brunet's statement [11] that "evolution by natural selection operates independently (and sometimes oppositely) at different levels of the biological hierarchy (gene, cell, organism, species)" sound more and more convincing. We believe that the word "gene" in this statement can be fairly replaced with "various classes of DNA sequences".
The Effect of Nitrogen- and Oxygen-Containing Functional Groups on C2H6/SO2/NO Adsorption: A Density Functional Theory Study
This paper investigates, through numerical simulation, the mechanism by which nitrogen- and oxygen-containing functional groups in activated carbon contribute to the collaborative adsorption of harmful gases. The aim is to provide theoretical guidance for the industrial production of high-performance and universally applicable activated carbon. By employing density functional theory, we explore the impact of pyridine, pyrrole, carboxyl, and carbonyl groups on the co-adsorption of C2H6/SO2/NO by activated carbon through analysis of the surface electrostatic potential (ESP), physical adsorption energy, and non-covalent interactions. The findings demonstrate that the presence of nitrogen- and oxygen-containing functional groups on activated carbon surfaces enhances their polarity, while simultaneously forming strong non-covalent interactions with C2H6 and SO2. The N-atom of NO can form a strong C-N ionic bond with the C-atom of the benzene ring. The adsorption site of NO is influenced by the nitrogen- and oxygen-containing functional groups. On an activated carbon model containing a pyrrole functional group, NO exhibits meta-adsorption behavior, while on activated carbon with pyridine, carboxyl, and carbonyl groups, it shows ortho-adsorption characteristics. The interaction between C2H6 and SO2, as well as NO, primarily involves the H-bond, whereas the interaction between SO2 and NO is predominantly driven by dipole-dipole interactions. These intermolecular forces significantly contribute to the mutual adsorption of these molecules.
Introduction
In the process of industrial production and biomass combustion, large amounts of sulfur- and nitrogen-containing pollutants, hydrocarbons, and CO2 are discharged, and the emission of these pollutants has a negative impact on the global climate, aggravating the greenhouse effect and climate change [1][2][3]. The study of collaborative control systems for pollutants will therefore become a key element in the fight against climate change and the improvement of the ecological environment [4]. Collaborative treatment of gaseous pollutants by activated carbon adsorption can better meet the needs and objectives of current waste gas treatment. The advantages of the adsorption method are that the process is simple to operate, does not require overly complicated devices, is easy to implement with automatic control, does not cause secondary contamination, and allows the adsorbent to be regenerated [5][6][7][8].
The adsorption properties of activated carbon are determined by its pore structure distribution, chemical composition, and molecular structure. The adsorption process primarily relies on the interaction between the activated carbon surface and the adsorbate. The type and strength of this interaction predominantly hinge upon the microcrystalline structure of the graphite layer present on the activated carbon surface [9]. By chemically binding atoms and atomic groups other than carbon, the carbon atoms within this graphite layer form a diverse array of surface functional groups that significantly impact its adsorption performance [10,11]. The oxygen-containing functional group of coal-based graphene acts as an electron acceptor and transporter, and plays an important role in enhancing charge transfer and hindering the recombination of electron-hole pairs [12,13]. N doping remodels the local electron density of graphene surfaces and greatly promotes adsorption through various interactions [9].
According to the interaction between the adsorbent surface and the adsorbate, adsorption can be classified into physical and chemical adsorption. Physical adsorption dominates when activated carbon adsorbs small gas molecules [14]. The physical adsorption process is primarily governed by non-covalent interactions, encompassing dispersion, H-bonds, π-π interactions, dipole-dipole interactions, and ion-dipole interactions [15]. To assess the properties of these interactions and their contribution to small-molecule adsorption on activated carbon accurately and efficiently, simulation calculations based on density functional theory (DFT) can be conducted, owing to its advantages of cost-effectiveness, high research productivity, and calculation precision. In recent years, there has been a growing body of research focusing on adsorption behavior based on DFT [16,17]. Numerous research examples demonstrate that DFT is highly reliable and convenient for investigating molecular-level adsorption phenomena. The research methods and contents of activated carbon adsorption studies are summarized in Table 1. As depicted in Table 1, current research on adsorption by DFT predominantly concentrates on a single adsorbate, while the investigation of the interaction between multiple adsorbates remains relatively limited. In addition, the role of carbonyl groups in the adsorption process has not been thoroughly studied. The co-adsorption of C2H6/SO2/NO on activated carbon surfaces at the molecular level was investigated through Gaussian numerical simulations. The electron distribution and polarity of the graphene surface were modified by doping various nitrogen- and oxygen-containing functional groups (pyridine, pyrrole, carboxyl, and carbonyl groups) onto the activated carbon model, demonstrating the action mechanism of these functional groups in the collaborative adsorption of C2H6/SO2/NO on activated carbon via an independent gradient model based on Hirshfeld partition (IGMH) analysis and adsorption energy calculations.
Calculation Models
Considering accuracy and efficiency, a clustered graphene structure with five fused rings is employed as a model for activated carbon to discuss the effect of nitrogen and oxygen functional groups on the physical adsorption of C2H6/NO/SO2, as illustrated in Figure 1. For simplicity, we denote the original graphene model as OG, while the models containing the pyridine functional group, pyrrole functional group, carboxyl group, and carbonyl group are denoted as PD, PR, CBX, and CBN, respectively.
Calculation Methods
Gaussian 16 [23] software was used to simulate the adsorption of C2H6, NO and SO2. Based on DFT, the structures were optimized at the B3LYP-D3(BJ) [24,25]/6-311+G** [26] level. To improve the accuracy of the calculations, the single-point energy of each optimized structure was calculated at M062X [27]/jun-cc-pVTZ [28,29], taking into account the DFT-D3 [30] dispersion correction. The adsorption energy (E_ads) is defined as E_ads = E_AC + E_mol − E_tot, where E_tot, E_AC, and E_mol represent the single-point energies of the physical adsorption products, the activated carbon models and the adsorbates (such as C2H6), respectively.
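For illustration, the definition above can be applied directly to Gaussian single-point energies. The helper below is a sketch rather than part of the authors' workflow; the sign convention (positive E_ads for favourable adsorption) and the example energies are assumptions made only for the example.

```python
# Applying the adsorption-energy definition to single-point energies.
HARTREE_TO_KJ_PER_MOL = 2625.4996  # 1 Hartree in kJ/mol

def adsorption_energy(e_tot, e_ac, e_mol):
    """All inputs in Hartree; returns E_ads in kJ/mol."""
    return (e_ac + e_mol - e_tot) * HARTREE_TO_KJ_PER_MOL

# Hypothetical single-point energies (Hartree) for the adsorption complex,
# the bare activated-carbon model and the free adsorbate:
print(round(adsorption_energy(-1000.527, -921.000, -79.519), 2))  # ~21 kJ/mol
```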
The VMD [31] software and Multiwfn [32] software were used to display the ESP distribution on the molecular van der Waals (vdW) surface through an isosurface coloring of the electron density to study possible active sites, as shown in Figure 2.
In the IGMH analysis, Multiwfn software was used to analyze the wave function of the optimized structure, and the value of sign(λ2)ρ was mapped onto the vdW surface by VMD software to study the non-covalent interactions between molecules, as shown in Figure 3. Here, λ2 is the second largest eigenvalue of the Hessian matrix of the electron density (ρ), and sign() denotes taking its sign. Meanwhile, the atomic δG index (δG atom) proposed by Lu et al. [33] was mapped onto the atoms to measure the contribution of each atom to the intermolecular interaction, as shown in Figure 4.
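The descriptor itself is easy to illustrate. The sketch below computes sign(λ2)ρ for a single grid point from a made-up electron-density value and Hessian; in practice these quantities are evaluated by Multiwfn from the actual wavefunction on a dense grid.

```python
# Computing the sign(lambda_2)*rho descriptor at one grid point.
import numpy as np

def sign_lambda2_rho(rho, hessian):
    """hessian: symmetric 3x3 matrix of second derivatives of rho."""
    eigvals = np.sort(np.linalg.eigvalsh(hessian))  # ascending order
    lambda2 = eigvals[1]                            # middle (second largest) eigenvalue
    return np.sign(lambda2) * rho

rho = 0.012                                  # a.u., typical of a non-covalent region
hess = np.diag([-0.02, -0.01, 0.05])         # made-up curvatures; two are negative
print(sign_lambda2_rho(rho, hess))           # lambda2 < 0 -> negative value -> attractive contact
```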
ESP Analysis
ESP is crucial for the investigation and prediction of intermolecular interactions, particularly in relation to electrostatic interactions. The effect of N- and O-atom doping on the ESP of the activated carbon model is explored in Figure 5. The ESP exhibits negativity above the benzene ring of OG, positivity at the H-atom edge, and a relatively narrow and uniform range from −14.03 to 17.30 kcal/mol. Doping of the pyridine functional groups and pyrrole functional groups changes the ESP distribution of OG [10,20]. After embedding the functional groups containing oxygen and nitrogen, the absolute value of the ESP above the benzene ring decreases, except for PR, so it can be speculated that the electrostatic interaction of C2H6/SO2/NO adsorbed on it also decreases.
As shown in Figure 5b, the ESP of PD ranges from −35.76 to 19.62 kcal/mol, suggesting that the presence of the pyridine functional group enhances both the range and heterogeneity of the ESP distribution. The ESP is negative near the N-atom, indicating that it is more likely to act as an electron donor; the H-atom of C2H6, the N-atom of NO and the S-atom of SO2 are therefore predicted to be adsorbed there. For PR, a maximum ESP value of 40.47 kcal/mol is located near the H-atom attached to the N-atom of pyrrole, which can act as an electron acceptor to adsorb the C-atom of C2H6 as well as the O-atoms of NO and SO2. When the carboxyl group is doped into the activated carbon model, the ESP on the vdW surface ranges from −33.94 to 47.97 kcal/mol. The ESP is negative near the two O-atoms of the carboxyl group and positive near its H-atom, so the carboxyl group acts not only as an electron donor but also as an electron acceptor. For CBN, the ESP varies from −43.49 to 22.49 kcal/mol, with the O-atom of the carbonyl group having the most negative ESP and serving as an electron donor.
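The surface extrema quoted above (e.g. −35.76 and 19.62 kcal/mol for PD) are simply the minimum and maximum of the ESP evaluated on the vdW-surface grid. The sketch below shows the idea on made-up data; the real surfaces and values come from Multiwfn.

```python
# Locating ESP extrema on a molecular vdW surface: given the ESP values
# sampled on surface points (random here, purely for illustration), the
# reported extrema are just the minimum and maximum over the grid.
import numpy as np

rng = np.random.default_rng(0)
surface_points = rng.normal(size=(5000, 3))       # dummy xyz coordinates
esp_kcal = rng.uniform(-36.0, 20.0, size=5000)    # dummy ESP values, kcal/mol

i_min, i_max = np.argmin(esp_kcal), np.argmax(esp_kcal)
print(f"ESP minimum {esp_kcal[i_min]:.2f} kcal/mol at {surface_points[i_min]}")
print(f"ESP maximum {esp_kcal[i_max]:.2f} kcal/mol at {surface_points[i_max]}")
```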
Adsorption Simulation of C2H6
C2H6 and OG form weak C-H…C bonds, shown as green isosurfaces in Figure 6a. When C2H6 is adsorbed at the edge of the PD, the pyridine group acts as an electron donor and forms a weak H-bond with the H-atom of C2H6, which appears as a green isosurface. The H-atom is a bright red color, indicating that it plays a dominant role in the interaction. Small green flakes form between the C-atom of C2H6 and the H-atom of PD, indicating a weak H-bond. After the embedding of the pyridine functional group, the region of the green isosurface between C2H6 and the activated carbon model increases, and so does the adsorption energy, as shown in Table 2. A weak H-bond forms between the C-atom of C2H6 and the H-atom of PR, in which the pyrrole group acts as an electron acceptor. From the isosurface areas in Figure 6a-c and the adsorption energies in Table 2, it can be judged that the pyrrole functional group is more conducive to the adsorption of C2H6 than the pyridine functional group.

As can be seen in Figure 6d, C2H6 and CBX form two green disk-like isosurfaces representing the C-H…O and O-H…C bonds, respectively. The isosurface area of the O-H…C bond surpasses that of the C-H…O bond, and the H-atom of the O-H…C bond is brown, indicating that the O-H…C bond contributes more than the C-H…O bond to the interaction and plays a dominant role. Upon the embedding of the carboxyl group, there is a significant increase in adsorption energy from 19.85 to 24.39 kJ/mol. This is because the carboxyl group can act as both an electron donor and an electron acceptor, and thus can interact with multiple atoms of C2H6, in agreement with the ESP predictions. Figure 6e illustrates the adsorption configuration of C2H6 on CBN. Instead of a non-covalent interaction with the carbonyl group, C2H6 forms an H-bond with the H-atom next to the carbonyl group, so that the adsorption energy of C2H6 on CBN is 0.27 kJ/mol lower than that on the OG.
In Figure 7, the green isosurfaces formed between the different activated carbon models and C2H6 mainly represent dispersive interactions, and there is no obvious difference between them, as can also be seen from the adsorption energies in Table 2. This is mainly because the adsorption site of C2H6 on the activated carbon plane is far away from the functional group, so the functional group has almost no effect on the adsorption of C2H6 on the activated carbon plane. The isosurface area and adsorption energy of C2H6 on activated carbon planes are larger than those at activated carbon edges, indicating a higher likelihood for C2H6 to be adsorbed onto activated carbon planes.
Co-Adsorption Simulation of C2H6 and SO2
As shown in Figure 8a, the adsorption sites of C2H6 and SO2 are located on the OG plane. C2H6 and SO2 each form continuous large-area green isosurfaces with the activated carbon model, dominated by dispersive interactions. The isosurface between the S-atom of SO2 and the C-atom of OG appears light blue, which is a dipole-dipole interaction. The dispersion-dominated interaction between the O-atom of SO2 and the C-atom of the benzene ring is also clearly shown in Figure 8a. At the same time, there is a bond path between the S-atom of SO2 and the C- and H-atoms of C2H6, forming a weak interaction dominated by dispersion and H-bonding.

As shown in Figure 8b, the adsorption site of C2H6 remains positioned above the benzene ring, while the adsorption site of SO2 moves from the activated carbon plane to the activated carbon edge following the incorporation of the pyridine functional group. This is because SO2 is an acidic oxide and pyridine is an alkaline functional group, so SO2 tends to interact with the pyridine functional group. The S-atom, being less electronegative than the O-atom in SO2, carries a positive charge and forms a blue-centered isosurface with the N-atom through a typical dipole-dipole interaction. Meanwhile, a weak H-bond is formed between the O-atom of SO2 and the H-atom of PD. The co-adsorption energy of C2H6 and SO2 on the PD, 75.94 kJ/mol as shown in Table 3, is higher than on the OG, due to the strong dipole-dipole interaction between SO2 and the pyridine functional group.

The adsorption site of C2H6 moves from the activated carbon plane to the activated carbon edge after the embedding of the pyrrole functional group, as shown in Figure 8c. C-H…C bonds are formed between C2H6 and PR. Due to the high absolute value of the ESP, the green isosurface area is large near the H-atom of the pyrrole functional group. The adsorption site of SO2 is on the activated carbon plane. The interaction of SO2 adsorbed on the PR is similar to that on the OG. The interaction between C2H6 and SO2 is the C-H…O bond, formed by the H-atoms of C2H6 and the O-atoms of SO2. As mentioned earlier, the interaction of C2H6 adsorbed at the activated carbon edge is weaker than that on the activated carbon plane, so the co-adsorption energy of C2H6 and SO2 on the PR, 59.40 kJ/mol, is smaller than that on the OG.
Figure 9a shows that the adsorption site of C 2 H 6 is located on the activated carbon plane, while that of SO 2 is located on the activated carbon edge, with no interaction observed between C 2 H 6 and SO 2 . The S-atom of SO 2 is positively charged and forms a blue-centered isosurface with the O-atom of the carboxyl group, which is a dipole-dipole interaction. There is also a very distinct blue-centered isosurface between the O-atom of SO 2 and the H-atom of the carboxyl group, indicating the formation of a strong O-H...O bond. Due to the strong interaction between SO 2 and the carboxyl group, the co-adsorption energy of C 2 H 6 and SO 2 on CBX amounts to 71.07 kJ/mol, surpassing that observed on OG.
As can be seen from Figure 9b, the adsorption site of C 2 H 6 is still located on the activated carbon plane, and a dispersion-led interaction is formed between C 2 H 6 and the benzene ring. There is a green isosurface between the S-atom of SO 2 and the O-atom of the carbonyl group, which is a dipole-dipole interaction. The O-atoms of SO 2 form dispersion and weak H-bonds with the C- and H-atoms in the benzene ring, respectively. The O-atom of SO 2 and the H-atoms of C 2 H 6 form weak H-bonds with a small isosurface area. Because of the strong interaction between SO 2 and the carbonyl group, the co-adsorption energy of C 2 H 6 and SO 2 on CBN is higher than that on OG, indicating that the carbonyl group is beneficial to the co-adsorption of C 2 H 6 and SO 2 .
Co-Adsorption Simulation of C 2 H 6 and NO
When C 2 H 6 and NO are co-adsorbed on the OG, both C 2 H 6 and NO adsorption sites are located on the activated carbon plane, as shown in Figure 10a. The center of the isosurface formed between the N-atom of NO and the C-atom of the benzene ring shows an obvious blue color, indicating the formation of a strong C-N ionic bond. The O-atom is more electronegative than the N-atom in NO, and the electron pair is biased towards the O-atom, which will be repelled by the negative charge accumulated on the activated carbon plane. Simultaneously, after the N-atom acquires electrons from the C-atom, the electron distribution in the outer layer of the C-atom is the same as that of the N-atom, and the two form a stable shared electron pair. So, there is a strong interaction between the N-atom of NO and the C-atom of the benzene ring.
The adsorption sites and interactions of C 2 H 6 and NO on the PD are similar to those on the OG. The N-atom is adsorbed on the C-atom adjacent to the pyridine nitrogen. This is because pyridine is an electron-withdrawing and ortho-para-positioning group, and the C-atom directly connected with the pyridine nitrogen loses electrons and interacts with NO. The O-atom of NO and the H-atoms of C 2 H 6 form a small discontinuous green isosurface, indicating a weak H-bond. The co-adsorption energy of C 2 H 6 and NO on the PD is lower than that on the OG because the interaction of the NO adsorbed on the PD is less strong than on the OG.
Since pyrrole is an electron-donating group and a meta-directing group, a bluecentered isosurface can be observed between the N-atom of NO and the meta-carbon atom of pyrrole nitrogen in Figure 10c.The adsorption site of C 2 H 6 on the PR is located at the activated carbon edge, and the interaction is mainly weak H-bonds, making the co-adsorption energy small, as shown in Table 3.
The adsorption site of C 2 H 6 is located in the CBX plane in Figure 11a. Similar to the pyridine functional group, the carboxyl group is an electron-withdrawing and ortho-positioning group, and thus the adsorption of NO occurs at the ortho-carbon atom of the carboxyl group. The co-adsorption energy of C 2 H 6 and NO on CBX is the highest, indicating that the carboxyl group is most favorable for their co-adsorption.
After the carbonyl group is embedded, the adsorption sites of C 2 H 6 and NO are located on the activated carbon plane. The O- and N-atoms of NO form weak H-bonds with the two H-atoms of C 2 H 6 , respectively, appearing as two connected disk-like isosurfaces. The carbonyl group is an electron-withdrawing and ortho-positioning group, so the dipole-dipole interaction is formed by the N-atom of NO and the ortho-carbon atom of the carbonyl group. Compared to other activated carbon models, CBN has the smallest absolute value of ESP, producing a lower strength dipole-dipole interaction with NO rather than a higher strength ionic bond, so C 2 H 6 and NO have the lowest co-adsorption energy on CBN.
Co-Adsorption Simulation of C 2 H 6 , SO 2 and NO
For OG, the co-adsorption site of C 2 H 6 /NO/SO 2 is located on the activated carbon plane. However, the isosurface area between C 2 H 6 /SO 2 and graphene is small, indicating a weak interaction between them. A non-covalent interaction is observed between C 2 H 6 /NO and SO 2 . The H-atom of C 2 H 6 forms a C-H...O bond with the O-atom of SO 2 , while the O-atom of NO engages in a dipole-dipole interaction with the S-atom of SO 2 . The co-adsorption sites and interactions of C 2 H 6 /NO/SO 2 on PD resemble those mentioned above for pairwise adsorption. But the strength of the interaction decreases, as evidenced by the adsorption energy in Table 3 and the isosurface color in Figure 12b. For PR, the adsorption sites of SO 2 and C 2 H 6 are similar to those in the pairwise adsorption. SO 2 is adsorbed on the activated carbon plane, while C 2 H 6 is adsorbed near the pyrrole functional group. Compared to Figure 8c, the number of H-atoms involved in the formation of the non-covalent interaction between C 2 H 6 and PR increases, and thus the interaction region and strength increase, while the interaction region and strength between SO 2 and PR decrease. In comparison with Figure 10c, the adsorption site of NO is changed and the strength of the interaction between NO and PR is weakened. This is mainly due to the dipole-dipole interaction between the O-atom of SO 2 and the N-atom of NO, which weakens the adsorption strength of SO 2 and NO on the PR.
In Figure 13a, the adsorption site of SO 2 is on the activated carbon plane, and NO is ortho-adsorbed, similar to that in Figure 11a. C 2 H 6 is adsorbed at the CBX edge, with its C- and H-atoms forming dispersive interactions with the carboxyl functional group. In contrast to planar adsorption, the area and strength of the interaction are weakened, resulting in a reduction in the adsorption energy. The O-atom of NO forms a dispersion-dominated weak interaction with the H-atom of C 2 H 6 and the O-atom of SO 2 , thereby promoting mutual adsorption. Figure 13b shows that the adsorption site of SO 2 is similar to that in Figure 9b. The three H-atoms connected to the same C-atom in C 2 H 6 form a weak H-bond with the activated carbon plane. Unlike previous models, the O-atom of NO forms weak H-bonds with the H-atoms of CBN and C 2 H 6 , while the N-atom engages in a dipole-dipole interaction with the O-atom of SO 2 . Due to the weak strength of the interactions formed by C 2 H 6 and NO with CBN, the adsorption energy is only 75.49 kJ/mol, which is almost equal to the co-adsorption energy of C 2 H 6 and SO 2 .
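The ordering of these co-adsorption energies can be checked with the usual supermolecular definition of the adsorption energy; the short sketch below assumes E_co-ads = E(model + adsorbates) - E(model) - sum of E(adsorbate), reported as a positive binding magnitude. The total energies in it are illustrative placeholders, not values computed in this work.

```python
# Minimal sketch of a co-adsorption energy evaluation.
# Assumes the conventional supermolecular definition
#   E_ads = E(complex) - E(surface) - sum(E(adsorbates)),
# reported here as a positive binding magnitude in kJ/mol.
# All total energies below are illustrative placeholders, not results from this study.

HARTREE_TO_KJ_PER_MOL = 2625.4996  # unit conversion

def co_adsorption_energy(e_complex, e_surface, e_adsorbates):
    """Return the co-adsorption energy (positive = bound) in kJ/mol."""
    e_ads = e_complex - e_surface - sum(e_adsorbates)  # Hartree, negative when bound
    return -e_ads * HARTREE_TO_KJ_PER_MOL

# Hypothetical total energies (Hartree) for an OG-type model with C2H6 and SO2.
e_surface = -1371.250
e_c2h6, e_so2 = -79.830, -548.610
e_complex = -1999.715

print(f"E_co-ads = {co_adsorption_energy(e_complex, e_surface, [e_c2h6, e_so2]):.1f} kJ/mol")
```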
Conclusions
The co-adsorption of C 2 H 6 /SO 2 /NO by the activated carbon has been investigated at the molecular level.The presence of nitrogen-containing and oxygen-containing functional groups alters the ESP distribution of activated carbon models, thereby influencing the adsorption mechanism of activated carbon onto C 2 H 6 /SO 2 /NO.
The adsorption interaction of C 2 H 6 on the activated carbon plane is higher than that on the activated carbon edge, and the functional group has little effect on the adsorption of C 2 H 6 on the activated carbon plane.The non-covalent interaction of C 2 H 6 adsorption on the activated carbon edge is an H-bond.Except for the carbonyl group, other functional groups promote the adsorption of C 2 H 6 on the activated carbon edge.C 2 H 6 does not interact non-covalently with the carbonyl group, but forms an H-bond with the H-atom next to the carbonyl group, resulting in a slightly low adsorption energy.
The S-atom of SO 2 can form dipole-dipole interactions with the N-atom of the pyridine functional group, as well as the O-atoms of both the carboxyl and carbonyl groups, facilitating its adsorption at the activated carbon edge. Additionally, SO 2 can also be adsorbed on the activated carbon plane. For the co-adsorption of C 2 H 6 and NO, the N-atom of NO can form a strong C-N ionic bond with the C-atom of the benzene ring. NO is meta-adsorbed on the activated carbon model containing a pyrrole functional group, while ortho-adsorbed on activated carbon models containing other functional groups, but the adsorption site of NO will change due to the influence of C 2 H 6 and SO 2 . In particular, for the co-adsorption of C 2 H 6 /SO 2 /NO, the O-atom of NO forms a weak H-bond with the H-atom of the benzene ring on the activated carbon model containing a carbonyl group. The interaction between C 2 H 6 and SO 2 is dominated by the H-bond formed by the O-atom of SO 2 and the H-atom of C 2 H 6 . Both N- and O-atoms of NO can form weak H-bonds with H-atoms of C 2 H 6 . Moreover, the S- and O-atoms of SO 2 can form a dipole-dipole interaction with the O- and N-atoms of NO, respectively, thereby promoting mutual adsorption.
Figure 2. Color scale bar of the electron density.

Figure 4. Coloring method for mapping δG atom to atoms.

Figure 5. ESP distribution of (a) OG, (b) PD, (c) PR, (d) CBX, and (e) CBN. The surface local minima and maxima of the ESP are represented as cyan and orange spheres, respectively. The transparent ones correspond to the extrema on the backside of the graph.

Figure 6. Sign(λ 2 )ρ-colored vdW surface corresponding to IGMH analysis for the adsorption of C 2 H 6 at the edge of (a) OG, (b) PD, (c) PR, (d) CBX, and (e) CBN. Atoms are colored by δG atom to highlight their relative contributions. BCPs (orange spheres) and bond paths (brown lines) are also shown.

Figure 7. Sign(λ 2 )ρ-colored vdW surface corresponding to IGMH analysis for the adsorption of C 2 H 6 on the plane of (a) OG, (b) PD, (c) PR, (d) CBX, and (e) CBN. Atoms are colored by δG atom to highlight their relative contributions. BCPs (orange spheres) and bond paths (brown lines) are also shown.

Figure 8. Sign(λ 2 )ρ-colored vdW surface corresponding to IGMH analysis for the co-adsorption of C 2 H 6 and SO 2 on (a) OG, (b) PD, and (c) PR. Atoms are colored by δG atom to highlight their relative contributions. Bond paths (brown lines) are also shown.

Figure 9. Sign(λ 2 )ρ-colored vdW surface corresponding to IGMH analysis for the co-adsorption of C 2 H 6 and SO 2 on (a) CBX and (b) CBN. Atoms are colored by δG atom to highlight their relative contributions. Bond paths (brown lines) are also shown.

Figure 10. Sign(λ 2 )ρ-colored vdW surface corresponding to IGMH analysis for the co-adsorption of C 2 H 6 and NO on (a) OG, (b) PD, and (c) PR. Atoms are colored by δG atom to highlight their relative contributions. Bond paths (brown lines) are also shown.

Figure 11. Sign(λ 2 )ρ-colored vdW surface corresponding to IGMH analysis for the co-adsorption of C 2 H 6 and NO on (a) CBX and (b) CBN. Atoms are colored by δG atom to highlight their relative contributions. Bond paths (brown lines) are also shown.

Figure 12. Sign(λ 2 )ρ-colored vdW surface corresponding to IGMH analysis for the co-adsorption of C 2 H 6 , SO 2 , and NO on (a) OG, (b) PD, and (c) PR. Atoms are colored by δG atom to highlight their relative contributions. Bond paths (brown lines) are also shown.

Figure 13. Sign(λ 2 )ρ-colored vdW surface corresponding to IGMH analysis for the co-adsorption of C 2 H 6 , SO 2 , and NO on (a) CBX and (b) CBN. Atoms are colored by δG atom to highlight their relative contributions. Bond paths (brown lines) are also shown.

Table 1. Research methods and contents of activated carbon adsorption by DFT.

Table 2. Adsorption energy (kJ/mol) of C 2 H 6 on the plane and edge of the activated carbon model.

Table 3. Co-adsorption energy (kJ/mol) of C 2 H 6 /SO 2 /NO on the activated carbon model.
The Hot Corrosion of Metals by Molten Salts
Some theoretical mechanisms for the hot corrosion attack of metals and alloys under thin fused salt films are discussed. The chemical fluxing (dissolution) of the protective oxide on a pure metal is proposed to occur wherever the gradient in the oxide solubility in the salt at the oxide/salt interface is negative, such that a reprecipitation of dissolved oxide occurs. In turn, the oxide solubility gradient is established by the nature and site of the electrochemical reduction step, which always generates local basicity. The interrelation of the basicity gradient in the melt to the oxide solubility map decides the occurrence of continuing hot corrosion. In the hot corrosion of alloys, the possibility of a synergistic coupling of the dissolution for the several oxides of the components can result, depending upon the specific details of the oxide solubility plots and the local basicity as established by the site of the cathodic reduction step. Proceedings of The Electrochemical Society, PV 1981-10, 159-177 (1981)
Introduction
Alloys experience accelerated corrosion attack upon exposure at elevated temperatures to an oxidizing gas when a thin film of fused salt coats the surface. Corrosion problems related to the attack of metals by molten nitrates, carbonates, hydroxides, sulfates, coal ash, etc. are well known and very important to the functioning of many engineering systems.
In the operation of aircraft gas turbines near and over the ocean, a fused Na2SO4-NaCl film from an ingested sea-salt aerosol may coat the hardware and lead to accelerated oxidation of the turbine alloys. In this corrosion environment, the oxidant gas contains the products of fuel combustion including SO3 and excess O2. In this paper, some novel theoretical mechanisms and criteria for the hot corrosion of metals and alloys in Na2SO4 are presented. These mechanistic proposals, however, should find general applicability in the analysis of other metal/salt/oxidant corrosion reactions.
A chemical mechanism(s) of hot corrosion has been previously described in terms of an acid-base dissolution of the protective oxide film (Al2O3 or Cr2O3) on a high-temperature alloy or coating. Other authors have conducted electrochemical studies of metals submerged in deep sulfate melts to establish anodic and cathodic polarization curves. However, a generalized theory which integrates the chemical and electrochemical phenomena for corrosion beneath a thin salt film has not been proposed, nor have previous experiments been properly designed to evaluate the details of a mechanism for hot corrosion. The presentation of such a generalized mechanistic theory and suggested experiments is the purpose of the present paper.
A considerable insight into the mechanism of accelerated oxidation beneath a thin fused salt film can be won from the literature on aqueous solution corrosion at ambient temperature. The geometry of hot corrosion (thin film electrolyte coating) closely resembles that for aqueous "atmospheric corrosion", where the electrochemistry and the rate-limiting step have been analyzed by Mansfeld and Kenkel. However, considerable differences with respect to the ease of an electron transfer step and the rate of dissolved gas transport must be expected. Both for aqueous solutions and for fused Na2SO4, the thermodynamic stabilities of phases are graphically illustrated as a function of redox potential and an acid-base parameter by Pourbaix diagrams.
In Figs. 1 and 2, Pourbaix-type phase stability plots are presented for the Na-Al-S-O and Na-Cr-S-O systems for those conditions where Na2SO4 is the stable phase in the Na-S-O system. The ordinate of log P_O2 could be related to an electrode potential if a proper reference electrode to indicate log P_O2 were available. The basicity of the melt is described in terms of the thermodynamic activity of sodium oxide, a_Na2O, relative to pure sodium oxide. One such dissolution reaction, the basic dissolution of Cr2O3, is

Cr2O3 + 2 O2- + 3/2 O2 = 2 CrO4^2-   (basic dissolution)   (1d)

Alternatively, lacking a prior knowledge of the solute activity coefficients, or wishing to test the validity of Eqs. (1a-d), one can establish experimentally the solubility of the oxides in Na2SO4 as functions of P_O2 and a_Na2O.
Stroud and Rapp developed an electrochemical electrode to measure the sodium oxide activity in Na2SO4 melts equilibrated at given oxygen pressures. From Na2SO4 solutions equilibrated with excess α-Al2O3 or Cr2O3, samples were taken and analyzed to establish the solubilities of these oxides as a function of a_Na2O at fixed P_O2 at 1200 K. Superimposed plots for Al2O3 solubility at P_O2 = 10^-4 and 1 atm are given as Fig. 3. For the purposes of the present paper, we do not need to assume that the data of Fig. 3 are exactly correct; rather we only want to admit that in general the solubility curve(s) for any given oxide must resemble those of Fig. 3, with specific slopes for the acidic and basic solutes which are decided by the stoichiometric coefficients in the dissolution reactions. Of course, the solubility curves for some oxides may be complicated by the presence of more than two important solutes. For the discussion to follow, the solute species of Eqs. (1a-d) and the general form of the solubility curves of Fig. 3 are accepted as correct.
An Acid-Base Fluxing Mechanism For Hot Corrosion of a Pure Metal
As a criterion for the continued (stable) hot corrosion of a pure metal, we propose that the gradient at the oxide/salt interface in the solubility of the protective oxide (as acid or basic solute species) is negative, i.e.,

(d[oxide solubility]/dx)_{x=0} < 0   (2)

where x is the distance into the salt film measured from the oxide/salt interface. The concentration gradient criterion of Eq. (2) may be considered as an empirical condition for dissolution and reprecipitation in the salt. However, in an isothermal system, the concentration gradient would not serve as the driving force for the transport of the soluble ionic species, because no gradient in the chemical potential of the oxide would exist in a salt film in local equilibrium with the oxide throughout.
In general, the diffusion flux of the oxide, MO, can be expressed according to a linear law, Eq. (3), where μ_MO is the chemical potential of the component and L is the transport coefficient. When oxide is reprecipitated in the salt, μ_MO becomes uniform if local equilibrium is assumed. Then, in an isothermal system, the first term of the r.h.s. of Eq. (3) would be zero, as would the fourth term, while the third term should be negligibly small. But the second term of the r.h.s. of Eq. (3) would not be zero so long as the basicity gradient and the corresponding cross-term were not zero. The only other means for the transport of the soluble ionic species is local convection set up by density differences in the salt film. The Stokes sinking of the heavier oxide precipitates might in this case contribute to convective transport.
Because a reprecipitated oxide cannot form as a continuous protective layer, a voluminous, porous oxide product interspersed with salt is expected; this morphology is indeed representative of hot corrosion products. This model, illustrated schematically in Fig. 4, may be considered as an empirical criterion which assumes local equilibrium at the oxide/salt interface and throughout the salt film. The occurrence of oxide reprecipitation in the salt film is common with the acid-base fluxing mechanism proposed by Goebel and Pettit, but these authors assumed that basic fluxing arose from a chemical reaction to form sulfides (sulfidation) in the metal. Indeed, the reduction of the sulfate ion to form sulfide would release oxide ions at the oxide/salt interface in excess of those required for the growth of the oxide scale, and basic fluxing should occur. Later, acid fluxing of the scale was described by Goebel et al. as arising from the removal of oxide ions in the salt upon the formation of basic complexes by other metallic components in the metal, e.g., the formation of MoO3 following the oxidation of Mo in the alloy. In the present acid-base fluxing model for a pure metal, we propose that the gradient in the solubility of the protective oxide (as given by Fig. 3, for example) in the salt film is established by the local variation of sodium oxide activity, and perhaps P_O2, across the salt film. In turn, these conditions are established principally by the basicity necessarily generated at the site of the electrochemical reduction reaction, as well as the chemical interaction between the oxidant and the salt. As in any electrochemical process, the open-circuit half-cell potential E for each possible reduction reaction is expressed as:
E = E_0 - (RT/nF) ln ( a_red / a_ox )   (3)

for the half-cell reduction reaction

oxidized species + n e- = reduced species

where E_0 is the standard open-circuit half-cell potential for the reduction reaction. A tentative, standard electrochemical reduction series for Na2SO4 at 1200 K is proposed as Table I. Quantitative values for the redox reactions of Table I have not been determined, and indeed, a standard reference electrode has not been decided. Perhaps a Au/O2-SO2 (1:2) electrode, as suggested by Rahmel, would represent a suitable standard reference electrode (equivalent to the standard hydrogen electrode), with the Ag/10 mole % Ag2SO4 : Na2SO4 electrode serving as the equivalent of the calomel electrode in aqueous solutions.
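A short numerical illustration of Eq. (3) is given below; the standard potential, the number of electrons and the activity ratio are illustrative assumptions, not values taken from Table I.

```python
# Sketch of the open-circuit half-cell potential of Eq. (3):
#   E = E0 - (RT/nF) * ln(a_red / a_ox)
# E0, n and the activities below are illustrative assumptions, not data from Table I.
import math

R = 8.314     # J/(mol K)
F = 96485.0   # C/mol
T = 1200.0    # K, the temperature used throughout this discussion

def half_cell_potential(e0, n, a_red, a_ox):
    """Potential (V) for the reduction 'oxidized species + n e- = reduced species'."""
    return e0 - (R * T) / (n * F) * math.log(a_red / a_ox)

print(f"E = {half_cell_potential(e0=0.10, n=2, a_red=1e-3, a_ox=1e-1):.3f} V")
```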
According to Table I, the effects of melt basicity, the oxidation potential of the melt, the gaseous environment, and the presence of transition metal ions are all important in deciding the predominant reduction reaction, as is also the case in aqueous solution corrosion. In a highly basic melt (high a_Na2O), reduction reactions a and b should be favored. In a highly acidic melt (perhaps a_Na2O < 10^-15), reduction reaction c would be favored, at least for a salt film thin enough to support the necessary molecular SO3 and SO2 countertransport. As illustrated in Fig. 5a, oxygen reduction by reaction d would be favored only for reasonably neutral, thin films of Na2SO4, as will be discussed later. If the metal or alloy should provide dissolved transition metal ions to a highly oxidized melt, then reactions e-g could predominate, and should introduce a very important shift in the site of the reduction reaction to the salt/gas interface, as shown in Fig. 5b.
For dilute M2+ and M3+ concentrations, counterdiffusion of these species would be required. But if the total concentration of the mixed-valence transition metal ions were sufficiently high, then electronic exchanges between these ions (M2+ + M3+ = M3+ + M2+) could introduce a more rapid transport by electronic conduction through the melt. Electronic conduction was established for SiO2-CaO-FeO-Fe2O3 slags at high temperatures by Engell and Vygen.
If all these reactions with higher E_0 exhibit a lower E because of concentration polarization according to Eq. (3), then the sulfate anion itself can be reduced according to a sequence of reduction reactions such as h-k, although the details of these reactions have not been specifically established. The reduction of the sulfate anion is obviously favored for deep, reasonably neutral melts, especially at the start of a reaction before the transition metal ions become available. Because of the general availability of sulfate anions, reduction of the anion would certainly not be limited by a slow arrival flux to the oxide/metal interface. The reduction of the sulfate anion could lead to such high local levels of sulfur activity that the formation of metal sulfides could occur; frequently, hot corrosion is associated with simultaneous sulfidation. However, when sulfidation is viewed as the result of only one of the possible reduction reactions, hot corrosion is not necessarily tied to sulfidation.
The other half of the electrochemical hot corrosion mechanism, the oxidation half-cell reaction, is the oxidation of the metal at the metal/scale interface. As revealed by the Pourbaix thermodynamic diagrams, most metals could not exist in local equilibrium in contact with Na2SO4, so generally an intervening electronically-conducting oxide film is expected. The oxidation of the metal at the metal/oxide interface, with the ensuing transport of cations plus electrons to the oxide/salt interface, would equal the rate of the electrochemical reduction reaction. A time-independent rate of oxide dissolution at the oxide/salt interface with an ensuing oxide reprecipitation would lead to the maintenance of a constant steady-state oxide thickness and constant steady-state reaction kinetics, as is frequently found in hot corrosion. An exactly analogous model exists for the coupling of scale growth and scale evaporation in metal/gas reactions. Then for "linear" hot corrosion kinetics at steady state, if the interfacial reactions can be reasonably assumed to satisfy local equilibrium, the corrosion rate would be controlled by the diffusion of cations through the oxide in series with diffusion of reactants and products of the electrochemical reduction reaction through the salt film.
Let us return to an integration of the electrochemical reduction reaction into the hot corrosion criterion of Eq. (2) as illustrated in Fig. 4. Each electrochemical reduction reaction of Table I introduces increased local basicity at the reduction site. As illustrated in Fig. 5a, the reduction reaction would usually be expected to occur at the oxide/salt interface, but as shown in Fig. 5b, the presence of mixed-valence transition metals could allow the reduction step to occur at the salt/gas interface. In either case, the site of the reduction reaction would be expected to be the most basic local condition in the salt film. Figure 6 illustrates a schematic oxide solubility plot with the superposition of four different sets of relative basicities at the salt/gas interface II and at the oxide/salt interface I which would set up and support continued hot corrosion of a pure metal according to the model outlined here. In each case, the condition of Eq. (2) as illustrated in Fig. 4 is satisfied. Of course, for some oxides, the solubility of the oxide (Cr2O3, for example) also depends upon P_O2, so that a three-dimensional diagram would be more suitable.
According to cases A and C of Fig. 6, the basicity gradient in the salt film is opposite in direction, but then the mode of oxide dissolution is also opposite. For case B, one would expect continued hot corrosion whenever the local basicities at interfaces I and II straddle the minimum. In general, one would expect that the value of the basicity at the salt/gas interface would be fixed approximately by the value of P_SO3 in the ambient atmosphere, unless the salt basicity were somehow otherwise established by the detailed mode of continuing salt deposition. If the relative basicities at the interfaces I and II for cases A and C of Fig. 6 were reversed, then the hot corrosion criterion of Eq. (2) would not be met, and one would expect the entire salt film to saturate with the oxide consistent with the basicity at interface I, after which time accelerated hot corrosion should stop. Nickel coated with a film of Na2SO4 and exposed to air experiences transient accelerated hot corrosion. To test this dissolution model for the hot corrosion of a pure metal, the oxide solubility as a function of P_O2 and basicity (log a_Na2O) must be known, and experimental determinations of P_O2 and basicity at both the oxide/salt and the salt/gas interfaces are required. The latter determinations have not been previously made for hot corrosion beneath thin fused salt films.
In Fig. 7, a novel experimental arrangement is introduced for electrochemical studies of the hot corrosion of metals beneath thin fused salt films. Two monitoring electrodes are provided, a ZrO2-CaO oxygen sensor and a mullite-electrolyte probe with fixed P_SO3 and P_O2, so that, combined with the reading of the zirconia probe, the local basicity at the corroding metal and at the gas/salt interface (Au electrode) can be established. These are exactly the measurements required to test the mechanism and criterion for hot corrosion outlined here. Such measurements have been completed for several metals in several gaseous ambients and will be reported shortly.
The experimental arrangement of Fig. 7 further provides the novel possibility for anodic and cathodic polarization studies beneath a thin layer of fused salt.
In such studies, either gold or the ZrO2 or the mullite probe could be used as the reference electrode, with the corroding metal as the working electrode and a gold electrode as the counter electrode. Because the solubility of the oxide film on the working electrode may depend upon both P_O2 and a_Na2O, two probes are again required to independently track these parameters at the corroding metal (working electrode) surface. The past use of a Ag/Ag2SO4 electrode as the sole reference electrode in polarization studies in deep melts fixes only a single ratio of activity to (partial pressure)^1/2 at the oxide or metal/salt interface, and this control may be inadequate to specify or know the local solubility.
As a further use of the arrangement of Fig. 7, a freely corroding specimen can be shorted through a high-impedance microammeter to an immersed gold electrode to form a galvanic couple. As applied by Mansfeld and Kenkel to study atmospheric corrosion under aqueous thin films, the galvanic current can represent exactly the corrosion current when transport through the electrolyte film constitutes the rate-limiting step. Preliminary electrochemical thin-film polarization and galvanic coupling studies have been completed and will be reported shortly.
Mansfeld and Kenkel have shown that the kinetics of atmospheric corrosion are limited by the diffusion of molecular oxygen through the thin aqueous surface film. From recent measurements of the solubility and diffusivity of molecular oxygen in molten Na2SO4, a Fick's first law calculation can be made to test the rate of arrival of oxygen for the cathodic reduction reaction. Such a calculation indicates that: 1) in rapid hot corrosion, the arrival of oxygen may be inadequate to supply the cathodic reduction of oxygen, and 2) for hot corrosion experimentation in deep (crucible) melts, oxygen reduction cannot represent the cathodic step. Understandably, in the experiment of Goebel and Pettit, nickel sulfide was observed as a corrosion product from the reduction of the sulfate anion. Finally, the hot corrosion of metals beneath a thin fused salt film and the atmospheric corrosion of metals beneath aqueous films do not generally share the same rate-limiting step (transport-limited oxygen reduction).
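The form of such an estimate is sketched below; the diffusivity, solubility and film thickness are placeholder assumptions for illustration only, not the measured values cited above.

```python
# Fick's-first-law sketch of the O2 flux through a thin fused Na2SO4 film and the
# corresponding transport-limited current density for O2 + 4 e- = 2 O2-.
# D, c_sat and delta are placeholder assumptions, not the cited measurements.

F = 96485.0      # C/mol
n = 4            # electrons per O2 molecule in the cathodic reduction

D_o2  = 1.0e-9   # m^2/s, assumed diffusivity of molecular O2 in the melt
c_sat = 0.05     # mol/m^3, assumed O2 solubility at the gas/salt interface
delta = 10.0e-6  # m, assumed salt film thickness

flux  = D_o2 * c_sat / delta   # mol/(m^2 s), taking c ~ 0 at the oxide/salt interface
i_lim = n * F * flux           # A/m^2, transport-limited current density

print(f"O2 flux = {flux:.3e} mol m^-2 s^-1")
print(f"i_lim   = {i_lim:.3e} A m^-2")
```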
Finally, subsequent to the mechanism and criterion for continued hot corrosion of a pure metal proposed here, one might inquire about the behavior expected for the hot corrosion of a binary A-B solid solution alloy. Unlike some of the variable-composition, amorphous corrosion products formed in aqueous solutions, at elevated temperatures the products usually are crystalline and frequently exhibit little mutual solubility. Thus, the adherent corrosion products in fused salt corrosion might be represented by AO and BO, each of which would exhibit solubility curves similar to those shown in Fig. 3 for Cr2O3 and Al2O3.
These solubilities would depend upon a_Na2O and perhaps upon P_O2. The two solubility curves would naturally exhibit some lateral and vertical displacements relative to each other. One may consider Fig. 3 as illustrative of the superposition of two solubility curves.
In highly acidic (low a_Na2O) salts, the AO and BO oxides would each exhibit acidic dissolution, which would provide oxide ions as a soluble product of each oxide. Under this condition, i.e., for interfacial values of a_Na2O to the right of each solubility minimum, the previously proposed criterion for a pure metal could be applied to each oxide separately. However, in an environment which would fix the a_Na2O value at the oxide/salt interface at a value between the two minima, the oxide with its minimum on the right would exhibit basic dissolution upon complexing with oxide ions, while the other oxide would experience acidic dissolution to release soluble oxide ions. Under this condition, the combination of an acidic and a basic dissolution, rapid synergistic attack (AO + BO → AO2^2- + B^2+) would be expected.
Obviously, a knowledge of the solubility curves forms the basis for testing and avoiding this condition. But further, the experimental assembly of Fig. 7 is required to establish the independent values of a_Na2O and P_O2 at the oxide/salt interface. When the electrochemical probes indicate that the oxide/salt interface lies to the left of (is more basic than) either of the minima of the solubility curves, then each oxide should exhibit basic dissolution without rapid synergistic attack. The previously suggested criterion would then be individually applied for each product oxide.
In conclusion, we suggest here that accelerated hot corrosion corresponds to a rapid dissolution and reprecipitation of an oxide film, a process supported by a negative gradient in the solubility of the oxide across the salt film. The local values of the oxide solubility should be established by an increased basicity necessarily introduced at the site of the electrochemical reduction reaction. A novel experimental arrangement is proposed to test these ideas by independent measurements of a_Na2O and P_O2 locally in thin fused salt films. A model is suggested for the occurrence of rapid synergistic hot corrosion of a binary alloy.
Solving the plastic dilemma: the fungal and bacterial biodegradability of polyurethanes
Polyurethane (PU) is a plastic polymer which, due to its various desirable characteristics, has been applied extensively in domestic, industrial and medical fields for the past 50 years. Subsequently, an increasing amount of PU waste is generated annually. PU, like many other plastics, is highly resistant to degradation and is a substantial threat to our environment. Currently PU wastes are handled through conventional disposal techniques such as landfill, incineration and recycling. Due to the many drawbacks of these techniques, a ‘greener’ alternative is necessary, and biodegradation appears to be the most promising option. Biodegradation has the potential to completely mineralise plastic waste or recover the input materials and better enable recycling. There are hurdles to overcome however, primarily the efficiency of the process and the presence of waste plastics with inherently different chemical structures. This review will focus on polyurethanes and their biodegradation, outlining the difficulty of degrading different versions of the same material and strategies for achieving more efficient biodegradation. Supplementary Information The online version contains supplementary material available at 10.1007/s11274-023-03558-8.
Introduction
Polyurethane (PU) has had extensive applications in the domestic, industrial and medical fields for the past 50 years. Its intrinsic structure gives it high tensile strength, high resistance to hydrolysis and a high melting point (Ibrahim et al. 2009;Loredo-Treviño et al. 2012). It is synthetically prepared and used in adhesives, foams, food-grade coatings, insulators, tyres, sponges and many more products (Matsumura et al. 2006). The global PU production in 2016 was around 22.3 million tonnes with a growth rate of 4.0% per annum (Austin and Hicks 2017). In Europe alone polyurethane contributed to nearly 7.7% of the total plastic demand; around 4 million tonnes in 2017 (PlasticsEurope 2018). In Australia it is estimated that approximately 2.5 million tons of plastic waste were generated in the year 2016-17, out of which only 12% was recycled and 87% was dumped in landfills, where it will remain for many years to come (Pickin et al. 2018). If the world continues producing plastic waste at the current rate, then over 12000Mt will accumulate by the year 2050 (Geyer et al. 2017). This review will focus on polyurethanes and their biodegradation, in order to define in greater detail this portion of the larger issue of the environmental hazards of plastic.
Environmental impact of polyurethanes
Increase in PU demand over the last 50 years has led to a major bottleneck in determining an efficient and clean method for its disposal. Subsequently, it accumulates in the environment, both terrestrial and aquatic. Due to their low density, most plastics tend to float, and so are carried by floods, high tides etc. into waterways and finally reach the oceans, where they cause irreversible damage to that ecosystem (Derraik 2002). Various surveys have revealed that out of the five major ocean gyres, the South Pacific subtropical gyre and the eastern side of the North Pacific Ocean gyre are the hotspots for the accumulation of plastic debris (Kaiser 2010; Eriksen et al. 2013; Law et al. 2014). The lack of suitable waste management practices in many coastal countries is directly related to this problem (Jambeck et al. 2015). Almost 90% of the oceanic plastic waste comes from just ten rivers which originate from Asia and Africa (Schmidt et al. 2017), and studies have shown that at least 690 marine species are directly affected by plastic debris, be it through ingestion, entanglement, and/or smothering (Gall and Thompson 2015).
A major concern is the formation of microplastics from these debris. Plastic waste undergoes limited degradation in the environment and fragments into microplastics (Rillig 2012;Su et al. 2019). Due to their minute size (< 5 mm), they easily invade the local food chain and bioaccumulate. Microplastic contamination has reached a stage where they have even been found in the placentas of unborn babies (Ragusa et al. 2021). Microplastics also have a tendency to absorb organic pollutants from the surrounding environment, becoming toxic (Rillig 2012;Zhu et al. 2019). Persistent organic pollutants (POPs) such as pesticides, polycyclic aromatic hydrocarbons (PAHs) and polychlorinated biphenyls (PCBs) are of particular concern (Rios et al. 2010;Fisner et al. 2013).
Polyurethane-production and properties
Polyurethane was first produced in the 1930s by Otto Bayer and his team in pursuit of a new polymer to compete with Nylon 6,6. Initially, they produced polyurea by reacting diisocyanates with either aliphatic or aromatic diamines, but these polymers were too hydrophilic for their use as plastics. By exchanging diamines for diols, they were able to produce polyurethanes, which proved much more useful. One polymer produced by reacting 1,4-butanediol and hexamethylene diisocyanate (HDI) had very similar properties to Nylon, but with better electrical and mechanical stability, and so quickly became widely popular (Heath and Cooper 2013).
Properties and characteristics
Polyurethanes are segmented polymers made up of crystalline and amorphous regions. The crystalline segments provide the polymer with strength, while the amorphous segments impart flexibility. The hardness and the elasticity of polyurethane can be customised by controlling the ratio of each, making polyurethanes suitable for a range of products, from rigid foams to elastomer fibres (Shelke et al. 2014). Due to this versatility, polyurethanes are applied as dispersions, coatings, adhesives, foams, fibres and more. Over the last two decades, due to their biocompatibility they have been useful in biomedical fields, such as in the making of artificial pacemakers, blood bags, catheters, insulators, grafts etc. (Heath and Cooper 2013). Table 1 lists some physical properties of some representative polyurethanes, demonstrating the versatility of this group of polymers.
Polyurethane chemistry, synthesis and manufacturing
Polyurethanes are synthesised by reacting diisocyanates, polyglycols and chain extenders in a specific order and proportion depending on the type of polyurethane needed (Heath and Cooper 2013). They are produced generally by the exothermic reaction between the reactive hydroxyl groups of the polyglycols and the isocyanate groups of the diisocyanates present in excess (Shelke et al. 2014). This initially results in the formation of a prepolymer with regular urethane groups as shown in Fig. 1. This prepolymer is then reacted with chain extenders; commonly low molecular weight diols or diamines (Shelke et al. 2014). Some of the commonly used diisocyanates, diols and chain extenders are listed in Table 2.
Different combinations of diisocyanates and diols give PU materials of varied characteristics. Polyglycols such as polyethers, polyesters, polycaprolactones and polycarbonates constitute the major fraction of polyurethanes (Mahajan and Gupta 2015). The polyglycol segments of polyurethane chains account for the amorphous sections of the bulk materials and so longer segments/higher content of polyglycols results in a softer PU with greater flexibility. The crystalline segments are mainly due to extensive hydrogen bonding between the urethane groups formed from the diisocyanates. Using shorter and/or lower content of polyglycols or using shorter diisocyanates during the synthesis of PU increases the frequency of urethane bonds and will result in a harder, more rigid PU (Young and Lovell 2011).
The ratio of the amorphous to crystalline segments and the specific polyglycols used will determine how susceptible a PU is to biodegradation. Softer (i.e. more amorphous) segments and polyester polyglycols are preferred for a more biodegradable PU (Kim and Kim 1998). Polyester PUs are more biodegradable due to the presence of hydrolysable ester moieties that are more prone to microbial enzyme attacks, and more amorphous chain packing makes these groups more accessible (Young and Lovell 2011;Mahajan and Gupta 2015;Kemona and Piotrowska 2020;Jin et al. 2021).
Conventional disposal of polyurethane
Polyurethane products are commonly 'single use' items, and so are often discarded a short time after having fulfilled their intended purpose. There are three conventional methods employed to dispose of plastic waste and attempt to mitigate its accumulation. Each has its limitations, as outlined below, and alternative approaches such as large-scale biodegradation must be developed.
Landfill
Landfill is the simple act of burying wastes and leaving them to their fate. Plastics, and especially polyurethane, are highly resistant to environmental degradation, and this is exacerbated by the limited exposure to sunlight and oxygen. Thus they persist for a very long time, and the land where these wastes are buried remains occupied for many years to come (Yang et al. 2012). Landfill is slow and occupies space that could otherwise be used for more productive purposes (e.g., farming, energy production). Another drawback is the slow and steady release of microplastics and other harmful chemicals into the surrounding soil. This obviously impacts the immediate environment, but the effects can spread when these materials leach off during rain and floods and/or are otherwise transported into nearby water bodies (Oehlmann et al. 2009; Teuten et al. 2009; Su et al. 2019). Because of this, disposal in landfill is not an efficient or future-friendly alternative.
Incineration
By burning plastics some of the energy contained in the material can be recovered in the form of heat, which can then be converted to electricity or other mechanical force. It is estimated that burning a kilo of polyurethane waste yields a calorific value of around 7000 kcal/kg, which is equivalent to coal (Yang et al. 2012). Incinerating PU waste also reduces its solid volume by 99%, thus eliminating the need for huge land space associated with landfills. However, this method has a very big disadvantage of releasing a large number of air pollutants. Carbon dioxide (CO 2 ), carbon monoxide (CO), hydrogen cyanide (HCN), hydrogen halides, oxides of nitrogen, isocyanates and more are released into the atmosphere (McKenna and Hull 2016). Air pollution control systems (APCs) can offset some of these pollutants, but not all incineration plants are capable of achieving good emission standards (Makarichi et al. 2018). Some of these compounds are chemical asphyxiant gases, like CO and HCN, and cause irreversible damage and death (Hartzell 1996). Hydrogen halides, isocyanates and oxides of nitrogen are irritant gases and cause a range of complex symptoms from skin irritation, tears, and pain in the chest to severe respiratory disruption. Carbon dioxide is of course one of the most powerful greenhouse gases and the primary cause of the greenhouse effect (McKenna and Hull 2016).
Recycling
Polyurethanes are commonly recycled via chemical processing or mechanical processing (Yang et al. 2012). As the name suggests, chemical processing involves the use of chemicals to create useful raw materials from scrap PU (Yang et al. 2012). The polyurethane is depolymerised by chemolysis, generating end products dependent on the chemicals used and the PU composition. Chemical processing can be performed by glycolysis/alcoholysis (using low molecular weight alcohols), hydrolysis (water), aminolysis (alkanolamine) and phosphate ester methods (dimethyl phosphonate) (Xue et al. 1995; Troev et al. 2000; Yang et al. 2012). The end products are then purified and used as raw materials for producing new plastic. The process is however costly and tedious, has high safety requirements and can produce hazardous by-products. Mechanical processing is somewhat more convenient and cheaper. Mechanical recycling of polyurethane is performed by grinding into granules, powder or flakes via mechanical force and using these in the production of new materials. The chemical structure of the scrap PU remains essentially the same and only the physical form changes. PU flakes can be re-bound with isocyanate or coated with binders and pressed under heat to produce new items. Granules or powder can be used as a filler material in making new parts, and granules can also be injection-moulded to make recycled PU materials. The scrap granules are exposed to high temperatures (~ 180 ℃) and high pressure (~ 350 bar), which allows the particles to melt and integrate/bind without using any additional binding agents (Scheirs 1998; Yang et al. 2012). All in all, while recycling overcomes the limitations of landfill and incineration, it is generally costly and not very efficient, and the physical integrity of the product decreases with every cycle.
A comparison of the three standard disposal methods is presented in Table 3.
Environmental degradation of polyurethane
There are four types of degradation that polymers such as polyurethane can undergo in the environment: photodegradation, thermo-oxidative degradation, hydrolytic cleavage and biodegradation (Andrady 2011). Degradation is commonly kick-started by photodegradation, which triggers thermo-oxidative degradation. Hydrolytic cleavage can also occur, further facilitating fragmentation. Microbes in the environment can potentially exert their limited influence at any point (Cosgrove et al. 2007; Andrady 2011), but mineralisation only occurs to a limited degree (Fig. 2). The degree to which each type of degradation occurs largely depends on the environment and the polymer in question (Stokes et al. 1995; Christenson et al. 2004).
Photo-oxidative degradation
Ultraviolet light spans wavelengths of roughly 295-380 nm. Bonds such as C-C, C-H and O-H, which are commonly found in polymers, absorb light of wavelengths below 200 nm, whereas carbonyls and conjugated double bonds absorb light between 200 and 300 nm. Most plastics also contain impurities or additives which absorb UV light. UV radiation excites electrons in either the polymer or the impurities, producing an excited species R* and supplying the energy to transfer them to an acceptor, most commonly O2 (I). This causes the formation of RO2* free radicals, which react with the polymer (RH), forming hydroperoxide groups (II). This reaction results in the breakage of the polymer chain. Hydroperoxide is unstable under light and increased temperature and breaks down to give two free radicals, RO* and HO* (III), which are then available to continue the process (Feldman 2002). Photo-oxidative degradation occurs preferentially at the material surface due to the low permeability of oxygen and generally results in the yellowing of clear polymers or discolouration of coloured ones, loss of surface shine and surface cracking (Singh et al. 2001; Feldman 2002).
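As a schematic summary of the radical chain just described (our shorthand, not taken from the cited source; R denotes the polymer or a UV-absorbing impurity, RH a polymer chain, and the steps are labelled I-III as in the text):

```latex
% Schematic radical-chain steps (I)-(III) as described in the text
\begin{align*}
\text{(I)}\;\;  & R \xrightarrow{\;h\nu\;} R^{*}, \qquad R^{*} + \mathrm{O_2} \longrightarrow \mathrm{RO_2^{\bullet}} \\
\text{(II)}\;\; & \mathrm{RO_2^{\bullet}} + RH \longrightarrow ROOH + R^{\bullet} \quad \text{(chain scission)} \\
\text{(III)}\;\;& ROOH \xrightarrow{\;h\nu,\ \Delta\;} RO^{\bullet} + HO^{\bullet}
\end{align*}
```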
Hydrolytic degradation
Certain functional groups within the backbone of a polymer can render it susceptible to hydrolysis. Figure 3 shows some possible hydrolytic degradation reactions that can occur in various types of PU. Hydrolysis can occur at ester bonds, urethane bonds and urea bonds present in the polymer. Hydrolysis of ester bonds cleaves the polymer chain, with one of the newly generated free ends terminated by a carboxylic acid group and the other by an alcohol (Fig. 3.1). Hydrolysis of a urethane bond yields an alcohol and an amine, emitting a molecule of CO2 in the process (Fig. 3.2). Similarly, hydrolysis of a urea bond yields two amines and the emission of CO2 (Fig. 3.3) (Cauich-Rodríguez et al. 2013). Hydrolytic degradation is limited both by the availability of water and the access it has to the relevant functional groups.
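Written schematically (generic structures only; R and R' stand for arbitrary polymer segments and are not taken from the cited figure), the three reactions are:

```latex
% Schematic hydrolysis of ester, urethane and urea linkages in PU
\begin{align*}
\text{ester:}    \quad & R{-}CO{-}O{-}R' + \mathrm{H_2O} \longrightarrow R{-}COOH + R'{-}OH \\
\text{urethane:} \quad & R{-}NH{-}CO{-}O{-}R' + \mathrm{H_2O} \longrightarrow R{-}NH_2 + R'{-}OH + \mathrm{CO_2} \\
\text{urea:}     \quad & R{-}NH{-}CO{-}NH{-}R' + \mathrm{H_2O} \longrightarrow R{-}NH_2 + R'{-}NH_2 + \mathrm{CO_2}
\end{align*}
```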
Biodegradation
During biodegradation, microbes in the environment can use the plastic as a carbon and/or energy source, incorporating carbon liberated from the polymer backbone into their own structures or converting it to carbon dioxide or small organic molecules and thereby generating energy. The biodegradation process can be divided into four stages. The first, biodeterioration, is where biotic factors such as the action of enzymes, production of organic acids, or oxidation processes cause deleterious changes to the polymer, generally starting from the surface and affecting the matrix (Lucas et al. 2008). This is followed by biofragmentation, where the long polymeric chain is cleaved at specific vulnerable sites such as ester groups. This type of action mainly results in the formation of hydrophilic monomers/oligomers (Jin et al. 2021). These compounds are more easily taken up and metabolised by other microbes in the vicinity, leading to "bioassimilation". The final stage, mineralisation, is where the carbon in the polymer is ultimately converted to cell mass and carbon dioxide.
Biodegradation of polyurethane
A trend is seen in polyurethane biodegradation in which the amorphous segments of the PU start disintegrating before the crystalline segments (Huang and Roby 1986). The amorphous segments of PU consist of less densely packed polymer chains and contain functional groups such as esters, which are easily hydrolysed (Jin et al. 2021). These regions are more readily accessible to the microbes for biodeterioration, unlike the crystalline regions which are denser and more orderly packed (Kemona and Piotrowska 2020). Biofragmentation can then more readily occur (Ibrahim et al. 2009;Thirunavukarasu et al. 2015), and subsequently greater bioassimilation (Kemona and Piotrowska 2020).
The rate at which polyurethanes biodegrade depends greatly on the material composition and properties. Magnin et al. (2019a) determined how efficiently four thermoplastic PUs (TPUs) with different macromolecular architectures were degraded by fungi. They found that poly(ester ether)-based TPU and polyether-based TPU showed no significant weight loss over two months, whereas TPUs containing polycaprolactone or fatty acid dimers lost between 1.7% and 9% of their mass after two months. Pfohl et al. (2022) examined the effect of hard segment content, presence of hydrolysis stabilisers and cross-linking on degradability. It was found that the extent and rate of biodegradation decreased as the density of cross-linking and the amount of hard segment increased, and there was less fragmentation of the PU particles in the presence of hydrolysis stabilisers. The greatest extent of biodegradation they observed was 72.3%.
To make biodegradation viable for eliminating PU waste, a thorough understanding of how the organisms biodegrade polyurethane is necessary. Properly facilitated, biodegradation has the potential to achieve complete mineralisation of plastic waste. It can often be a commensal process, requiring different organisms to contribute to each stage of biodegradation to different degrees (Lucas et al. 2008). The main players in the biodegradation of plastics are bacteria and fungi, and in the case of polyurethane degradation fungi dominate. Each group, however, degrades polyurethane in its own ways, with inherent pros and cons (Crabbe et al. 1994; Barratt et al. 2003; Cosgrove et al. 2007).
Fungal biodegradation
Fungi have higher enzyme diversity and greater biotic and abiotic stress tolerance than bacteria, and thanks to this they dominate the biodegradation of polyurethane. Although many bacteria can degrade PU, many studies have shown that samples of PU collected from dumping sites were more heavily colonised by fungi than bacteria (Barratt et al. 2003; Cosgrove et al. 2007). Examples exist in the literature of fungi that have been shown to be able to survive using polyurethane as a sole carbon and/or energy source (Khruengsai et al. 2022; Liu et al. 2023); however, degradation rates can vary significantly by material. For example, Liu et al. (2023) described a strain of Cladosporium which could solubilise 32.42% of a poly(butylene adipate)-based material over 28 days, while Magnin et al. (2019a) achieved 8.9% weight loss of a polycaprolactone-based material over two months with a strain of Penicillium. Fungi are also known to secrete a wide range of substances, including enzymes, into their surroundings (Yang 2006). This is a useful trait for PU biodegradation, because for the bulk polymer to degrade it must come in direct contact with hydrolytic enzymes. Supplementary Table S1 gives a brief list of fungi that have been identified as having PU-degrading potential, and the substrates they were tested on.
In addition to producing hydrolytic enzymes, the mycelia of growing fungi can exert pressure on the surface of the PU, cracking it and providing a larger exposed surface area for biodegradation to take place (Khan et al. 2017). These mycelia can also grow into the cracks, further weakening the PU in the process. Magnin et al. (2019a) tested three different fungi for their capacity to degrade various PUs, and when samples were viewed by scanning electron microscopy (SEM) after treatment with a strain of Penicillium, irregular channels on the surface of the PU were observed. It was also observed that the diameter of the Penicillium filaments and the width of the channels were very similar. In the same study, a strain of Alternaria produced deep holes on the surface, and treatment with a strain of Aspergillus made the surface of the PU appear irregular with occasional small cracks (Magnin et al. 2019a).
Bacterial biodegradation
While bacteria are generally considered less potent than fungi when it comes to degrading polyurethane, many pieces of evidence suggest that they do have good potential to do so. In previous literature the majority of the bacteria that can utilise PU belong to the genera Bacillus, Pseudomonas, and Acinetobacter (Kay et al. 1991;Gautam et al. 2007;Howard et al. 2012;Shah et al. 2013;Nakkabi et al. 2015;Rafiemanzelat et al. 2015;Hung et al. 2016;Vargas-Suárez et al. 2019;Espinosa et al. 2020). Amongst these bacteria, P. aeruginosa ATCC 13388 is recommended by the American Society for Testing and Materials (ASTM) for testing whether a polymer is resistant to bacterial degradation (Kay et al. 1991). Some other genera such as Comamonas (Nakajima-Kambe et al. 1995), Micrococcus (Rafiemanzelat et al. 2015;Vargas-Suárez et al. 2019), Alicycliphilus (Pérez-Lara et al. 2016), Corynebacterium (Shah et al. 2008) and Staphylococcus (Curia et al. 2014) have also been associated with PU biodegradation. Supplementary Table S2 contains a list of bacteria that can degrade PU and the types of PU they are known to be able to degrade.
In a study carried out by Shah et al. (2013), polyester PU film pieces were incubated with a strain of Pseudomonas aeruginosa for 4 weeks. When the surfaces were analysed using scanning electron microscopy (SEM), the bacteria were seen attached to the polymer surface with small cracks originating from the point of contact, suggesting that the bacteria had some degradative effect on the polymer. Rafiemanzelat et al. (2015) showed holes, pits, black spots, erosions and cracks on the surface of a PU block copolymer after incubation with Bacillus amyloliquefaciens M3. A 30-40% reduction in the PU's weight was also observed. FTIR analysis showed that absorption bands corresponding to C=O (urethane, amide, and urea), N-H, C-N, C-O, and C-H changed significantly compared to the control, indicating a change to its chemical backbone. Curia et al. (2014) also demonstrated the formation of micro- and nano-sized particles from bulk PU surfaces by Staphylococcus aureus.
Most research focuses on isolating PU-degrading microbes from environmental samples in vitro, but there is another approach that is proving to have even greater potential, i.e. degradation of PU with the help of insects and their gut microbiome (Liu et al. 2022; Yang et al. 2023). The larvae of Tenebrio molitor can grow on a diet consisting purely of polyether-PU foam, resulting in a 67% weight loss of the PU foam after 35 days. The survival rate of these larvae was similar to that of bran-fed larvae. The larval frass contained foam fragments that showed ether and urethane bond scission, indicating partial biodegradation. Through high-throughput sequencing, it was confirmed that there was an increase in the microbial populations belonging to the families Enterobacteriaceae and Streptococcaceae in the polyether-PU foam-fed larvae (Liu et al. 2022).
Enzymes involved
A comprehensive mechanism underlying the degradation of polyurethane by microbes is not known, but the enzymes involved have esterase, protease or lipase activity and may be membrane-bound or secreted (Akutsu et al. 1998;Howard and Blake 1998;Howard et al. 2001). A study carried out on an esterase purified from the bacterium Comamonas acidovorans showed that the enzyme degrades the PU in two steps: first, binding of the enzyme to the PU surface hydrophobically through the enzyme's surface-binding domain and second, hydrolysing the ester bonds present via its catalytic domain (Akutsu et al. 1998). This type of action by the membrane-bound enzymes could have advantages over secreted enzymes as it can overcome the hydrophobic nature of many polyurethanes (Barcoto and Rodrigues 2022). In terms of fungal enzymes, only a few studies related to their characterisation have been published to date. Supplementary Table S3 lists some of the enzymes that have been purified, their sources and the PU substrate(s) they were tested upon. For more effective degradation multiple enzymes can work cooperatively; a mixture of esterase and amidase was proven to work synergistically to degrade PU and enable recovery of some of its building blocks (Magnin et al. 2019b). The esterase enzyme performed initial biofragmentation of PU, releasing fragments of low molecular mass containing urethane bonds, which were then further hydrolysed by an amidase.
Conclusion and future directions
Given the ever-increasing demand for plastics like polyurethane, it is essential to also recognise the impact its waste will have on the environment. Due to the many drawbacks of the conventional methods of disposing of these wastes, a 'greener' alternative is necessary, and biodegradation appears to be the most promising option. Biodegradation has the potential to completely mineralise plastic waste, or to recover the input materials and better enable recycling. There are hurdles to overcome, however, primarily concerning the efficiency of the process. The inherently different chemical structures of plastic wastes add further to this difficulty. To overcome these limitations, more research needs to be performed not only on improving PU biodegradation but also on extending what is known to a greater variety of plastics. Future research should aim to find and/or develop new microbes that can degrade multiple different plastics rapidly and efficiently, and to create consortia of plastic-degrading microbes from the already known plastic degraders. Current progress in the field is encouraging and gives cause to be hopeful that we can soon resolve one of our world's biggest and most persistent problems.
Author contributions PB collated the data and wrote the first draft of the manuscript. MB and HW defined the scope and revised the manuscript. All authors reviewed the final draft.
Funding Open Access funding enabled and organized by CAUL and its Member Institutions. PB is a recipient of a tuition fee waiver from Swinburne University of Technology. No other funds or grants were received during the preparation of this manuscript.
Conflict of interest The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Quantitative Assessment of Chronic Skin Reactions Including Erythema and Pigmentation after Breast Conserving Therapy
Purpose: To evaluate long-term skin reactions following breast-conserving therapy by using the melanin-erythema index meter. Patients and Methods: 164 patients were followed for at least three years after breast-conserving therapy. For both the erythema and the melanin indices, the ratio of the irradiated-side index to the non-irradiated-side index was calculated. The time course of the index ratio alteration was examined. Influences of additional therapies and patients' age were also evaluated. Results: Both the erythema and melanin index ratios of the breast skin had recovered to the pre-radiation level three years after radiotherapy. However, both index ratios of the area administered 10-Gy boost irradiation were still high even five years after radiotherapy. Endocrine therapy, chemotherapy and age had no significant influence on skin color reactions three years after radiotherapy. Conclusion: Quantitative assessment using the melanin-erythema index meter demonstrated that chronic skin reactions following breast-conserving therapy had recovered to the pre-radiation level by three years after irradiation, except for the 10-Gy boost irradiated area.
Introduction
The morbidity of breast cancer has been increasing in Japan, and breast cancer has become the most common cancer among Japanese women. Breast-conserving therapy is the treatment of choice for early breast cancers [1] [2], and the number of breast cancer patients undergoing breast-conserving therapy has been increasing in Japan [3]. The role of radiotherapy is considered important for local control, and post-operative radiotherapy is strongly recommended. Given the good prognosis, it is crucial to assess quality of life and adverse effects caused by surgery and irradiation in addition to survival or recurrence rates [4] [5]. Skin changes are important factors among the adverse effects. Adverse effects of radiotherapy on the skin are as follows: erythema, damage to sweat glands, dry desquamation and moist desquamation as acute reactions; hyperpigmentation, dyspigmentation, telangiectasia, atrophy, ulcer and necrosis as chronic adverse reactions [6]-[8]. There are some criteria to evaluate radiation adverse effects [9] [10]; however, there have not been sufficient studies objectively assessing skin reactions following radiotherapy. We have already reported an objective method to assess acute skin reactions after breast-conserving therapy using the melanin-erythema index meter (Mexameter® MX 18) [11]. In this study, we used this method and evaluated the chronic skin reactions following breast-conserving therapy.
Patients
The patients' characteristics are shown in Table 1. There were 164 patients who underwent radiotherapy following breast-conserving surgery in our radiology department from January 2007 to December 2009 and were followed periodically on an outpatient basis for 3 years or more after radiotherapy. Seventy of these patients were followed for 5 years after radiotherapy. Whole-breast irradiation was conventionally performed as follows: 4 MV X-ray, tangential fields, 15° wedge filter, 2 Gy/day, 5 fractions/week and a total dose of 50 Gy. Additional boost irradiation was administered to 31 patients with positive or close surgical margins; 10 Gy/5 fr of 7 or 10 MeV electron beam radiation was additionally applied to the tumor bed.
This study was approved by our institutional review board, and informed consent was obtained from the patients for the examinations.
Methods
The measurements were performed with the Multiprobe Adapter (MPA) Mexameter® MX 18 (Integral, distributor for Courage + Khazaka Electronic GmbH, Germany). The Mexameter® is an instrument that calculates melanin and erythema indices by using differences in the absorption between melanin and hemoglobin. Skin reflectance was measured by applying three different wavelengths (568 nm: the absorption peak for hemoglobin; 660 nm: no absorption for hemoglobin; and 880 nm: almost no absorption for hemoglobin and melanin) to calculate the hemoglobin and melanin indices (the erythema index and the melanin index, respectively) [12].
Breast skin that was sufficiently distant from the surgical scar was selected as the measurement point in each patient in order to eliminate the skin changes caused by the surgical procedures. For comparison, the corresponding area of the non-irradiated breast was analyzed in the same manner. Four measurements were taken at each site in the sitting position; the average of the four measurements was used in the analysis. Measurements were taken before and at the end of radiotherapy and for 3 years or more after the completion of radiotherapy.
For both the erythema and the melanin indices, the ratio of the irradiated-side index to the non-irradiated-side index was calculated. The relationship between the changes in the index ratios and the length of time after radiotherapy was assessed. In the 164 patients who were followed for three years or more after the completion of radiotherapy, the time course of the erythema and melanin index ratio alteration was determined. For patients who had additional electron beam irradiation, the erythema and melanin indices were determined not only for the breast skin, but also for the skin where the electron beam irradiation was applied. A ratio of the 10-Gy boost irradiated-area index to the index of the irradiated side where boost irradiation was not applied was calculated, and the correlation between this index ratio and the length of time following radiotherapy was determined.
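A minimal sketch of this ratio calculation is given below, using hypothetical readings; the function names and the high/medium/low cut-offs of 1.2 and 0.8 (introduced later in the Results) are illustrative and are not part of the Mexameter software.

```python
import statistics

def index_ratio(irradiated_readings, non_irradiated_readings):
    """Average four readings per site and return the irradiated/non-irradiated ratio."""
    irradiated = statistics.mean(irradiated_readings)          # irradiated-side index
    non_irradiated = statistics.mean(non_irradiated_readings)  # non-irradiated-side index
    return irradiated / non_irradiated

def classify_ratio(ratio, high=1.2, low=0.8):
    """Group a ratio as 'high' (>=1.2), 'medium' (0.8-1.2) or 'low' (<0.8)."""
    if ratio >= high:
        return "high"
    if ratio < low:
        return "low"
    return "medium"

# Hypothetical erythema readings (four measurements per site)
ratio = index_ratio([310, 305, 298, 312], [262, 259, 266, 255])
print(round(ratio, 3), classify_ratio(ratio))
```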
The index ratios at three years after radiotherapy were evaluated to examine the influences of endocrine therapy, chemotherapy and the age of the patient. Table 1 shows the details of each group.
Statistical Analysis
The paired t-test and one-way repeated-measures ANOVA were used for the evaluation of significance in the time course of skin changes. The unpaired t-test, one-way factorial ANOVA and multiple comparison tests were used for the evaluation of the influences of additional therapies and age. A p-value of less than 0.05 was taken as significant.
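A rough illustration of these comparisons, assuming hypothetical index-ratio arrays (SciPy is used here in place of SPSS, and all variable names and values are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical erythema index ratios for the same patients at two time points
ratio_post = rng.normal(1.3, 0.2, 50)   # at the end of radiotherapy
ratio_3y   = rng.normal(1.0, 0.2, 50)   # three years after radiotherapy

# Paired t-test for the within-patient time-course comparison
t_paired, p_paired = stats.ttest_rel(ratio_post, ratio_3y)

# Unpaired t-test for a two-group comparison (e.g., chemotherapy vs none)
group_a = rng.normal(1.05, 0.2, 30)
group_b = rng.normal(1.00, 0.2, 30)
t_unpaired, p_unpaired = stats.ttest_ind(group_a, group_b)

# One-way ANOVA across three endocrine-therapy groups
f_stat, p_anova = stats.f_oneway(rng.normal(0.96, 0.2, 20),
                                 rng.normal(1.05, 0.2, 20),
                                 rng.normal(1.08, 0.2, 20))

print(p_paired < 0.05, p_unpaired < 0.05, p_anova < 0.05)
```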
Time Course of Skin Color Alteration
a) Erythema index ratio. Figure 1(a) shows the time course of the erythema index ratio. Before irradiation, the erythema index ratio showed no remarkable difference between the two breasts (mean ± SE: 1.089 ± 0.026). The erythema index ratio showed a significantly high value at the end of radiotherapy, then decreased gradually over the three years after radiotherapy with statistical significance. However, no significant change was recognized after three years.
We divided the erythema index ratio values into three groups: high (1.2 or over), medium (under 1.2 and 0.8 or over), and low (under 0.8). Almost eighty percent of patients had high erythema index ratios at the end of radiotherapy, and this proportion gradually decreased thereafter. On the other hand, low erythema index ratios appeared one year after radiotherapy, and a quarter of our patients had a low erythema index ratio three years after radiotherapy (Figure 1(b)).
b) Melanin index ratio. Figure 2(a) shows the time course of the melanin index ratio alteration. Melanin index ratios were low before radiation therapy (mean 0.900 ± 0.028); this signifies that the melanin index of the affected side was lower than that of the healthy side even before irradiation. The melanin index ratio at the end of radiotherapy showed no significant change compared with the pre-radiotherapy ratio. The melanin index ratio then increased one year after radiotherapy and decreased until three years after radiotherapy in a statistically significant manner.
We divided the melanin index ratio values in the same manner as the erythema index ratio values. Figure 2(b) shows that the proportion of patients with a low melanin index ratio increased gradually after radiotherapy, and almost thirty percent of them had a low melanin index ratio (<0.8) at three years after radiotherapy. c) Influence of boost irradiation. We examined the chronological changes of the index ratios in the boost irradiation field after radiation therapy. The erythema index of the 10-Gy boost irradiated area was approximately 1.7 times that of the non-boost area at the completion of radiotherapy, and it was still at a high value even five years after radiation therapy (Figure 3(a)). The melanin index ratio showed the same pattern as the erythema index ratio (Figure 3(b)).
Influences of Additional Therapies and Age
We investigated the chronic influences of several factors on the erythema and melanin index ratios. We compared the index ratios at three years after radiation therapy in each group. a) Endocrine therapy. We divided endocrine therapy into three types: aromatase inhibitor, anti-estrogen (±LH-RH agonist), and no endocrine therapy. The erythema index ratios at three years after radiotherapy in the aromatase inhibitor group, anti-estrogen group, and no-endocrine group were 0.964 ± 0.032, 1.055 ± 0.044 and 1.078 ± 0.048, respectively. Erythema index ratios in the aromatase inhibitor group were generally low; however, there was no statistically significant difference among these groups. Melanin index ratios in each group were 0.921 ± 0.026, 0.962 ± 0.046, and 0.887 ± 0.044, respectively, and no significant effect of the type of endocrine therapy was recognized (Figure 4(a)).
b) Chemotherapy
The erythema index ratios three years after radiotherapy of patients with and without chemotherapy were 1.076 ± 0.080 and 1.002 ± 0.023, respectively. Melanin index ratios were 0.871 ± 0.049 and 0.936 ± 0.023, respectively. Neither the erythema index ratio nor the melanin index ratio differed significantly as a result of chemotherapy (Figure 4(b)).
c) Age at the start of radiotherapy. The erythema index ratio of patients who were fifty years old or above was 1.002 ± 0.027, and that of patients under fifty years old was 1.045 ± 0.044. Melanin index ratios were 0.919 ± 0.023 and 0.946 ± 0.045, respectively. The age of the patient (fifty and above, or under fifty) at the beginning of radiotherapy had no significant influence on either skin color index ratio at three years after radiotherapy (Figure 4(c)).
Discussion
Breast-conserving therapy is the treatment of choice for early breast cancers [1]-[3], and it is crucial to assess quality of life and adverse effects caused by surgery and irradiation in addition to survival or recurrence rates [4] [5]. In order to evaluate skin damage, the National Cancer Institute CTCAE v.4.0 [9] and LENT SOMA Toxicity Grading [10] are used as standard guidelines. However, there have not been sufficient studies evaluating skin reactions objectively. There is a possibility that results differ with subjective evaluation (by the investigator or by the time of evaluation); therefore, follow-up of the patient requires an objective evaluation. Some investigators have reported objective assessments of radiation-induced skin reactions; for example, skin blood flow using a laser flow meter [13] [14], skin temperature by thermography [15], electrical resistance analysis [15], skin color by L*a*b* values [16], and moisture analyses by Corneometer® [16].
We have already reported an objective method to assess acute skin reactions following breast-conserving therapy using the Mexameter® (melanin-erythema index meter) in a previous study [11]. Skin color is normally affected by the amount of sub-epidermal melanin and the amount of dermal blood that is present. The Mexameter® is a commercially available instrument that calculates melanin and erythema indices by using differences in the absorption between melanin and hemoglobin [12]. The Mexameter® can be operated easily, and the data can be obtained promptly by placing a probe with a built-in light emitter and light receiver on the skin surface. The Mexameter® and other similar devices have already been used to evaluate the effect of laser treatment [17] and skin reactions to UV radiation [18].
In this current study, we evaluated chronic skin reactions that followed breast-conserving therapy. To the best of our knowledge, chronic assessments of irradiated skin using an objective method have not been reported. Erythema index ratios were at high values at the end of radiotherapy and then gradually declined. The maximum change in the melanin index ratio was observed one year after radiotherapy, and the ratio decreased thereafter. Both the erythema and melanin index ratios one year after radiotherapy were significantly higher compared with those before radiotherapy. Three years after radiotherapy, the index ratios had returned to the pre-radiation level, and both index values became lower than those of the non-irradiated side in a quarter of our cases. This is considered to reflect one chronic adverse effect, dyspigmentation, and it is suggested that whitening of the skin after radiation results not only from dyspigmentation but also from decreased blood flow.
In our data, erythema index ratios declined significantly over the three years after radiotherapy; however, there was no significant change in the time course of the index ratios between three and five years after radiotherapy (p = 0.07). There was also no statistically significant change in melanin index ratios from three to five years after radiotherapy (p = 0.7). This suggests that skin color reactions caused by the 50-Gy irradiation of breast-conserving therapy become stable by three years after radiotherapy.
However, 10-Gy boost irradiation resulted in different reactions in our study. It has been documented that administration of 10-Gy boost irradiation did not worsen cosmetic results but increased the occurrence of telangiectasia [19] [20]. Yoshida et al. [16] reported that, at the end of radiotherapy, lower L* values (darker) and higher a* values (more red) resulted from boost irradiation. In the present study, both the erythema and melanin index values were still higher than those of the 50-Gy irradiated sites even five years after the completion of radiotherapy. It is considered that our method could describe the severe skin damage caused by boost therapy objectively and quantitatively.
Endocrine therapy is important for breast cancer treatment, and 131 of our cases were administered endocrine therapy. Skin color reactions at three years after radiotherapy were not influenced by endocrine therapy. Chemotherapy also had no significant effects on skin color reactions. Yoshida et al. [16] likewise reported no significant influence of chemotherapy on skin color reactions, but their follow-up was only one year after radiotherapy. Patient age at the beginning of radiotherapy also had no statistically significant influence on either skin color index at three years post-radiotherapy. We chose fifty years old as the cut-off because it is close to the age of menopause. Forty-five and fifty-five years old were also examined as cut-offs, and no significant difference was shown (data not shown). Although there are many factors, including additional therapies and age, it is considered that radiation therapy influences skin color reactions in breast-conserving therapy the most. In other words, objective assessment of skin reactions is important for the evaluation of adverse radiation effects.
Conclusion
This quantitative assessment using the melanin-erythema index meter was considered useful in evaluating chronic skin color reactions after breast-conserving therapy. It demonstrated that chronic skin color reactions following breast-conserving therapy had recovered to the pre-irradiation level by three years post-irradiation, except for the 10-Gy boost irradiated area.
Figure 1. Time course of erythema index ratio. (a) The index ratio was high at the completion of radiotherapy and declined significantly for three years after radiotherapy. Abbreviations: pre = before radiotherapy, post = at the completion of radiotherapy, 1, 2, 3, 4, 5 Y = 1, 2, 3, 4, 5 years after radiotherapy. (b) Patients with low index ratios (<0.8) account for almost a quarter at three years after radiotherapy.
Figure 3. Time course of the index ratio in the area that received 10-Gy boost irradiation. (a) The erythema index ratio of the boost area to the non-boost area was 1.67 at the completion of radiotherapy and 1.44 at five years after radiotherapy. (b) The melanin index ratio of the boost area to the non-boost area was 1.7 at the completion of radiotherapy and 1.42 at five years after radiotherapy.
Determination of the stationary basis from protective measurement on a single system
We generalize protective measurement to the protective joint measurement of several observables. The merit of joint protective measurement is the determination of the eigenstates of an unknown Hamiltonian rather than the determination of features of an unknown quantum state. As an example, we precisely determine the two eigenstates of an unknown Hamiltonian by a single joint protective measurement of the three Pauli matrices on a qubit state.
INTRODUCTION
Protective measurement is one of the unexpected consequences of the strange structure of quantum mechanics. According to general wisdom, we cannot gain information on the unknown state $\hat\rho$ of a single quantum system unless we distort the state itself. In particular, we cannot learn the unknown state of a single system whatever test we apply to it. It came as a surprise that in weak measurements [1] the expectation value $\langle\hat A\rangle$ of an observable $\hat A$ can be tested on a large ensemble of identically prepared unknown states in such a way that the distortions per single system stay arbitrarily small (cf. [2], too). An indirectly related surprise came with the so-called protective measurements [3][4][5], capable of testing $\langle\hat A\rangle$, at least in an unknown eigenstate of the Hamiltonian $\hat H$, at arbitrarily small distortion of the state itself. Interesting debates followed the proposal as to the merit of protective measurement in the interpretation of the wave function of a single system instead of a statistical ensemble (cf., e.g., [6] and references therein).
My work investigates an alternative merit of protective measurement. First I construct joint protective measurements of several observables $\hat A_1, \hat A_2, \dots$ and re-state the original equations for them in a general form. Then I show that the straightforward task that a single joint protective measurement solves on a single system is the determination of the eigenstates of an otherwise unknown Hamiltonian.
JOINT PROTECTIVE MEASUREMENT OF SEVERAL OBSERVABLES
Consider a single quantum system in state $\hat\rho$, and suppose that its Hamiltonian has a discrete non-degenerate spectrum $\omega_1, \omega_2, \dots$ with the eigenstates $|n\rangle$. Consider a set of Hermitian observables $\hat A_1, \hat A_2, \dots$. For later convenience, introduce their expectation values $A_\alpha^{n} = \langle n|\hat A_\alpha|n\rangle$ in the stationary eigenstates. To simultaneously measure the observables $\hat A_1, \hat A_2, \dots$, we use von Neumann detectors with the canonical variables $(\hat x_1, \hat p_1), (\hat x_2, \hat p_2), \dots$, with vanishing Hamiltonians. We prepare the detectors in a state $\hat\rho_D$ initially, such that the pointer variables $\hat x_1, \hat x_2, \dots$ are of zero means and of small dispersions $\delta x_1, \delta x_2, \dots$, respectively. The distinguishability conditions $\delta x_\alpha \ll |A_\alpha^{n} - A_\alpha^{m}|$ must be satisfied for as many pairs $n \neq m$ as possible for each detector $\alpha = 1, 2, \dots$, to ensure that a maximum set of the values $A_\alpha^{1}, A_\alpha^{2}, \dots$ can be distinguished by the detectors. Now we introduce the usual coupling $\hat K = \frac{1}{T}\sum_\alpha \hat p_\alpha \hat A_\alpha$ between the observables and the detectors, respectively, with the factor $1/T$ where $T$ is the duration of the protective measurement. Let us evaluate the composite unitary dynamics in the interaction picture, where the observables and the coupling become time-dependent, $\hat A_\alpha(t) = e^{i\hat H t}\hat A_\alpha e^{-i\hat H t}$. The unitary transformation after time $T$ reads $\hat U_T = \mathcal{T}\exp\big(-i\int_0^T \hat K(t)\,dt\big)$, where $\mathcal{T}$ stands for time-ordering. Inserting the spectral decomposition of the Hamiltonian, we find that the contribution of the off-diagonal elements becomes heavily suppressed when $T|\omega_n - \omega_m| \gg 1$ is satisfied for all $n \neq m$. The ideal protective measurement requires $T = \infty$; the corresponding unitary dynamics $\hat U_\infty$ contains the contribution of the diagonal elements only. Observe that the eigenvalues $\omega_n$ of the Hamiltonian play no role, only the eigenstates $|n\rangle$ do. The dynamics of the joint protective measurement of (a finite number of) observables $\hat A_1, \hat A_2, \dots$ is captured by $\hat U_\infty$. It will entangle the system with the detectors in such a way that the pointer variables $\hat x_1, \hat x_2, \dots$ get shifted by the expectation values $A_1^{n}, A_2^{n}, \dots$, taken in each eigenstate $|n\rangle$ in turn. The readout of the detectors will obtain the outcomes $x_\alpha = A_\alpha^{n} \pm \delta x_\alpha$ with probability $|\langle n|\psi\rangle|^2$; the terms $\pm\delta x_1, \pm\delta x_2, \dots$ indicate statistical errors. The above outcomes mean that, occasionally (i.e., whenever the distinguishability thresholds resolve the ambiguity of $n$), we have collapsed the state $\hat\rho$ of the system into $|n\rangle\langle n|$ and we have precisely (i.e., at arbitrarily small errors) measured the expectation values of $\hat A_1, \hat A_2, \dots$ in the stationary state $|n\rangle$ of $\hat H$. Let us test the above dynamics on the uncorrelated pure initial state $|\psi_D\rangle|\psi\rangle$ of the system+detectors compound, and introduce the wave function $\psi_D(x_1, x_2, \dots)$ of the detectors. If we substitute the expression for $\hat U_\infty$ and multiply both sides by $\langle x_1, x_2, \dots|$, the resulting state on the r.h.s. shows that, under the above conditions on the initial wave function $\psi_D(x_1, x_2, \dots)$ of the detectors, the dynamics prepares the von Neumann measurement of $n$ and of $A_1^{n}, A_2^{n}, \dots$. In particular, the initial probability density $P(x_1, x_2, \dots) = |\psi_D(x_1, x_2, \dots)|^2$ changes into a statistical mixture corresponding to a von Neumann projective measurement of the stationary basis, resulting in the outcome $n$ with probability $|\langle n|\psi\rangle|^2$. In each term the initial positions of the pointers get shifted by the expectation values of the corresponding observables in the given post-measurement eigenstate $|n\rangle$. The eigenvalues $\omega_n$ themselves do not appear in the result, since they have already canceled from the unitary dynamics $\hat U_\infty$, as we observed before.
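The displayed equations of this section did not survive extraction; the following is a reconstruction of the central relations that is consistent with the surrounding prose (the original numbering is omitted, and the ordering of the tensor factors and the overall signs are our assumptions):

```latex
% Reconstructed key relations of the joint protective measurement
\begin{align*}
\hat H &= \sum_n \omega_n |n\rangle\langle n| , \qquad
A_\alpha^{n} = \langle n|\hat A_\alpha|n\rangle , \qquad
\hat K = \frac{1}{T}\sum_\alpha \hat p_\alpha \hat A_\alpha ,\\
\hat U_\infty &= \sum_n \exp\!\Big(-i\sum_\alpha \hat p_\alpha A_\alpha^{n}\Big)\otimes|n\rangle\langle n| ,\\
P(x_1,x_2,\dots) &\;\longrightarrow\; \sum_n |\langle n|\psi\rangle|^2\,
P\big(x_1 - A_1^{n},\, x_2 - A_2^{n},\, \dots\big).
\end{align*}
```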
PROTECTIVE MEASUREMENT OF THE STATIONARY BASIS
We are going to show that a single joint protective measurement determines the eigenstates of the unknown Hamiltonian. Our example is a single qubit in an unknown initial state $\hat\rho = \tfrac{1}{2}(\hat 1 + \mathbf{s}\cdot\hat{\boldsymbol\sigma})$ with unknown spatial polarization vector $\mathbf{s}$. The Hamiltonian is unknown as well: $\hat H = \Omega\,\mathbf{e}\cdot\hat{\boldsymbol\sigma}$, with unknown strength $\Omega$ and unknown direction $\mathbf{e}$ of the external 'magnetic' field. The Hamiltonian has two unknown eigenvalues $\pm\Omega$ and eigenstates $|\pm\rangle$. Now we prepare three von Neumann detectors and couple them to the three qubit observables $\hat A_\alpha = \hat\sigma_\alpha$, for $\alpha = 1, 2, 3$, respectively. Their joint protective measurement is described by the unitary operator $\hat U_\infty$ introduced above. With $\langle\hat{\boldsymbol\sigma}\rangle_\pm = \langle\pm|\hat{\boldsymbol\sigma}|\pm\rangle = \pm\mathbf{e}$, the coupling shows a simple dependence on the unknown parameter $\mathbf{e}$ of $\hat H$. Let the state $\hat\rho_D$ be constrained by $\delta x_1, \delta x_2, \delta x_3 \ll 1$, cf. the distinguishability conditions above. Inserting this expression and taking the diagonal matrix element $\langle\mathbf{x}|\dots|\mathbf{x}\rangle$ of both sides, we get the resulting change of the initial pointer statistics $P(\mathbf{x}) = \langle\mathbf{x}|\hat\rho_D|\mathbf{x}\rangle$. Expressing $\langle\pm|\hat\rho|\pm\rangle$ via the definitions of $\hat\rho$ and $|\pm\rangle$, the final statistics of the pointers $x_1, x_2, x_3$ follows. If we read out the three detectors, the outcome is $\mathbf{x} \approx \pm\mathbf{e}$ with probability $(1 \pm \mathbf{e}\cdot\mathbf{s})/2$, respectively. We have thus determined the spatial direction $\mathbf{e}$ of the external field at arbitrarily high precision, though only up to its sign. The precision of the measured components $e_1, e_2, e_3$ is given respectively by the initial dispersions $\delta x_1, \delta x_2, \delta x_3 \ll 1$; it does not depend on the initial state $\hat\rho$ of the qubit. The strength $\Omega$ of the field remains unknown, while the obtained knowledge of $\pm\mathbf{e}$ means that we have precisely inferred the two stationary states $|\pm\rangle$. Our protective measurement collapses the system, exactly as an ideal von Neumann measurement of $\hat H$ would do, into one of the two stationary states; we just cannot learn into which one of the two.
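The qubit-specific displays referred to here also did not survive extraction; a reconstruction consistent with the prose (the sign convention in the exponent is an assumption) reads:

```latex
% Reconstructed qubit form of the protective-measurement unitary and final pointer statistics
\begin{align*}
\hat U_\infty &= \sum_{\pm}\exp\!\big(\mp\, i\,\mathbf{e}\cdot\hat{\mathbf{p}}\big)\otimes|\pm\rangle\langle\pm| ,\\
P(\mathbf{x}) &\;\longrightarrow\;
\langle +|\hat\rho|+\rangle\,P(\mathbf{x}-\mathbf{e}) + \langle -|\hat\rho|-\rangle\,P(\mathbf{x}+\mathbf{e})
= \frac{1+\mathbf{e}\cdot\mathbf{s}}{2}\,P(\mathbf{x}-\mathbf{e})
+ \frac{1-\mathbf{e}\cdot\mathbf{s}}{2}\,P(\mathbf{x}+\mathbf{e}).
\end{align*}
```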
SUMMARY
I have generalized the concept of protective measurement to the joint protective measurement of a (possibly finite) number of observables, determined the corresponding unitary operation and its action on an arbitrary uncorrelated initial state of the system and the detectors. I have shown that on a single qubit of unknown state and unknown Hamiltonian, the two stationary states can be determined in a single joint protective measurement of the three Pauli matrices. The post-measurement state of the qubit is just as it would be after a projective measurement of $\hat H$. This result may certainly be generalized to higher-dimensional systems as well. In fact, the full Hamiltonian can always be determined on a single system if, e.g., we perform a suitable sequence of standard measurements. Yet the surprising feature of the joint protective measurement is that the stationary states can be determined in a single step and in a transparent model. This work was supported by the Hungarian Scientific Research Fund under Grant No. 75129 and the EU COST Action MP1006.
Extranodal Extension in Bilateral Cervical Metastases: A predictor of Undesirable Survival Outcomes despite Aggressive Salvage Treatment in Oral Cancer Patients
Objectives: Despite the inclusion of extranodal extension (ENE) in the recent staging system, the presence of ENE alone is not sufficient to depict all clinical situations, as ENE is frequently found in multiple nodes. Thus, the purpose of this study was to evaluate the surgery-based treatment outcomes and clinicopathological features of oral cavity squamous cell carcinoma (OCSCC) patients with ENE found in bilateral multiple cervical metastases. Materials and methods: A retrospective single-institutional study of OCSCC patients with bilateral ENE nodes was performed from January 2011 to December 2018. OCSCC patients of different admission statuses (with primary lesions (PL), recurrent lesions (RL) and isolated neck metastases (INM)) were included for subgroup comparisons. All patients received surgical treatment with/without adjuvant therapies and had complete follow-up data. Disease-free survival (DFS) was regarded as the main outcome. Time-to-relapse data were also collected for comparison. Results: A total of 128 patients were included, of whom 97 (75.8%) were male. The mean follow-up period reached 15 months. Among the patients, 85 (66.4%) were treated for PLs, followed by 26 (20.3%) treated for RLs after failed prior therapy and 17 (13.3%) treated for INMs (concurrent or sequential). The DFS rate was merely 35.2%. Treatment-related factors such as surgical margin (p=0.003), postoperative adjuvant therapy (p=0.014) and perioperative complications (p=0.036) were significantly associated with patient outcomes. In addition, oral lesion-related variables such as oral subsites (p=0.037), T classification (p=0.026) and skull base involvement (p=0.040) were indicators of a worse prognosis. For bilateral ENE features, ENE subclassification (p=0.036), maximum size of ENE nodes (p=0.039) and arterial nodal encasement (p=0.025) tended to predict the surgery-based treatment outcomes of these patients. Conclusions: Bilateral cervical metastases with ENE features, though uncommon, are a serious regional burden, and these patients have lower-than-expected treatment outcomes, especially those with RLs or INMs. A fairly large number of OCSCC patients with advanced oral lesions gain little benefit from intensified salvage surgical treatment. Such treatment should instead be offered to select patients with smaller bilateral ENE nodes (<3 cm) and those with lower ENE subclassifications and no arterial nodal encasement.
Introduction
The presence of metastatic lymph nodes (MLNs) has widely been accepted as an important prognostic factor for patients diagnosed with oral cavity squamous cell carcinoma (OCSCC) [1,2]. Additionally, it has been found in many studies that approximately 30-50% of primary OCSCC patients have nodal involvement, leading to a significant reduction in locoregional control and overall survival (OS) [3][4][5]. Such a high incidence of MLN has initiated an intense wave of investigations into its clinical implications in OCSCC. Thus, a large number of MLN indices, such as total nodal volume [6][7], nodal necrosis [8] and lymph node ratio (LNR) [9][10], have been proposed to reflect the seriousness of metastatic burdens. Apart from these indices, it was not until 2013 that the presence of extranodal extension (ENE) or spread was first proposed as one of the dichotomized criteria, along with the size of the MLN, for determining the cervical stages in head and neck cancers [11][12][13]. Currently, the importance of this factor has found general acceptance with increasing clinical evidence regarding its relation to prognosis. The incorporation of ENE in the new Tumor, Node, Metastasis (TNM) Staging System also highlights its role in the stratification of high-risk patients for more aggressive treatment regimens [11], which is routinely applied when evaluating clinical care for OCSCC patients.
Theoretically speaking, ENE is commonly defined as metastatic cancer cells that extend through the lymph node capsule into the surrounding connective tissues [14]. Such characteristics were previously found to be closely relevant to regional nodal recurrence and distant metastasis [15][16]. Compared with conventional MLNs, MLNs with ENE are more likely to be found in those with advanced primary lesions, and their synergistic effect contributes to poorer treatment outcomes [17][18][19]. Shaw also confirmed this claim, with reported OS rates of 65%, 52% and 23% for node-negative, node-positive (ENE-negative) and node-positive (ENE-positive) patients, respectively [17]. Despite these findings, the full influence of ENE has not yet been elucidated, as most studies still consider ENE as a single pathologic event when grading patients with OCSCC [20][21]. Nevertheless, few reports have focused on the clinical dilemmas for those with bilateral multilevel MLNs with ENE features. The development of such bilateral ENE nodes varies according to the different locations of disease, prior treatment and adjuvant therapies. Accordingly, the treatment rationale and prognoses for patients with bilateral ENE nodes may differ depending on specific locoregional conditions. In addition, the efficacy of surgery-based therapies, which are usually the first option against OCSCC for these patients, has been questioned due to likely increased odds of relapse or distant metastasis. Thus, the aim of this retrospective study was to tentatively resolve these dual concerns in terms of the specific ENE nodal features and the prognosis of OCSCC patients with bilateral ENE burdens. Unilateral ENE burdens, which included ipsilateral or contralateral ENE nodes in OCSCC patients, were also compared for analysis. Those who might benefit from aggressive treatment are also discussed for possible subclassification.
Study population and inclusion criteria
This was a retrospective single-institutional study that included OCSCC patients with either ipsilateral/contralateral ENE nodes or with bilateral ENE nodes who received surgery with/without adjuvant therapies between January 2011 and December 2018. The follow-up duration was calculated in months from the date of surgical treatment at our institution until death or the last follow-up visit. Due to the retrospective nature of this study, approval was granted by the independent institutional ethics committee of our hospital (approval number: SH9H-2021-TK165). In addition, written consent was obtained from the patients whose clinical images are shown in this study.
With regard to the study aim, the inclusion criteria for this study were as follows: 1) patients with primary or recurrent OCSCCs or with isolated neck metastases (concurrently or sequentially) after failed watchful observations for early-stage OCSCC patients; 2) patients with pathological ENE evidence found in either ipsilateral/contralateral or bilateral cervical nodes; 3) patients surgically treated with curative rather than palliative intent; and 4) patients without distant metastases. The candidates included in this study met all these criteria. The exclusion criteria were as follows: 1) patients with incomplete medical records or follow-up information; 2) patients treated with only adjuvant, non-surgical therapies (radiotherapy, radiochemotherapy or targeted/ immunotherapies) and 3) patients without ENE nodes.
Demographic information and prior treatment history
Demographic information (age, sex and smoking history) was directly collected from the chart database. The history of prior and present treatment was also obtained to classify these OCSCC patients with either unilateral or bilateral ENE nodes into three distinct subgroups: the primary lesion (PL), recurrent lesion (RL) and isolated neck metastasis (INM) groups. The RL group included patients with relapsed/secondary primary lesions, while the INM group included those with ipsilateral/contralateral or bilateral cervical metastases and the absence of oral primary lesion relapse after previous surgical treatment. Due to the complicated treatment regimens and disease statuses of these patients, the statistics regarding these three groups were compared both collectively (between groups) and separately (within each group).
Oral-cavity tumor characteristics
Information regarding the oral subsite, size and pathologic grade of the OCSCC was collected, and the tumor (T) classification was recorded according to the 8th edition of the American Joint Committee on Cancer (AJCC) system [22]. In an attempt to further delineate the disease conditions of these OCSCC patients, along with nodal information, other special radiologic-pathologic characteristics of the oral lesions, such as inseparable oral and cervical lesions, midline involvement, bone invasion, depth of invasion (DOI) and perineural invasion (PNI), were re-reviewed and included in this study. Oral lesions of the INM group were characterized from previous treatment records and pathologic reviews. Human papillomavirus (HPV) detection, by either P16 or HPV DNA tests, was performed in some cases with exophytic growth or with proximity to oropharyngeal anatomies. However, considering the generally low incidence of infection in OCSCC samples, HPV examinations were not routinely performed in our cohort. In addition, the preoperative radiologic suspicion of either ipsilateral/contralateral or bilateral ENE was verified by comparison with the final pathologic results.
Cervical metastases and ENE data
Since ENE was first incorporated into the AJCC classification in 2017, the pathological sections obtained before April 2018 (with descriptions of extranodal spread, extension, or surrounding-tissue invasion) were reviewed by two experienced pathologists (Y.H. and J.D.). In addition, specific nodal information regarding the total number of metastatic lymph nodes, metastatic LNR, greatest dimensions of metastatic nodes, level of ipsilateral/contralateral or bilateral ENE, metastatic lymph node fusion (inseparable metastatic nodes), nodal necrosis, cutaneous, muscle or vascular invasion (including oncologic venous embolism), and even mandibular or skull base bone involvement was recorded for analysis. According to the new International Collaboration on Cancer Reporting (ICCR) recommendations, the grade of ENE was also determined according to the depth of extracapsular extension of the MLNs, with minor ENE (ENEmi) defined as extension of up to 2 mm (≤2 mm) from the lymph node capsule and major ENE (ENEma) as extension of more than 2 mm (>2 mm), which always includes gross carcinogenic deposits in cervical soft tissues with blurred or absent normal nodal architecture [23].
Surgery and adjuvant treatment
Treatment regarding the extent of resection was indirectly reflected in the reconstructive parameters, such as flap sizes and types. For the treatment of ENE nodes, the aggressiveness of neck dissections was classified according to the level of involvement, such as supra-omohyoid neck dissection (SOND) from level I to III, extended SOND from level I to IV, and radical neck dissection (RND) from level I to V. The application of en bloc procedures (oral lesions resected together with cervical lymph nodal samples) was also analyzed. In addition, postoperative margin status was reported to describe the completeness of surgical resection, and postoperative complications were recorded based on the chart review.
Data regarding pre- and postoperative adjuvant therapies were also collected. For the RL group, the application of reirradiation was also analyzed for efficacy. Although no immunotherapy was applied in the current cohort, owing to the lack of regulatory approval and market access at that time, anti-epithelial growth factor receptor (anti-EGFR) therapies were used in selected cases. The criteria for using anti-EGFR therapies, though self-paid (not covered by medical insurance), were mostly based on EGFR-positive sample results. Admittedly, the economic status of different patients also influenced the choice of such targeted treatment in our study.
Follow-up information
For follow-up, all patients returned to the outpatient clinic every 1-3 months during the first three years, every 6-9 months during the fourth to fifth years, and annually thereafter. The total follow-up time was calculated until the last follow-up or the event of death, irrespective of the cause. Disease-free survival (DFS) was counted as the main outcome for this study. In order to better describe the treatment efficacy, time-to-relapse (TTR) data were also collected. Representative cases are also presented.
Statistical analysis
The statistical analyses were performed with SPSS version 23.0 software (IBM Corp., Armonk, NY). The primary endpoint of this study was DFS. Logistic regression was utilized to determine the factors relevant to ENE. Cumulative survival curves were plotted with the Kaplan-Meier method (log-rank test). The TTR data for univariate and multivariate analyses are also given. Additionally, univariate and multivariate Cox proportional hazards models were used to evaluate the prognostic factors.
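As a rough illustration of this survival workflow (Kaplan-Meier curves, a log-rank comparison and a Cox model), a minimal Python sketch follows; it uses the open-source lifelines package rather than SPSS, and all column names and values are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months of disease-free survival, relapse/death event flag,
# and a binary covariate (e.g., ENEma vs ENEmi subclassification)
df = pd.DataFrame({
    "dfs_months": [6, 14, 9, 30, 22, 5, 18, 27, 12, 36],
    "event":      [1, 1, 1, 0, 1, 1, 0, 0, 1, 0],
    "ene_major":  [1, 1, 1, 0, 0, 1, 0, 0, 1, 0],
})

# Kaplan-Meier estimate per group
km = KaplanMeierFitter()
curves = {}
for label, g in df.groupby("ene_major"):
    km.fit(g["dfs_months"], g["event"], label=f"ENEma={label}")
    curves[label] = km.survival_function_

# Log-rank comparison between the two groups
major, minor = df[df.ene_major == 1], df[df.ene_major == 0]
lr = logrank_test(major.dfs_months, minor.dfs_months, major.event, minor.event)

# Cox proportional hazards model with the same covariate
cox = CoxPHFitter().fit(df, duration_col="dfs_months", event_col="event")
print(lr.p_value, cox.hazard_ratios_)
```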
Patient population and treatment summary
In total, 501 patients (331 male and 170 female) with ipsilateral/contralateral ENE nodes and 128 patients (97 male and 31 female) with bilateral ENE nodes were included in this study. The demographics and prior medical history are listed in the corresponding table, including the patients who had received prior surgery as their primary treatment. In the bilateral ENE group, en-bloc resection (continuous oral lesion resection and neck dissection) was performed in 73 (57.0%) patients, while noncontinuous (separated) resection was performed in 55 (43.0%) patients, which also included 17 (13.3%) patients in the INM group who had isolated bilateral metastatic nodes with ENE features. There were 20 (4.0%) and 12 (9.4%) patients with reports of positive surgical margins in the unilateral and bilateral ENE groups, respectively, all of whom received postoperative adjuvant treatment. For the bilateral ENE group, the ablative operations involved several anatomic subsites, which later required large pedicled or free flap coverage, with 92 (71.9%) patients receiving flap reconstructions with a flap length (skin) over 10 cm. In contrast, for the unilateral ENE group, primary closure or minor flaps (<10 cm) were applied in 375 (74.9%) patients, indicative of a relatively small wound burden for the latter. Within the bilateral ENE group, 67 (52.3%) patients developed minor or major perioperative complications, which were exacerbated and resulted in death in 2 cases (carotid blowout and hepatic failure). Pulmonary infections (n=32, 25.0%) were also frequently found in these bilateral metastatic patients, reflecting the high rate of prophylactic tracheotomy. For the bilateral ENE group, postoperative radiotherapy was administered to 31 (24.2%) patients, while radiochemotherapy was administered to 52 (40.6%) patients. Targeted therapies (anti-EGFR) were mostly administered in combination with other adjuvant treatments, to 30 (23.4%) patients. For the unilateral ENE group, postoperative radiotherapy and radiochemotherapy were administered to 310 (61.9%) and 100 (20.0%) patients, respectively. Anti-EGFR therapy was applied in 58 (11.6%) patients.
Oral-cavity tumor characteristics
Within the bilateral ENE group, the tongue (60, 46.9%) and the floor of the mouth (34, 26.6%) were found to be the most frequently affected subsites in this series (Table 2). For the unilateral ENE group, a similar trend (tongue: 201, 40.1%) was also noticed (Supplementary Table 2). According to the AJCC classification, most patients (63, 49.2%) in the bilateral ENE group were graded as having T3 disease (including those in the INM and RL groups). The average size of the oral lesions reached 4.88 cm, with most oral lesions (n=92, 71.8%) exceeding 4 cm. In addition, the DOIs of oral lesions (bilateral ENE group) were generally (n=115, 89.8%) larger than 10 mm, indicative of the disease seriousness. In consideration of the clinical status of bilateral MLNs, we found OCSCC lesions invading through the midline in 96 (75.0%) patients. A low HPV infection rate (2.3% and 2.8%) was found in either the unilateral or the bilateral ENE group, though the infection rate in almost 80% of the cases remained unknown. The coexistence of ENE and PNI was also found in 69 (53.9%) patients (bilateral ENE group). In addition, mandibular (or maxillary) osseous destructions were not rare (n=48, 37.5%), with 8 (6.3%) cases having oral lesions that reached the skull base in the bilateral ENE group, while such osseous destructions were found in only 93 (18.6%) patients in the unilateral ENE group.
Ipsilateral/contralateral and bilateral ENE features
For the unilateral ENE group, most ENE nodes (n=393, 78.4%), according to the ICCR classification, were classified as ENEmi, signaling a less aggressive nature (Supplementary Table 3). Most unilateral ENE patients had ipsilateral ENE nodes (n=416, 83%). In addition, the average size of ipsilateral/contralateral ENE nodes reached only 3.6±1.2 cm, with most found in the upper I-III levels (n=422, 84.2%). Arterial encasement of ENE nodes was found in merely 15 (3.0%) cases, while evidence of internal jugular vein embolism was found in 48 (9.6%) cases. The number ratio between ENE nodes and all excised nodes was 0.19±0.27.
On the other hand, as the focus of this study, the clinicopathologic features of bilateral ENE nodes were given considerable attention, as shown in Table 3. Firstly, regarding the preoperative examination of nodes, ENE signs were not confirmed by radiologic imaging in 36 (28.1%) patients. The intraoperative findings further revealed ipsilateral metastatic lymph node fusion (or agglomeration) in approximately one-fourth (n=37, 28.9%) of the included patients. The postoperative pathological review showed that ENEma was found only in 108 (84.4%) patients, indicating the severe cervical extension (bilateral ENE group) into the surrounding tissues. A closer inspection of the pathologic reports showed obvious soft tissue involvement, rather than simply ENE presence, in the majority of patients (n=79, 61.7%). Surprisingly, mandibular involvement of ENE nodes (bilateral ENE group) was also confirmed in 26 (20.3%) patients, while hyoid involvement was noted in 9 (7.0%), showing the aggressiveness of ENE nodes. Apart from these findings, postoperative pathologic evaluations revealed that the average number of ipsilateral metastatic nodes equaled 5.3, while that on the contralateral sides equaled 4.1. For the nodes with ENE features, pathologic evidence showed an approximate average number of 2 for both sides. In addition, the presence of multilevel ENE nodes was found in these patients, with levels I-III (89, 69.5%) being the most likely sites of bilateral involvement. Fused (inseparable) dumbbell-like metastatic nodes were also reported in 37 (28.9%) patients, indicative of the seriousness of these ENE metastases. The average lymph node ratio (LNR) between metastatic nodes and all excised nodes reached 0.23±0.15, while the average ratio between ENE nodes and all excised nodes was 0.12±0.09 (Figures 1 and 2, Representative bilateral ENE cases and images).
Follow-up and Univariate survival analyses
For the unilateral ENE group, the mean follow-up time reached 33.8 months. The most frequently encountered treatment failure was locoregional recurrence (n=94, 18.8%). Eight (1.6%) patients died of non-oncologic causes (Supplementary Table 4).
On the other hand, the mean follow-up for the bilateral ENE group reached only 23.2 months (TTR: 7.9 months). Most of the deaths in this group were due to failure of either locoregional control (38, 45.8%), distant metastases (20, 24.1%), or both (21, 25.3%) (Table 4). Either ENE-related cervical recurrence or distant metastasis was found to contribute to treatment failure in 64 (50.0%) patients. For the unilateral ENE group, the survival analyses revealed that patients with PL enjoyed the best DFS outcome when compared with those with RL or INM status (p<0.001). However, the sides (ipsilateral/contralateral) of ENE nodes did not sway the DFS in these patients (p=0.252). A similar trend was also observed when taking TTR as the endpoint event. All these data are shown in Supplementary Tables 5 & 6. In the univariate analyses of all the possible demographic and oral lesion variables for these OCSCC patients with bilateral ENE nodes (Tables 5 & 6), the treatment group (i.e., PL, RL and INM groups) was found to be significantly related to DFS (p=0.012). Positive surgical margin status also predicted a worse treatment outcome in both the whole series (p=0.002) and the RL cohort (p=0.020). In addition, a parallel survival impact (p=0.001) was also found: as the treatment regimens escalated (radiochemotherapy and radiochemotherapy plus anti-EGFR therapies), the DFS rate increased accordingly.
However, the occurrence of perioperative complications (p=0.017) adversely affected the outcome based on our univariate analyses. Similar results were observed when TTR was taken as the endpoint (Supplementary Table 7).
When taking the lymph node information into consideration (Table 7 & Supplementary Table 8), a number of factors were explored for their potential in revealing the treatment outcomes. First, DFS was adversely affected by the maximum size of metastatic ENE nodes (p<0.001). Second, ICCR subclassification (p=0.004) and arterial nodal encasement (p=0.026) were significantly related to a worse DFS despite aggressive treatment. Unexpectedly, LNR (p=0.696) and the ENE node number ratio (p=0.123) were not able to further stratify patients with bilateral ENE nodes. In addition, HPV status was not significantly associated with DFS in either the unilateral (p=0.066) or the bilateral (p=0.876) ENE group.
Comparisons between unilateral (ipsilateral/ contralateral) and bilateral ENE groups
Though the distribution of treatment subgroups (PL, RL and INM) was statistically comparable (p=0.660), the DFS time between the unilateral and bilateral ENE groups differed greatly (p<0.001), signaling the doubled impact of bilateral ENE nodes on eventual treatment failure (Table 8). Most other variables, such as T classification, DOI, number of ENE nodes and muscular invasion, differed markedly between the unilateral and bilateral ENE groups. Surprisingly, the differences in ENE node size (p=0.800) and LNR (p=0.337) between the two groups were not significant.
Multivariate Cox regression analysis for bilateral ENE group
All parameters included in the univariate analysis were further assessed using Cox multivariate regression analysis (Table 5-7). After adjusting for different covariables, treatment group (p=0.017), surgical margin (p=0.003), postoperative adjuvant therapy (p=0.014) and perioperative complications (p=0.036) remained independently associated with the final treatment outcome. In addition, a posterior (latter) oral subsite (p=0.037), a higher T classification (p=0.026) and skull base involvement (p=0.040) conferred the worst DFS rate in the Cox analysis. Interestingly, alongside the proven effects of the maximum size of ENE nodes (p=0.039) and arterial encasement (p=0.025), ICCR subclassification (p=0.036) was also found to adversely affect the DFS results after allowance for potential confounders. The results for unilateral ENE group could also be found in Supplementary Table 5.
Correlation between bilateral ENE nodes and other related variables
Based on the correlation analysis (Table 9), the number of ipsilateral lymph nodes, location of the ipsilateral ENE, fusion of ipsilateral metastatic lymph nodes and LNR showed possible correlations with the number of ipsilateral ENE (p<0.001). For the contralateral side, the number of contralateral metastatic lymph nodes, level of contralateral ENE nodes and LNR (p<0.001) were strongly correlated with the number of contralateral ENE nodes. In addition, lower levels of ipsilateral ENE nodes were possibly related to the male sex (p=0.001), a deeper DOI (p=0.038) and higher T classification (p=0.029), while lower levels of contralateral ENE nodes were frequently found in those with bone destruction (p=0.015), a higher T classification (p=0.038), fusion (blurred) of the oral lesion and cervical metastasis (p=0.024) and internal jugular obstruction due to cancer embolism (p=0.005). Moreover, the maximum size of ENE nodes was also found to be correlated with the ICCR subclassification (p<0.001).
Discussion
Due to its significance in treatment considerations, ENE, as a discrete adverse entity, has been added to the recent AJCC classification for upgrading the nodal status of advanced OCSCCs. Specifically, ENE (with even microscopic presence) in a single node, regardless of size, will directly categorize patients into stage IV [11,22]. However, there have been concerns about ENE as a single factor for staging, disregarding other MLN information, as patients with such ENE features have diverse clinicopathologic backgrounds. ENE features can be found in metastatic nodes of different sizes and different levels of invasion and in OCSCC patients with either primary or recurrent lesions [24]. Although patients with ENE nodes would most likely receive augmented treatment regimens, the outcomes available in the literature were vastly different, showing varying treatment benefits among OCSCC patients [25][26][27]. As was shown in our study, those with unilateral ENE nodes enjoyed much better DFS results than those with bilateral ones. It was also reasonable to assume that patients with multiple ENE nodes (or a higher ENE nodal density) will have a further reduced OS rate, as reflected in our study (35.2%). This hypothesis for the specific extent of ENE concerns has also been validated, as patients with ENEma tended to receive less benefit, with a mere DFS rate of 28.7% in our results. When further stratified by admission status, the results of the INM group (patients with both INM and bilateral ENE nodes) fell largely short of expectations, with a surprising drop in the DFS rate to merely 25%, illustrating the lower therapeutic efficiency among even patients without oral lesion recurrences. We also found that the presence of bilateral ENE nodes was not a single event but was instead coupled with other important clinical parameters, especially in patients with adverse locoregional factors (T classification (p=0.026) and surgical margins (p=0.003)). Collectively, these factors conferred the worst survival probabilities. On the other hand, among nodal characteristics, the maximum size of ENE nodes (p=0.039), ICCR subclassification (p=0.036) and carotid arterial encasement (p=0.025) were found to be associated with much worse outcomes. These clinicopathologic features in patients with bilateral ENEs, especially those regarding the ENE status, were largely different from those with unilateral ENE nodes, according to the comparisons ( Table 8). In addition, bilateral ENE features related to general soft tissue involvement (p=0.039), together with detailed muscle invasion, arterial encasement and jugular venous embolism (obstruction), foreshadowed undesirable outcomes among OCSCC patients in the PL groups, as any major invasion into either of these structures would increase the likelihood of recurrence or distant metastasis despite aggressive locoregional treatment. The survival benefits of complete excision (negative surgical margin) and adjuvant therapies were also demonstrated in our study as proof of a standardized treatment for select cases with such nodal features. Therefore, we added our new evaluation and treatment considerations to further subcategorize and weigh the benefits of bilateral ENE features in OCSCC patients (Figure 3). The focus of treatment recommendations was mainly based on DFS influences of specific clinicopathologic features.
Cervical ENE (extracapsular-spread) nodes were first reported in 1974 in a treatment failure analyses of patients with OCSCC [32]. Since then, the significance of ENE has been gradually recognized in head and neck cancers, mostly OCSCC and oropharyngeal cancers. Within the existing literature, the presence of ENE in MLNs was mostly reported to be associated with an increased likelihood of locoregional recurrence and distant metastasis [26], ultimately leading to lower survival rates among OCSCC patients. This was also consistent with our results. Nevertheless, the impact of ENE was quite subtle, as marked discrepancies in survival rates for OCSCC patients could be discerned between different studies, ranging from 66.6% to 38.9% [17][18][19]. Some even asserted that more than 2 ENE nodes would confer unfavorable prognoses, while single ENE nodes did not affect overall survival [21]. Others considered that the unfavorable distributions (lower levels) of multiple ENE nodes would constitute risk factors [27,33]. However, the MLN burden with bilateral ENE features might have strong relations to an inferior DFS, as demonstrated in our study. Such relations, as far as we are concerned, were proven by the lower regional control rate and the much higher possibility of distant metastasis, as reflected in Table 1. For OCSCC patients with bilateral ENE MLNs, ENEma (>2 mm) was a common feature, which was found in almost 90% of our bilateral ENE cases. In contrast with Tirelli's report [34], the ICCR subclassification of ENEmi and ENEma was found to be significantly associated with DFS in our study. In addition, the prognostic influence of other ENE features, especially multiple bilateral ENE MLNs, in RL and INM patients was rather elusive since most studies have only focused on primary OCSCC patients. Theoretically speaking, when treating recurrent OCSCC patients with multiple ENE MLNs, most surgeons would be reluctant to offer salvage surgical treatment due to the generally unfavorable DFS (15.4%), which was also confirmed in our study. We also found that age, sex or comorbidities did not significantly affect the prognosis of these OCSCC patients. In addition, patients with larger-sized (≥4 cm) oral RLs and bilateral ENE MLNs, which might entail larger flap reconstructions and intensified adjuvant therapies, should be considered for palliative modalities due to surgery-relevant treatment toxicity, as few DFS benefits and complications (p=0.036) were noted in our study. In other words, salvage treatment could be offered to select RL patients with low oral disease burdens and bilateral ENE MLNs. It is plausible that OCSCCs in different oral subsites grow, invade the surrounding organs, and metastasize to regional lymph nodes in different ways [35][36]. In our study, the retromolar trigone 4, 3.1%), lower gingiva (10, 7.8%) and bucca (8,6.3%) were found to be associated with much lower DFS. In contrast with expectations, DOI (p=0.779), unlike T classification, was not found to be significantly related to survival outcomes [29]. Surprisingly, most of the clinicopathological characteristics, such as imaging features, PNI, and oral-cervical nodal fusion, did not reach statistical significance. Unlike oral lesions, we found that the most influential factors for patients with bilateral ENE nodes were still nodal size and extent of infiltration (ICCR subclassification), implying dual considerations for extracapsular invasion and size in terms of the treatment prognosis. 
According to our study, the cutoff value of the maximum ENE nodes reached 3 cm (Supplementary figure 1 for receiver operating characteristic (ROC) curve cutoff value). Based on this finding, we contend that, for cases with bilateral smaller ENE nodes (<3 cm), salvage surgical treatment in combination with adjuvant treatment is still feasible, while caution should be taken for patients with larger ENE nodes (≥3 cm). In addition, OCSCC patients were mostly salvageable when both smaller ENE sizes and ENEmi were found in the postoperative pathological reports. In addition to these features, for those with ENE nodal carotid arterial encasement, most salvage surgeries might not achieve the goal of potential rescue, as approximately 90% of the cases would not gain any DFS benefit despite aggressive treatment. Interestingly, unlike the ICCR subclassification, nodal soft tissue involvement as a whole was not associated with DFS due to the influence of other covariates, while muscle invasion was found to be correlated with OS (p=0.008) in the PL group, within which direct SCM invasion largely reduced the DFS rate to approximately 25%.
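For completeness, a minimal, hypothetical sketch of how such a ROC-based cutoff (here for maximum ENE node size against the DFS event) can be derived with scikit-learn using the Youden index is shown below; the variable names and input file are placeholders, not the study's actual data or analysis code.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical columns: max_ene_size_cm (continuous), event (1 = relapse/death during follow-up)
df = pd.read_csv("ene_cohort.csv")
y_true = df["event"].to_numpy()
score = df["max_ene_size_cm"].to_numpy()

fpr, tpr, thresholds = roc_curve(y_true, score)
youden_j = tpr - fpr                    # Youden index at each candidate threshold
best = np.argmax(youden_j)

print("AUC =", roc_auc_score(y_true, score))
print("Optimal cutoff (Youden index) =", thresholds[best], "cm")
```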
Patients with ENE MLNs are always considered candidates for postoperative adjuvant therapies due to the higher risk of treatment failure [34]. The striking advantage of postoperative concurrent chemoradiotherapy (CCRT) in MLN head and neck cancers was found in a collaborative analysis of two randomized phase III trials, conducted in Europe by the European Organisation for Research and Treatment of Cancer (EORTC) 22931 and in the United States by the Radiation Therapy Oncology Group (RTOG) 9501 [37][38]. However, there were controversial results regarding the role of CCRT in ENE patients, even in the aforementioned RTOG 9501 trial, as patients with ENE nodes failed to have improved long-term outcomes with the addition of cisplatin-based chemotherapy (due to a high rate of recurrence) [39]. Our study suggested that patients with bilateral ENE MLNs could gain additional survival benefit when the adjuvant regimens were escalated. In addition, patients receiving targeted therapies such as EGFR-inhibiting drugs in combined treatment approaches with CCRT tended to have slightly better outcomes (50% vs. 46.2%). However, it should also be noted that toxicity increases with such additions; patients receiving these protocols were mostly of a younger age or had fewer comorbidities.
The current study had some limitations due to its retrospective design, including the lack of a randomized patient population and associated biases. Due to the low incidence of bilateral ENE MLNs in OCSCC patients, we tended to increase the study population by enrolling patients with different prior treatment histories, which might have partially influenced the general analysis of the data. The trend in our study that increased (escalated) treatment regimens led to better survival should be viewed with caution, as these regimens were mostly prescribed to younger patients with fewer comorbidities. The results cannot be directly extended to patients of older age or with more comorbidities. Thus, further multicenter studies are needed to investigate whether OCSCC patients with bilateral ENE features should be considered for different subgroups and treatment considerations (Figure 4).
Conclusion
Bilateral cervical metastases with ENE features, though uncommon, represent serious regional burdens and lead to lower-than-expected treatment outcomes, especially in those with RLs or INMs. A fairly large number of OCSCC patients with advanced oral lesions gain little benefit from intensified salvage surgical treatment. Such treatment should instead be offered to select salvageable patients with smaller bilateral ENE nodes (<3 cm), lower ENE subclassifications and no arterial nodal encasement.
Ethics approval and patients' consent for publication
We obtained approval from the Institution's Ethics Committee (approval number SH9H-2021-TK165). In addition, the patients' consent for publication (of all patient photos and CT images) was obtained before the submission of this manuscript.
Authorship and contributions
WG, YH, BG and CM wrote and revised the manuscript. WG, YH and DZ collected and presented the cases and wrote the figure legends and representative case descriptions. XL and CM collected the data and revised the manuscript at the 9th People's Hospital. JD, YD and CM designed the study and conceived the presentation. All authors have read and approved the manuscript.
Genetic variation in fitness within a clonal population of a plant RNA virus
A long-standing observation in evolutionary virology is that RNA virus populations are highly polymorphic, composed of a mixture of genotypes whose abundances in the population depend on complex interactions among fitness differences, mutational coupling and genetic drift. It was shown long ago, though in cell cultures, that most of these genotypes had lower fitness than the population they belong to, an observation that explained why single-virion passages turned on Muller's ratchet while very large population passages resulted in fitness increases in novel environments. Here we report the results of an experiment specifically designed to evaluate in vivo the fitness differences among the subclonal components of a clonal population of the plant RNA virus tobacco etch potyvirus (TEV). Over 100 individual biological subclones from a TEV clonal population well adapted to the natural tobacco host were obtained by infectivity assays on a local lesion host. The replicative fitness of these subclones was then evaluated during infection of tobacco relative to the fitness of large random samples taken from the starting clonal population. Fitness was evaluated at increasing numbers of days post-inoculation. We found that at early days, the average fitness of subclones was significantly lower than the fitness of the clonal population, thus confirming previous observations that most subclones contained deleterious mutations. However, as the number of days of viral replication increased, population size expanded exponentially, more beneficial and compensatory mutations were produced, and selection became more effective in optimizing fitness; consequently, the differences between the subclones and the population disappeared.
Introduction
RNA viruses are obligate intracellular parasites found infecting all life forms, except perhaps the ciliates. The reason for this evolutionary success stems from a combination of the high mutation (Sanjuán et al. 2010) and recombination (Simon-Loriere and Holmes 2011) rates of their error-prone RNA-dependent RNA polymerases (RdRp), very short generation times and potentially huge population sizes (Wasik and Turner 2013). The combination of these three factors results in highly polymorphic and evolvable mutant swarms that respond very efficiently to environmental perturbations. However, an excessive mutational load is a double-edged sword (Elena and Sanjuán 2005; Belshaw et al. 2007). Although it allows for rapid exploration of genotypic spaces in situations of environmental stress, the drawbacks come with the generation of large amounts of deleterious mutations and inviable genotypes that may jeopardize the viability of small populations in which purifying selection may not be efficient (Gabriel, Lynch, and Bürger 1993). Indeed, it has been widely observed with many RNA viruses that when viral populations are submitted to consecutive transmission bottlenecks of size one, without subsequent population expansions, viral fitness declines in a process compatible with the onset of Muller's ratchet (e.g. Chao 1990; Duarte et al. 1992; Clarke et al. 1993; Escarmís et al. 1996; Yuste et al. 1999; De la Iglesia and Elena 2007). On the other hand, large population passages in a new environment always result in fitness increases (e.g. Clarke et al. 1993; Novella et al. 1995, 1999). The role of genetic variability and minority variants in the collective behavior of viral populations as a whole has been well established (Duarte et al. 1994; Cuevas, Moya, and Sanjuán 2005; Schulte and Andino 2014; Bordería et al. 2015; Combe et al. 2015) and is at the origin of phenomena such as evolvability (Burch and Chao 2000; Ciota et al. 2007, 2012) or the memory of past environmental constraints (Ruiz-Jarabo et al. 2000) experienced by the viral mutant swarms.
In a hallmark study, Duarte et al. (1994) characterized the distribution of fitness for individual genomes isolated from a clonal population of vesicular stomatitis rhabdovirus (VSV). They found that the majority of genomes produced during replication of this VSV clone contained deleterious mutations, with the average fitness of clones being significantly lower than the fitness of the entire population (Elena, Codoñer, and Sanjuán 2003). Later on, again using VSV, Cuevas, Moya, and Sanjuán (2005) found that the main driver of the fitness differences between individual clones was their ability to complete infection cycles rather than viral yield per cell or differences in adsorption and cell-to-cell transmission rates. However, one may argue that these studies suffer from the weakness of being done in cell cultures, which represent a highly artificial environment that lacks the inherent complexity (morphological, physiological, and in defense responses) of real multicellular eukaryotic hosts. To explore whether these observations hold in the context of the infection of a real host, we have performed experiments conceptually identical to those reported by Duarte et al. (1994) but using the plant pathosystem formed by Tobacco etch virus (TEV, genus Potyvirus, family Potyviridae) and its natural host, tobacco. TEV is a prototypical example of a picorna-like virus; it is a very well-characterized plant RNA virus that has become a model system for studying plant RNA virus evolution in recent years (reviewed in Elena et al. 2008, 2011). The TEV genome is composed of a ca. 9.5 kb single-stranded RNA molecule of positive polarity that contains a large open reading frame (ORF), whose product is a polyprotein that self-processes into ten mature peptides, plus a second small ORF in the +2 reading frame that encodes an additional peptide (Revers and García 2015). TEV infects numerous plant species, though most of its natural hosts are restricted to the family Solanaceae (Shukla, Ward, and Brunt 1994). The usual symptoms in solanaceous plants include stunting and mottling, necrotic etching, and leaf malformation (Shukla, Ward, and Brunt 1994).
In short, we generated a clonal TEV population by inoculating a single tobacco plant with infectious TEV RNA generated by in vitro transcription. Next, 164 individual biological subclones were isolated from this clonal population using a lesion-forming assay in leaves of quinoa, which is equivalent to the well-known plaque-forming assay in monolayers of susceptible cells with solid overlay agar. It is important to mention that some subclones may actually correspond to the same genotype, as the probability of resampling the same genotype depends ultimately on its population frequency. The fitness of biological subclones and of random samples of the clonal population was evaluated in parallel at increasing numbers of days post-inoculation (dpi) in tobacco plants. The fitness of the subclones was then compared with that of the clonal population.
Preparation of the starting TEV clonal population
The infectious clone pMTEV (Bedoya and Darós 2010) contains a full-length cDNA of TEV and a 44 nt long poly-T tail followed by a unique BglII restriction site. After linearization with BglII, the plasmid was transcribed with the SP6 mMESSAGE mMACHINE kit (Ambion) following the manufacturer's instructions. RNA integrity and quantity were assessed by gel electrophoresis. The RNA transcript was mixed with a 1:10 volume of inoculation buffer (0.5 M K2HPO4, 100 mg/ml Carborundum). Five μl containing 5 μg of 5′-capped infectious RNA were inoculated by rubbing the third true leaf of twenty-five 4-week-old Nicotiana tabacum (L.) var Xanthi NN plants (Carrasco et al. 2007a). Inoculations were done in a single experimental block and all plants were at similar growth stages. Afterwards, plants were maintained in a Biosafety Level-2 greenhouse at 25 °C under a 16-h light and 8-h dark photoperiod. At 8 dpi, after symptoms appeared, virus-infected leaves and apexes of the twenty-five plants were collected in plastic bags (after removing the inoculated leaf). The whole tissue collected was mixed, frozen in liquid nitrogen, ground with mortar and pestle, and aliquoted (100 mg each). These aliquots of TEV-infected tissue were stored at −80 °C.
Isolation of biological subclones, infection of tobacco plants with individual subclones, and samples from the clonal population
Supplementary Figure S1 shows a schematic representation of the experimental protocol followed to isolate subclonal components and random samples from the clonal population. Isolation of subclonal components of the TEV clonal population was done by the dilution-inoculation assay method on the local-lesion host Chenopodium quinoa Willd (Kleczkowski 1950; De la Iglesia and Elena 2007). First, 100 mg of TEV-infected tobacco tissue were ground with mortar and pestle in 1 ml of K2HPO4 buffer. Second, nine fully developed leaves from each one of four different 4-week-old C. quinoa plants were inoculated by rubbing with 10 μl of undiluted, 10- and 100-fold diluted ground tissue stock; three leaves per dilution were inoculated to minimize plant effects (Kleczkowski 1950) and 100 mg/ml Carborundum was added to facilitate inoculation. Two additional leaves were mock-inoculated with 0.5 M inoculation buffer. At 9 dpi, clearly isolated local lesions were collected individually, immediately frozen with liquid nitrogen, ground in 1.5 ml tubes with pestles and kept in liquid nitrogen until the moment of inoculating tobacco plants. At the time of inoculation, 20 μl of inoculation buffer were added to each tube, mixed thoroughly and inoculated onto tobacco plants, as explained above. In parallel, a number of aliquots from the stock clonal population were processed in the same way and used to inoculate tobacco plants. The full inoculation experiment was divided into four blocks, each block containing a number of inoculations with subclones and a number of inoculations with samples from the clonal population. All infected tobacco plants showed clear symptoms 4-6 dpi. The viral load of infected plants was evaluated after 5 (fifty-two plants infected with subclones and seventeen infected with the population), 7 (five and three plants), 9 (seven and three plants), and 12 (100 and 25 plants) dpi, as described below.
RNA extraction from infected tobacco plants and quantification of viral load
Total RNA was extracted from 100 mg of fresh tissue of mock-inoculated and virus-infected systemic leaves of tobacco using the InviTrap Spin Plant RNA Mini Kit (Stratec Molecular), and the concentration was adjusted to 100 ng/μl. Viral load was quantified by absolute RT-qPCR using standard curves. Standard curves were constructed using ten serial dilutions of the ancestral TEV RNA, produced as described earlier and diluted in total plant RNA obtained from healthy tobacco plants treated like all other plants in the experiment. RT-qPCR reactions were performed in a 20 μl volume using the One Step SYBR PrimeScript RT-PCR Kit II (Perfect Real Time) (TaKaRa) following the manufacturer's instructions. The forward (q-TEV-F 5′-TTGGTCTTGATGGCAACGTG-3′) and reverse (q-TEV-R 5′-TGTGCCGTTCAGTGTCTTCCT-3′) primers were chosen to amplify a 71 nt fragment at the 3′ end of the TEV genome and would therefore only quantify complete genomes, not partial incomplete amplicons (Lalić et al. 2011). Amplifications were performed in ninety-six-well plates, each plate containing twenty samples from plants infected with subclones, five samples from plants infected with the clonal population and the RNA samples necessary to build the standard curve. Three technical replicates per infected plant were done. Amplifications were done using the StepOne Plus RT-PCR system (Applied Biosystems), according to the following thermal profile: the RT phase consisted of 5 min at 42 °C and 10 s at 95 °C; the PCR phase consisted of forty cycles of 5 s at 95 °C and 34 s at 60 °C; the final phase consisted of 15 s at 95 °C, 1 min at 60 °C and 15 s at 95 °C. Quantification results were examined using StepOne software version 2.2.2 (Applied Biosystems).
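As a sketch of how absolute quantification from such a standard curve works (not part of the original analysis pipeline, and using invented input values), the Cq values of the serial dilutions can be regressed against log10 copy number and the fitted line inverted for the unknown samples:

```python
import numpy as np

# Hypothetical standard curve: known log10(genome copies) of ten serial dilutions and measured Cq values
log10_copies = np.arange(9, -1, -1)            # 10^9 ... 10^0 copies
cq_standards = np.array([10.1, 13.5, 16.9, 20.3, 23.8, 27.2, 30.5, 33.9, 37.2, 40.0])

# Linear fit: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(log10_copies, cq_standards, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0        # amplification efficiency implied by the slope

def copies_from_cq(cq):
    """Invert the standard curve to estimate genome copies for an unknown sample."""
    return 10 ** ((cq - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.2%}")
print("estimated copies for Cq = 22.0:", copies_from_cq(22.0))
```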
Fitness
Viral loads at time t dpi, $V_t$, obtained by RT-qPCR were transformed into Malthusian growth parameters using the expression $m = \frac{1}{t}\ln V_t$. Since we are interested in evaluating the performance of subclonal components relative to random samples from the clonal population, the relative fitness of subclone i in the sample taken t dpi was computed as
$$W_{i,t} = \frac{m_{i,t}}{\bar{m}^{\mathrm{pop}}_t},$$
where $\bar{m}^{\mathrm{pop}}_t$ is the average Malthusian growth parameter estimated for the population samples taken t dpi.
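A minimal numerical sketch of this transformation (with hypothetical viral-load values, not the study's data) is given below:

```python
import numpy as np

def malthusian(viral_load, t_dpi):
    """Malthusian growth parameter m = (1/t) * ln(V_t)."""
    return np.log(viral_load) / t_dpi

# Hypothetical viral loads (genome copies per sample) measured at 5 dpi
t = 5
subclone_loads = np.array([2.1e6, 8.4e5, 5.6e6, 1.3e6])
population_loads = np.array([9.2e6, 1.1e7, 8.7e6])

m_subclones = malthusian(subclone_loads, t)
m_population_mean = malthusian(population_loads, t).mean()

# Relative fitness of each subclone with respect to the clonal population at the same dpi
W = m_subclones / m_population_mean
print(W)
```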
Statistical analyses
We believe it is important to highlight at this point that differences among subclones are biologically meaningful, since each subclone potentially corresponds to a different genomic sequence (although the probability that some subclones share the same genomic sequence would ultimately depend on their abundance in the clonal population), whereas differences among large samples taken from the entire clonal population are mostly statistically meaningful and evaluate our ability to reproducibly estimate fitness for a large population.
Prior to any further statistical analyses, relative fitness data were checked for violations of the assumptions of normality and homoscedasticity of variances. We found that the data were not normally distributed (one-sample Kolmogorov-Smirnov test: D = 0.201, P < 0.001) nor were variances homogeneous among groups (Levene test: F_{139,496} = 21.520, P < 0.001). Therefore, we opted for a generalized linear model (GLM) approach for data analysis. The model incorporated three random factors: the source of inoculum used to infect the tobacco plant (S; i.e. a subclone or a random sample from the population), the biological replicate (R; i.e. the individual subclone or the sample from the population), which is nested within S, and the dpi at which samples were taken, T, which is treated as a covariable. The model equation reads
$$W_{ijkl} = \mu + S_i + T_j + (S \times T)_{ij} + R_{k(i)} + \varepsilon_{ijkl},$$
where $\mu$ is the grand mean value and $\varepsilon_{ijkl}$ is the error associated with individual measure l (estimated from the technical replicates of the RT-qPCR reaction). A Normal distribution and an identity link function were assumed (based on the minimal Bayes information criterion). The statistical significance of each factor was evaluated using a likelihood ratio test (LRT) that asymptotically follows a $\chi^2$ distribution. The magnitude of the different factors included in the model was evaluated using the $\eta^2_P$ statistic, which represents the proportion of total variability attributable to a given factor. Conventionally, values of $\eta^2_P$ < 0.05 are considered small, 0.05 ≤ $\eta^2_P$ < 0.15 medium, and $\eta^2_P$ ≥ 0.15 large effects. The partition of total variance among the different factors was done by maximum likelihood.
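A rough Python analogue of this kind of model (using statsmodels rather than the SPSS GLM actually employed, and approximating the nesting of replicates within sources with a simple random intercept) could look like the following; the column names and input file are hypothetical placeholders:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per RT-qPCR technical replicate, with columns
# W (relative fitness), source ('subclone' or 'population'), replicate (inoculum/plant id), dpi (5, 7, 9, 12)
df = pd.read_csv("fitness_long.csv")

# Fixed effects: source, dpi (treated as a covariable) and their interaction;
# random intercept per biological replicate as a coarse stand-in for R nested within S.
model = smf.mixedlm("W ~ C(source) * dpi", data=df, groups=df["replicate"])
fit = model.fit()
print(fit.summary())
```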
All statistical tests were performed using IBM SPSS software version 22. Unless otherwise indicated, all confidence intervals reported represent ±1 SEM.

3.1. Statistical modeling of fitness data and analysis of variance

Figure 1 shows the distribution of relative fitness values estimated for individual subclones and for large samples randomly taken from the original clonal population at increasing numbers of dpi. These fitness data were fitted to the statistical linear model described in the 'Methods' section by means of GLM techniques. Table 1 summarizes the results of the model fitting and the significance tests for all factors involved in the model. First, these analyses show that overall highly significant differences exist between the fitness of the subclonal components and the repeated measures obtained for the entire clonal population (second row in Table 1). Indeed, the grand mean relative fitness for subclones is 0.870 ± 0.003, whereas the grand mean relative fitness estimated for the entire clonal population is 1.001 ± 0.006. This result suggests that the fitness of the whole clonal population cannot be predicted simply by averaging the fitness of the individual subclones that compose it. Second, a significant time effect exists (third row in Table 1) and, more interestingly, time affects the magnitude of the differences between subclones and the population samples in a different manner (fourth row in Table 1): while the estimate of fitness for the clonal population does not change with time, the fitness of the subclones increases with the number of dpi. This effect will be addressed more specifically in Sections 3.2 and 3.3 below.
Third, significant differences exist among the relative fitness of the biological replicates within each group (fifth row in Table 1), that is, between subclones and/or between samples from the entire population. What causes such differences? To address this question, we computed the maximum likelihood estimates for the genetic component of variance for relative fitness among subclones and among the different samples from the clonal population at different dpi. In the case of subclones, the genetic contribution to the observed fitness differences ranged between (3.240 ± 0.051) × 10⁻² at 5 dpi and (3.086 ± 0.031) × 10⁻³ at 12 dpi, suggesting that genetic differences among subclones are large shortly after inoculation but are being erased at longer times as the newly generated populations accumulate more and more genetic variants that make them more genetically homogeneous in terms of fitness (ca. one order of magnitude less diverse). In a sharply contrasting pattern, the genetic contribution to the fitness differences observed among independent samples of the entire clonal population ranges from 0 at 5 and 7 dpi to (4.090 ± 0.178) × 10⁻⁴ at 12 dpi, suggesting that population samples were really homogeneous right after inoculation, as expected, but start slowly diverging as they accumulate genetic variants in an independent manner during the progress of single infections.
3.2. The average fitness of subclones does not predict the fitness of the whole clonal population at early time points

Figure 2 illustrates the effect of the duration of infection on the differences in relative fitness between the average subclone and the whole clonal population. Let's pay attention now to the earliest time point evaluated, 5 dpi. The distribution of relative fitness values estimated from the whole clonal population is symmetrical (g₁ = −0.337 ± 0.550; t₁₆ = 0.613, P = 0.548) and mesokurtic (g₂ = 1.360 ± 1.063; t₁₆ = 1.279, P = 0.220), as expected for a Normal distribution. The distribution is centered around a mean relative fitness value of 1.036 ± 0.030. In contrast, the distribution of fitness values among subclones is moderately yet significantly left-skewed (g₁ = −0.778 ± 0.330; t₅₁ = 2.358, P = 0.022), that is, the fitness of most subclones is below the mean value, although the distribution is still mesokurtic (g₂ = 0.252 ± 0.650; t₅₁ = 0.388, P = 0.700), that is, most values lie near the center of the distribution rather than in the tails. The mean relative fitness of a randomly chosen subclone was 0.689 ± 0.025. Therefore, the average fitness of subclones at 5 dpi is 33.49% smaller than the expected fitness of the entire clonal population after the same number of dpi, representing a highly significant difference in centrality (Mann-Whitney test: U = 38, P < 0.001) and shape (two-sample Kolmogorov-Smirnov test: D = 0.825, P < 0.001) between both distributions. The observed negative difference in mean fitness indicates that the mutations making subclones differ from each other were deleterious on average. We tested whether the differences between subclones and samples from the clonal population remained significant at the intermediate time points (7 and 9 dpi) despite the smaller sample sizes, as suggested by the non-overlapping 95% confidence intervals of the median shown in Figure 2. At 7 dpi, the distribution of fitness among subclones has a mean value of 0.886 ± 0.034, which is significantly smaller than the average fitness estimated for independent samples of the clonal population, 1.000 ± 0.002 (Mann-Whitney test: U = 0, one-tailed P = 0.018; Kolmogorov-Smirnov test: D = 1, P = 0.047). At 9 dpi, both samples remain different both in centrality (Mann-Whitney test: U = 0, one-tailed P = 0.009), with mean values of 0.794 ± 0.077 and 1.000 ± 0.006 for subclones and samples from the clonal population, respectively, and in shape (Kolmogorov-Smirnov test: D = 1, P = 0.030).
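For illustration, the distributional statistics and two-sample tests used above can be reproduced with SciPy as in the hypothetical sketch below (arbitrary simulated arrays, not the measured fitness values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical relative-fitness samples at one time point
subclones = rng.normal(loc=0.69, scale=0.18, size=52)
population = rng.normal(loc=1.04, scale=0.12, size=17)

# Skewness (g1) and excess kurtosis (g2) with bias correction
g1 = stats.skew(subclones, bias=False)
g2 = stats.kurtosis(subclones, bias=False)

# Differences in centrality and in shape between the two distributions
u_stat, p_mw = stats.mannwhitneyu(subclones, population, alternative="two-sided")
d_stat, p_ks = stats.ks_2samp(subclones, population)

print(f"g1={g1:.3f}, g2={g2:.3f}, Mann-Whitney p={p_mw:.4f}, KS D={d_stat:.3f}, p={p_ks:.4f}")
```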
3.3. Differences in fitness between subclones and the population disappear after long periods of replication
As shown in Table 1 and discussed in Section 3.1, the duration of infection had a significant effect on relative fitness that depended in magnitude on whether it was measured for the entire clonal population or for individual subclones. Figure 2 clearly shows that the large and significant differences observed at early times post-inoculation disappear at the latest time of infection (12 dpi). The distribution of relative fitness values for independent samples taken from the whole clonal population remains symmetrical (g₁ = 0.079 ± 0.464; t₂₄ = 0.170, P = 0.866) and mesokurtic (g₂ = −0.862 ± 0.902; t₂₄ = 0.956, P = 0.349), again, as expected for a Normal distribution of values taken from the same population. The distribution is centered around a mean value of 1.000 ± 0.005. At 12 dpi, the distribution of relative fitness values for the subclones was also symmetrical (g₁ = −0.126 ± 0.241; t₉₉ = 0.523, P = 0.602) and mesokurtic (g₂ = −0.260 ± 0.478; t₉₉ = 0.544, P = 0.588), with a mean value of 1.009 ± 0.006. No significant differences exist between these two distributions in either centrality or shape (Mann-Whitney test: U = 1,089, P = 0.320; two-sample Kolmogorov-Smirnov test: D = 0.300, P = 0.055). Therefore, we conclude that the initially diverse subclones have compensated for their deleterious mutational load and converged towards the average relative fitness of the clonal population from which they were isolated.
Discussion
Two decades ago, Duarte et al. (1994) described for the first time great phenotypic heterogeneity among subclonal components of a clonal population of VSV. This variability among individual genomes, which were naïvely expected to be clonal, was an unavoidable consequence of the error-prone replication of RNA viruses, with mutation rates that usually are within the range of 0.1-1 mutations per genome (Sanjuán et al. 2010). Furthermore, since the vast majority of random mutations had a negative effect on viral fitness, the average fitness of the subclonal components was significantly lower than the fitness of the population as a whole. Here, we have extended these observations to a plant RNA virus, the potyvirus TEV, and in a fully realistic biological situation, the infection of plants of the natural host tobacco, in contrast to the highly artificial and over-simplistic cell culture environment used by Duarte et al. (1994). Shortly after infection, our results fully reproduced those of Duarte et al. (1994): we observed highly significant differences among subclones and the clonal population from which they were isolated, with the subclones being less fit than the population. Furthermore, we also observed a great amount of genetic variance for fitness among subclones, indicative of different mutations being fixed in each subclone. Beyond what was described in the VSV work, here we have found that after long periods of infection, the genetic differences among subclones were erased. Again, the error-prone replication of these clones resulted in mutant swarms upon which positive selection operated to bring the average fitness of these newly created mutant swarms back to the same value as that of the ancestral population in the natural host.
Some readers might be concerned about the possible effect of isolating subclonal components in a host, C. quinoa, different from the actual host in which fitness effects were evaluated, N. tabacum. We have previously shown that host switching in TEV occurs with concomitant changes in fitness (Agudelo-Romero, De la Iglesia, and Elena 2008; Bedhomme, Lafforgue, and Elena 2012; Hillung et al. 2014). However, for this to happen, very large effective population sizes are needed so that beneficial mutations improving fitness in the new host can be generated, survive drift, and increase in frequency in the population until reaching fixation. The severe bottlenecks imposed during the local lesion assays in the quinoa leaves make adaptation to this host highly unlikely, and thus the initial fitness differences observed shortly after infection of tobacco are most likely due to the deleterious nature of standing variation in the clonal population rather than to the emergence of mutations that are beneficial in quinoa and have negative pleiotropic effects in tobacco.
Transmission bottlenecks are common during the infection of individual plant hosts mediated by insect vectors or by direct contact (Moury, Fabre, and Senoussi 2007; Betancourt et al. 2008; Sacristán et al. 2011), during cell-to-cell spread within the inoculated leaf (Miyashita and Kishino 2010; Tromas et al. 2014), during systemic movement via the phloem and subsequent colonization of distal tissues (Hall et al. 2001a,b; Sacristán et al. 2003; French and Stenger 2005; González-Jara et al. 2009; Ali and Roossinck 2010; Gutiérrez et al. 2010, 2012, 2015; Tromas et al. 2014), and even during vertical seed transmission (Fabre et al. 2014). In all these cases, bottlenecks are strong and the number of transmitted genomes varies within the range of units or tens (Zwart and Elena 2015). These strong bottlenecks minimize the efficiency of purifying selection to remove deleterious alleles, which are constantly produced during error-prone genomic RNA replication, and result in the onset of Muller's ratchet. Muller's ratchet has been amply described operating in RNA virus populations under the appropriate demographic conditions (Chao 1990; Duarte et al. 1992; Clarke et al. 1993; Yuste et al. 1999), including plant viruses (De la Iglesia and Elena 2007). The rate at which the ratchet clicks accelerates in a feedback process known as mutational meltdown (Lynch and Gabriel 1990; Gabriel, Lynch, and Bürger 1993; Lynch et al. 1993): the higher the mutational load, the fewer viable individuals in the population and, hence, the smaller the effective population size and the stronger the bottleneck. Despite the pervasive presence of bottlenecks during plant infections, plant RNA viruses do not extinguish themselves. Why? Our observation that compensatory evolution takes place as soon as the subclones regenerate a new mutant swarm and evolve back to the fitness of the original population is relevant to answering this question. The longer the duration of the infection, the more chances for the mutant swarm to contain the right combination of compensatory mutations or reversion mutants that are quickly and efficiently selected for, resulting in fitness recoveries within the host. Indeed, it was shown for the chikungunya alphavirus that this recovery from the deleterious effect of fixed mutations was strongly dependent on the fidelity of the viral RdRp (Coffey et al. 2011): while wild-type viruses created highly evolvable and virulent mutant swarms, viruses with a high-fidelity RdRp were unable to recover fitness as they produced more homogeneous mutant swarms (Coffey et al. 2011).
Two considerations must be made in the context of the operation of Muller's ratchet in finite viral populations. First, the distribution of fitness differences among subclonal components represents a biased sample from the real underlying distribution of mutational fitness effects associated with single point mutations for TEV on its natural host (Carrasco, De la Iglesia, and Elena 2007b). Although real distributions incorporate a substantial fraction of lethal mutations for TEV (Carrasco, De la Iglesia, and Elena 2007b) and VSV (Sanjuán, Moya, and Elena 2004a), the sample generated in this study only contains viable genotypes, that is, those able to generate a visible local lesion in quinoa leaves. This problem was also evident in Duarte et al.'s (1994) study with VSV. The existence of lethal alleles within a mutant swarm further reduces its effective population size, as lethal genotypes cannot contribute to the next generation, thus eventually accelerating Muller's ratchet. Second, the subclonal components may contain more than one mutation, and likely the newly created mutant swarms will contain genotypes carrying more than one mutation. If mutations interact epistatically, especially if they do so in a synergistic manner, and mutational effects are all identical, the ratchet will halt (Kondrashov 1994). However, if mutational effects follow some continuous distribution, the ratchet will operate regardless of the way mutations interact (Butcher 1995). As already mentioned, mutational effects are variable for TEV and VSV, while epistasis among mutations has been shown to be predominantly of the antagonistic type for these two viruses (Sanjuán, Moya, and Elena 2004b; Lalić and Elena 2012), thus keeping the field open for the ratchet to operate.
A final consideration is needed to avoid misconceptions. As we have discussed, Muller's ratchet is expected to operate only in small viral populations wherein selection is relaxed and drift plays a major role. This is not to be confused with lethal mutagenesis, a completely different phenomenon. Lethal mutagenesis in viral populations is a deterministic process that is independent of population size and depends only on an extra-high mutation rate and on the number of replication-competent offspring per parent being small (Bull, Sanjuán, and Wilke 2007). The potential effectiveness of lethal mutagenesis as an antiviral therapy is beyond the scope of this study.
Data availability
Data are available through LabArchives doi: 10.6070/H4BV7DN9.
Supplementary data
Supplementary data are available at Virus Evolution online.
Funding
This work was supported by grants BFU2012-30805 from the Spanish Ministry of Economy and Competitiveness (MINECO), PROMETEOII/2014/021 from Generalitat Valenciana and the EvoEvo project (ICT610427) from the European Commission seventh Framework Program to S.F.E. H.C. was supported by predoctoral contract BES2013-065595 from MINECO.
A Methodology to Support the Flexibility Maximization for Multi-Energy Systems to Provide Services to the Electrical Distribution System
The paper proposes the Energy-Lattice methodology, designed to model and analyse multi-energy systems (MES) as energy-transformation flows. A mixed-integer linear programming algorithm supports the methodology in setting the short-term planning of a MES to satisfy the multi-energy demand and to provide services, like ancillary services to the power system. The methodology is based on the notion of energy layers associated with energy carriers. An energy layer represents the provision of services and the satisfaction of the external demand through the operation of suitable devices, like generators, storages and loads related to an energy carrier. Energy layers are related to each other by conversion nodes. This work was partially carried out in the European H2020 project MAGNITUDE (n. 774309). The paper illustrates the main features of the Energy-Lattice methodology and the underlying algorithm that models the behaviour of the MES in the short term. This algorithm is a mathematical mixed-integer linear program composed of two steps. The former copes with the energy demand, and the latter, according to the results of the first one, verifies the economic convenience of providing ancillary services according to the identified flexibility margins.
Introduction
The transformation of the energy landscape towards decentralized low-carbon energy systems is leading to a redesign of the generation devices supplying the demand and to a revision of the electricity system management strategies [1]. In this respect, the strong commitment arising from power system operators concerns the ever greater availability of resources for regulation, exchanged as ancillary services (AS) [2]. The set of ASs consists of different procedures to control the stability and balance of the power system. These bring into action active power control resources like primary, secondary and tertiary reserve, power balancing and congestion resolution. Traditional resources devoted to system regulation are progressively reducing in number and being substituted by variable renewable energy sources (vRES), which are mainly responsible for the issues currently occurring in electrical systems [2]. This trend can be fruitfully mitigated by the available resources acting on the electrical (distribution) system and on other energy systems, like gas, heat, etc., which can share flexibility to support the electrical system. Multi-energy systems (MES) [3], [4] are systems that can share flexibility to support the power system. The integration of technologies like generation, energy storage, renewable energy, short-distance transmission and natural gas is considered an effective way to improve energy utilization efficiency, accommodate more renewable energy and satisfy multiple energy demands [5].
The physical and commercial coupling enables great synergies among the energy carriers but at the same time introduces a higher level of complexity to be managed. Several complexity streams can be identified: spatial: a MES can be a converter device or a pool of devices, as well as an energy area, an entire country or a region; temporal: different functionalities like operation, balancing and planning of a MES, in the "very short" and "short" term, with time resolutions of seconds, minutes or hours and time horizons of hours, days or weeks; networks: integration of energy networks (for electricity, gas, DH -district heating- and cooling, hydrogen, and so on). The latter is currently one of the main barriers to fully obtaining the benefits of MES integration.
A strong commitment arises from the power system to request the availability of resources for regulation, organized as ancillary service (AS) products. Power system stability and balance procedures are implemented through several market products, like primary, secondary and tertiary reserve, power balancing, congestion resolution, etc. (Multi-)energy systems are able to provide ASs in a flexible way by combining generators and loads, and in some cases even storage systems, across several energy carriers. The complexity of modelling and analysing MES represents a barrier to the complete exploitation of the benefits arising from their integration. Delimiting this complexity is the key goal addressed in this paper. Complexity reduction will be dealt with from several perspectives: the identification of the services, of the demand and of the different devices composing the MES. Devices are modelled to meet the requirements posed by a planning scheme, which are mainly associated with the spatial and temporal streams previously introduced, as addressed in [6]. The work proposed in this paper, currently carried out by RSE and ACS within the European H2020 project MAGNITUDE (n. 774309) and the Research Fund for the Italian Electrical System, investigates how to identify and design the optimization strategies that maximize the synergies among multi-energy systems for the provision of services to the electrical system. The paper is organized as follows: in Section 2 the Energy-Lattice methodology is introduced, in Section 3 the two-stage algorithm is sketched, and in Section 4 an exemplification of the methodology, based on the Milan district-heating case study, is proposed.
The Energy Lattice Methodology
The complexity of the MES modelling phase is mainly due to the coordination of multiple devices, operating on different energy carriers (electricity, gas, heating, cooling, etc.), to satisfy the multiple-carrier demand and to provide multiple services. Devices taken into account include: energy converters (e.g., gas turbine, gas boiler, electric chiller, absorption chiller), energy storages (electricity and heat storages) and transformers [4]. The energy transformation process to satisfy the demand can result in energy losses and demand-not-met.
The diffusion of modelling languages to express mathematical programming problems (e.g., AMPL, GAMS, CPLEX/OPL, etc.) is leading to abstract the design point of view from the "low-level coding" related to the mathematical programming paradigm to a more abstract one, based on algebraic equations, possibly associated with graphs, to represent the transformation of energy fluxes through system devices and network carriers. Furthermore, the possibility of providing a common framework covering different levels, from planning to control, is an increasingly requested feature.
• The Energy Lattice The methodology links each energy carrier managed by the MES to an energy-layer (EL). An energy-layer hosts the energy process related to the associated carrier, devoted to providing external services and satisfying the carrier demand. A general representation of an energy-layer is proposed in Fig. 1. The energy transformation/production is modelled by interactions of elementary entities such as generators (GEN), energy storages (STO) and loads (LOAD). The satisfaction of the carrier's demand is represented by a withdrawal of energy, while service provision is an energy contribution, a withdrawal, or both. An energy-layer quantifies losses and demand-not-met. In general, the model of a MES involves as many energy-layers as energy carriers it manages. Energy layers are linked together by energy conversion-nodes (CN). Each CN represents the conversion of energy performed by a conversion device, for instance a combined heat and power generator (CHP), a chiller, etc. A conversion device provides as many CNs as energy conversions it performs.
Figure 1 : Energy (Carrier) Layer: main constituents
• Balancing node/carrier network The simplest way to model the flow among services, generators, storages, loads and demand is through a single balancing node. Each entity is linked with an energy flux having a specific direction. However, if the complexity or the features required to model the energy fluxes cannot be represented by a single node, a detailed network is introduced. In this formulation of the methodology only the balancing-node case is taken into account. Energy equilibrium at balancing nodes is solved according to Kirchhoff's current law, as stated for electric circuits. That is, the algebraic sum of energy inflows (contributions) to the node must be equal to the algebraic sum of the corresponding energy outflows (withdrawals), accounting for losses and demand-not-met.
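As an illustration of this balancing rule, the sketch below checks the node equilibrium of one energy layer for a single time step. It is a minimal sketch, not the project's implementation; all function names and numerical values are assumptions introduced only for the example.

```python
# Minimal sketch: checking the balancing-node equilibrium of one energy layer
# for a single time step. Names and values are illustrative only.

def layer_balance_residual(generation, storage_discharge, service_in, demand_not_met,
                           loads, storage_charge, service_out, demand, losses):
    """Return the residual of the balancing-node equation.

    Contributions (inflows) must equal withdrawals (outflows); a residual of
    zero means the layer is balanced.
    """
    inflows = sum(generation) + sum(storage_discharge) + service_in + demand_not_met
    outflows = sum(loads) + sum(storage_charge) + service_out + demand + losses
    return inflows - outflows


# Example: a heat layer with two generator contributions and one storage.
residual = layer_balance_residual(
    generation=[40.0, 15.0],      # energy produced by the layer's generators
    storage_discharge=[5.0],      # energy discharged from heat storage
    service_in=0.0,               # energy imported to provide a service
    demand_not_met=0.0,           # unmet demand (slack)
    loads=[0.0],                  # internal loads
    storage_charge=[2.0],         # energy charged into heat storage
    service_out=3.0,              # energy exported as a service
    demand=54.0,                  # carrier demand
    losses=1.0,                   # losses in the layer
)
assert abs(residual) < 1e-9, "layer is not balanced"
```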
• Conversion nodes to link energy-layers Each coupling device operates an energy conversion among two or more energy carriers; this is represented by as many suitable conversion-nodes (CNs) as needed. According to a functional view, a CN is a bidirectional rule that relates input and output energy flows. When the input energy flow to the CN is the input energy flow to the device and the output energy flow of the CN is the output energy flow of the device, the CN denotes the device efficiency.
• Carrier process and the lattice model The complete model of a MES consists of as many energy-layers (ELs) as energy-carriers (ECs) are involved in its energy processes; ELs are linked together by suitable CNs. Within each energy layer a node, or a graph, links together GENs, STOs and LOADs, and coupling devices link the energy layers through CNs. The resulting model is a graph. This graph ensures the soundness of the energy-transformation processes held in the model. That is, the balance obtained in one EL must be coherently reflected in all the other connected ELs while providing services and satisfying the demands.
• MES and operational flexibility identification As previously shown, the model of a MES expressed by the energy-lattice methodology ensures balancing among the different entities. The methodology also supports the identification of the (operational) flexibility owned by a MES, in order to gather extra resources to cover extra services. For instance, this suits well the case in which extra flexibility margins are identified to be offered on the ancillary services market after participation in the day-ahead electricity market. In Fig. 2 the energy-layer is integrated with the information regarding the flexibility margins owned by the MES entities.
The operating range of each device d is constrained by

uc_d(t) · P_d^min ≤ P_d(t) ≤ uc_d(t) · P_d^max, (2)

where uc_d denotes the unit-commitment of d, and P_d(t) is a physical (input or output) variable of d at time t; P_d^min and P_d^max denote, respectively, the lower and the upper limit value. For each device d, power shifting (in upward or downward direction) must be within the maximum power variation allowed for d (the ramp-rate ρ_d),

|P_d(t) − P_d(t−1)| ≤ ρ_d.

For a coupling device that links energy carriers E_j and E_i, the conversion coefficient is given by

P_d^out(t) = σ_d^{Ei,Ej} · P_d^in(t),

where P_d^out is the output power (on E_i), P_d^in is the input power (on E_j), and σ_d^{Ei,Ej} labels the CN and specifies the conversion needed to obtain P_d^out from P_d^in; in this case σ identifies the efficiency η of d to convert P_d from E_j to E_i. The state of charge of storage devices at each time instant depends on the history and on the current charging contribution and discharging withdrawal, and is computed by the equation

SOC_d(t) = (1 − ω_d) · SOC_d(t−1) + η_d^CH · P_d^CH(t) − P_d^DCH(t) / η_d^DCH,

where SOC_d denotes the state-of-charge of storage d, ω_d denotes the losses of the storage per time unit, P_d^CH and P_d^DCH denote, respectively, the charging and discharging power of d, and η_d^CH and η_d^DCH denote, respectively, the charging and discharging efficiency of d. The balancing equation for each energy-layer E_i, taking into account service provision and demand satisfaction (with suitable management policies [8]), is

Σ_g P_g(t) + Σ_s P_s^DCH(t) + P_sr^IN(t) + λ(t) = Σ_l P_l(t) + Σ_s P_s^CH(t) + P_sr^OUT(t) + D(t) + ω(t),

where on the left-hand side (contributions) there are the GENs P_g, the discharging STOs P_s^DCH, the imported service power P_sr^IN (service S−) and the demand-not-met λ. On the right-hand side (withdrawals) there are the LOADs P_l, the charging STOs P_s^CH, the exported service power P_sr^OUT (service S+), the demand D and the losses ω.
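A compact sketch of how these device relations can be evaluated in code is given below. It is illustrative only (the optimization models themselves would be written in a mathematical programming language, as discussed above), and all names and numerical values are assumptions introduced for the example.

```python
# Illustrative sketch of the device equations above (not the project code).
from dataclasses import dataclass


@dataclass
class Storage:
    soc: float        # current state of charge
    omega: float      # losses per time unit (fraction of SOC)
    eta_ch: float     # charging efficiency
    eta_dch: float    # discharging efficiency


def within_limits(p, uc, p_min, p_max):
    """Unit-commitment bounds: uc * p_min <= p <= uc * p_max."""
    return uc * p_min <= p <= uc * p_max


def within_ramp(p_now, p_prev, rho):
    """Ramp-rate constraint: |P(t) - P(t-1)| <= rho."""
    return abs(p_now - p_prev) <= rho


def convert(p_in, sigma):
    """Conversion node: output power = sigma * input power (sigma = efficiency)."""
    return sigma * p_in


def update_soc(s, p_ch, p_dch):
    """State-of-charge update for one time step."""
    s.soc = (1.0 - s.omega) * s.soc + s.eta_ch * p_ch - p_dch / s.eta_dch
    return s.soc


# Example: a CHP electric output of 8 MW converted from 20 MW of gas input,
# checked against (assumed) limits, and a heat storage charged with 4 MWh.
p_el = convert(20.0, sigma=0.40)
assert within_limits(p_el, uc=1, p_min=2.0, p_max=10.0)
assert within_ramp(p_el, p_prev=6.0, rho=3.0)

heat_storage = Storage(soc=12.0, omega=0.01, eta_ch=0.95, eta_dch=0.95)
update_soc(heat_storage, p_ch=4.0, p_dch=0.0)
```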
Both the first- and the second-stage algorithm pursue a reduction of the cost of energy production P_g and an increase of the revenue due to demand satisfaction P_l and to electricity market participation (for the electricity energy carrier, [9]), that is, they minimize a cost function of the form

Σ_d cost_d · P_d(t) − rem_D · D(t) + cost_sr^IN · P_sr^IN(t) − rem_sr^OUT · P_sr^OUT(t).

The term cost_d represents the unitary production cost and P_d the power generated by d, rem_D represents the unitary remuneration due to supplying the demand D, and cost_sr^IN and rem_sr^OUT represent, respectively, the unitary cost and remuneration due to the import and export of power for some service provided by the MES. The second-stage algorithm differs from the first one because it looks for extra services to be provided and, to this aim, computes flexibility margins for GENs, STOs and LOADs. Flexibility is distinguished between upward (up) and downward (dw) margins (identified with M1 and M2), for generation and load modes,

M1_d(t) = uc_d(t) · P_d^max − P_d(t),   M2_d(t) = P_d(t) − uc_d(t) · P_d^min,

where in case the device d is a generator: M1 = up and M2 = dw, while in case it is a load: M1 = dw and M2 = up.
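As an illustration of the margin computation, the sketch below derives upward and downward flexibility for a device from its commitment, limits and current set-point. Capping the margins by the ramp-rate is an assumption about how they would be limited in practice, not something stated above; all names and values are illustrative.

```python
# Illustrative sketch (assumption: margins are capped by both capacity and ramp-rate).
def flexibility_margins(p, uc, p_min, p_max, rho, is_generator=True):
    """Return (M1, M2) flexibility margins of a device.

    For a generator, M1 is the upward margin (headroom towards uc*p_max) and
    M2 the downward margin (room towards uc*p_min); for a load, the roles swap.
    """
    headroom = max(uc * p_max - p, 0.0)
    footroom = max(p - uc * p_min, 0.0)
    up, dw = min(headroom, rho), min(footroom, rho)
    return (up, dw) if is_generator else (dw, up)


# Example: a committed generator at 6 MW with 2..10 MW limits and a 3 MW ramp-rate.
print(flexibility_margins(p=6.0, uc=1, p_min=2.0, p_max=10.0, rho=3.0))  # (3.0, 3.0)
```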
Methodology exemplification
In this section a sketch of the testing of the Energy-Lattice methodology is proposed. It refers to the analysis of the (real) case study of the Milan district-heating (DH) system. This plant includes 9 devices feeding the district-heating network, which links 700 buildings. The plant devices include: 3 combined heat and power (CHP) gas engines, 1 water/water heat pump, 3 gas boilers, 2 heat storages (operated as a single unit) and 1 electric boiler. The next table illustrates the essential technical characteristics of these devices. The Energy-Lattice multiple-layer representation is proposed in the next figure.
CONCLUSIONS
The paper proposed a methodology to support the short-term planning of multi-energy systems. The methodology is designed to ensure the balancing equilibrium across several energy carriers and, at the same time, to associate to each energy carrier its own resources. The methodology includes a two-stage algorithm to analyse and operate a MES, and it was exemplified on a case study that is a fragment of the Milan district-heating system.
|
2022-03-28T15:02:43.572Z
|
2020-01-01T00:00:00.000
|
{
"year": 2020,
"sha1": "55602780c55376180186ff6e6a6c4595bed0c8f0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1049/oap-cired.2021.0306",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7a2dcf2e1cba788075dfb89e337d4b1a08231662",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
}
|
3427615
|
pes2o/s2orc
|
v3-fos-license
|
Climate contributions to vegetation variations in Central Asian drylands
Central Asia comprises a large fraction of the world's drylands, which are known to be vulnerable to climate change. We analyzed the inter-annual trends and the impact of climate variability on vegetation greenness in Central Asia from 1982 to 2011 using GIMMS3g normalized difference vegetation index (NDVI) data. In our study, most areas showed an increasing trend during 1982–1991, but a decreasing trend later in the study period. The impact of climate on vegetation was significantly different for the different sub-regions before and after 1992, coinciding with the collapse of the Union of Soviet Socialist Republics (USSR). It is suggested that these spatio-temporal patterns in greenness change and their relationship with climate change could, for some regions, be explained by the changes in the socio-economic structure resulting from the USSR collapse in late 1991. Our results clearly illustrate the combined influence of climatic and anthropogenic contributions to vegetation growth in Central Asian drylands. Due to the USSR collapse, this region represents a unique case study of the vegetation response to climate change under different climatic and socio-economic conditions.
Introduction
Central Asia comprises one of the most important drylands in the world, accounting for nearly 1/10 of the total dryland area on the planet [1]. This region feeds over 64 million people, with its population expected to reach 87 million in 2050 (FAO, [2]). Central Asia is a moisture-limited ecosystem dominated by steppes, semi-deserts, and deserts [3], and represents an important carbon reservoir, playing a significant role in the global carbon cycle [4]. However, large fluctuations in vegetation greenness have been recently observed in this region. Examining the greenness variation in 1982-2006 in Eurasia, Piao et al. [5] noted a turning point in the normalized difference vegetation index (NDVI) trajectory for Central Asia in 1992-1996, when the greenness started to decrease. Significant negative trends in vegetation greenness have continued for large parts of Central Asia since the beginning of the 21st century [6,7].
Many studies have shown that drylands in Central Asia are especially sensitive and susceptible to climate change and environmental degradation [8][9][10]. Continuous increases in temperature contributed to increased evapotranspiration in this region. Thus, shortage of water resources and aridity are expected to be intensified [4,[11][12][13][14]. Under such circumstances, Eisfelder et al. [15] pointed out that temperature is an important factor for plant net primary productivity (NPP) during spring in Kazakhstan. Although precipitation is highly variable [4,11], several studies have found that the plants in Central Asia are sensitive to precipitation anomalies and exhibit lagged responses for most areas [15][16][17][18]. However, studies on the combined impact of temperature and precipitation on the vegetation greenness in Central Asia are limited so far.
In addition to climate change, the collapse of the Union of Soviet Socialist Republics (USSR) in 1991 was another pivotal factor affecting plant growth [19]. Given the political, economic, and social instability following the USSR collapse, terrestrial ecosystems became particularly vulnerable to human disturbance, such as land-use changes [11,20,21]. Vast areas of ploughed land in Kazakhstan, used as extensive farmland during the USSR period, were abandoned because of the reduction in population following the USSR collapse [19,22,23]. Accordingly, significant differences in the vegetation greening trend [5,24] and phenology were found in Central Asia for the pre- and post-USSR collapse periods [25]. Thus, a deeper understanding of the challenges faced by Central Asian countries following the USSR collapse is crucial for designing a comprehensive adaptation strategy to contemporary climate change in the broader context of sustainable development. However, an assessment of the different mechanisms driving the vegetation response to climate change under different socio-economic circumstances has received little attention in the scientific literature.
This study addresses the need to assess the terrestrial vegetation response to climate variations in Central Asia pre- and post-USSR collapse. More specifically, we aimed to (1) investigate the long-term changes in vegetation, especially for grasslands and croplands, and (2) determine the contribution of climatic factors to vegetation growth during 1982-2011, which encompasses the pre- and post-USSR collapse periods.
Study Area
Central Asia constitutes the core region of the Asian continent, spanning from the Caspian Sea in the west to China in the east and from Afghanistan in the south to Siberia in the north (Figure 1). Our study area included five countries: Kazakhstan, Uzbekistan, Turkmenistan, Tajikistan, and Kyrgyzstan. Its mean summer temperatures range from 20 °C in the north to above 30 °C in the south [26], while during winter they are below zero, with extremes below −20 °C in the northern and mountain areas. The mean annual precipitation in the lowlands ranges between ~400 mm in the north of Kazakhstan and less than 100 mm in some areas of Uzbekistan and Turkmenistan [11,26].
Figure 1.
Map of study area. The land cover data was adapted from the 500 m Moderate Resolution Imaging Spectroradiometer (MODIS) land cover product (MCD12Q1). Data for population, roads, rivers, lake centerlines, and country boundaries were downloaded from the Natural Earth database [27].
Central Asia has approximately 6000 lakes, most of which are inland lakes [28]. Water originating from snow melting in the mountains is provided to the lowland lakes by rivers, with upstream storage reservoirs used for agricultural irrigation. However, more than half of the inland lakes have experienced significant decreases since 1975 [29], exacerbating serious environmental issues such as soil salinization, pollution of water sources [30], frequent drought, and land degradation [11].
According to the Moderate Resolution Imaging Spectroradiometer (MODIS) land cover map (MCD12Q1), barren lands occupy most of Uzbekistan and Turkmenistan. Regarding the vegetation, grasslands account for 60.6% of the total territory in Central Asia, followed by croplands (12.0%), shrublands (9.5%), and forests (1.0%) (Figure 1). Grasslands occupy considerable areas of Kazakhstan, Tajikistan, and Kyrgyzstan. The dominant crop types are rain-fed and include wheat, oats, and barley in northern Kazakhstan, while along the Amu Darya and Syr Darya (mainly in Turkmenistan, Uzbekistan, and southern Kazakhstan) vast areas are used for irrigated crops (e.g., cotton and rice).
GIMMS3g NDVI Data
The GIMMS3g NDVI dataset, available from July 1981 to December 2011, is the latest NDVI product released by the NASA Global Inventory Modeling and Mapping Studies (GIMMS) group. The dataset was generated by the Advanced Very High Resolution Radiometer (AVHRR) onboard a series of National Oceanic and Atmospheric Administration (NOAA) satellites (NOAA 7, 9, 11, 14, 16, 17, and 18). To avoid effects on the quality of the AVHRR data caused by sensor changes between the NOAA satellites and by orbital decay, several procedures were performed to minimize the deviation. Specifically, a satellite orbital drift correction was performed using the empirical mode decomposition (EMD)/reconstruction method, which minimizes the effects of orbital drift by removing the common trends between the time series for the solar zenith angle (SZA) and NDVI [31,32]. Additionally, corrections were applied for volcanic stratospheric aerosol effects from the El Chichon (1982-1984) and Mt. Pinatubo (1991-1993) volcanic eruptions [31]. Calibration was performed using SeaWiFS data, as opposed to earlier GIMMS NDVI datasets, which were based on inter-calibration with the SPOT sensor [33].
The GIMMS3g NDVI dataset (1982-2011) has a spatial resolution of 1/12° and a temporal resolution of 15 days, and is currently considered the best dataset available for long-term NDVI trend analysis [33]. The GIMMS3g NDVI dataset has already been shown to accurately represent the real responses of vegetation to climate variability [34].
In this study, we aggregated the NDVI data to monthly observations for the entire study area in 1982-2011 using the maximum value composition, further reducing cloud and other noise effects. We calculated the annually and seasonally averaged NDVI for each year. In Central Asia, the vegetation growing season starts around April/May and lasts until October, when temperatures drop and snow cover blocks the absorption of incoming photosynthetically active radiation (APAR), effectively stopping vegetation growth [7,17,35]. Therefore, seasonal NDVI was calculated as the mean NDVI for spring (April-May), summer (June-August), and autumn (September-October).
Climate Data
Climate data were derived from the Modern Era Retrospective-Analysis for Research and Applications (MERRA) project at the Global Modeling and Assimilation Office (GMAO; [36]). The MERRA reanalysis dataset uses observations from NASA's Earth Observing System satellites and reduces the uncertainty in precipitation data for the water cycle, with significant improvements over the previous generation of datasets [37].
Meteorological data from MERRA have a spatial resolution of 0.5° × 0.667°, covering the period from 1979 to present. In this study, we used monthly temperature and precipitation data for 1982-2011. The climate data were resampled to ensure a 1/12° spatial resolution consistent with the GIMMS3g NDVI data and were used to explore the impact of climate on the vegetation greenness. Generally, the climate variables have relatively smooth transitions in space in Central Asia, especially for temperature. Interpolation is therefore a suitable method for the climate data and has been commonly used in regional climate studies. Similar to the NDVI, we calculated the annual and seasonal total precipitation and mean temperature for each year for 1982-2011 using the monthly data.
Trend Analysis of NDVI and Climatic Factors
We identified the regions with significant annual and seasonal NDVI trends in 1982-2011 using linear regression:

y = a + b·x + ε, (1)

where y is the dependent variable representing the annual or seasonal NDVI (or climate variable) and x is the independent variable representing the year. The parameter b is the slope of the regression line, a is the intercept, and ε represents the error term. In this linear model, we adopted the least absolute deviation (LAD) method, which minimizes the sum of absolute deviations, in contrast with the least-squares method. For data with large dispersion, the values fitted with the LAD method are closer to the real values [38]. The LAD method is also more powerful than least-squares for asymmetric error distributions and heavy-tailed, symmetric error distributions, and more resistant to the influence of outliers in the dependent variable [38][39][40][41].
The LAD linear regression model was applied to all pixels in the time-series images, and maps of the annual and seasonal trends of NDVI and climate variables were created to show the positive (increasing) or negative (decreasing) trends in the data. Then, we conducted an analysis of the regional trends by focusing on the spatially averaged time series for NDVI, temperature, and precipitation.
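For illustration, the sketch below fits the LAD trend model (1) to a single pixel's annual series by numerically minimizing the sum of absolute deviations. It is a simplified stand-in for the study's processing chain; the NDVI values and all variable names are invented for the example.

```python
# Minimal LAD (least absolute deviation) trend fit for one pixel's annual series.
# Illustrative sketch, not the processing chain used in the study.
import numpy as np
from scipy.optimize import minimize

years = np.arange(1982, 2012)                                   # independent variable x
ndvi = 0.30 + 0.002 * (years - 1982) + 0.01 * np.sin(years)     # invented example data

def sum_abs_dev(params, x, y):
    a, b = params
    return np.sum(np.abs(y - (a + b * x)))

# Use the least-squares estimate as the starting point, then minimize the L1 loss.
b0, a0 = np.polyfit(years, ndvi, 1)
res = minimize(sum_abs_dev, x0=[a0, b0], args=(years, ndvi), method="Nelder-Mead")
intercept, slope = res.x
print(f"LAD slope: {slope:.5f} NDVI/yr")
```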
Multiple- and Partial-Correlation Analysis for the Climatic Impact on Vegetation
To quantify the effect of rainfall and temperature on the greenness trends, we used multiple correlation regression to simulate their relationships (Equation (2)):

ŷ = b₀ + b₁·x₁ + b₂·x₂. (2)

In this equation, x₁ and x₂ represent the precipitation and temperature, respectively, and y represents the NDVI; ŷ is the NDVI simulated by the regression on x₁ and x₂. The multiple-correlation coefficient (R) was calculated by Equation (3) as the correlation between the observed NDVI y and the simulated NDVI ŷ. The significance of the correlation coefficient was estimated by an F-test at a significance level of 95%, with a larger coefficient representing a closer relationship between NDVI and climate. However, multiple correlation cannot identify negative or positive relationships as it ranges from 0 to 1. Thus, we adopted a partial correlation coefficient to demonstrate the contribution of each climatic factor to the vegetation greenness. Partial correlation measures the degree of association between two variables without the influence of the other factor, as follows:
r_{y1·2} = (r_{y1} − r_{y2} · r_{12}) / √((1 − r_{y2}²)(1 − r_{12}²)),

where r_{y1}, r_{y2}, and r_{12} are the correlation coefficients between NDVI and one climatic factor (e.g., precipitation), between NDVI and the other climatic factor (e.g., temperature), and between the two climatic factors, respectively, and r_{y1·2} is the partial correlation coefficient between one climatic factor and NDVI, excluding the other climatic factor. A t-test was adopted to test the partial correlation coefficients, whose significance was estimated at a level of 95%.
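A compact sketch of this partial-correlation computation is shown below (pure NumPy, with invented series); all variable names and values are illustrative, not the study's data or code.

```python
# Partial correlation of NDVI with precipitation, controlling for temperature.
# Illustrative sketch with invented data; not the study's code.
import numpy as np

def partial_corr(y, x1, x2):
    """r_{y,x1.x2}: correlation of y and x1 with the effect of x2 removed."""
    r_y1 = np.corrcoef(y, x1)[0, 1]
    r_y2 = np.corrcoef(y, x2)[0, 1]
    r_12 = np.corrcoef(x1, x2)[0, 1]
    return (r_y1 - r_y2 * r_12) / np.sqrt((1 - r_y2**2) * (1 - r_12**2))

rng = np.random.default_rng(0)
temp = rng.normal(8.0, 1.0, 30)                      # invented annual mean temperature
precip = rng.normal(250.0, 40.0, 30)                 # invented annual precipitation
ndvi = 0.2 + 0.0005 * precip - 0.005 * temp + rng.normal(0, 0.01, 30)

print(partial_corr(ndvi, precip, temp))              # NDVI-precipitation, temperature removed
print(partial_corr(ndvi, temp, precip))              # NDVI-temperature, precipitation removed
```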
Time-Lag Correlation Analysis between NDVI and Precipitation
The relationship between NDVI and precipitation is characterized by a time-lag response of NDVI in relation to rainfall, since the vegetation is affected by both current and previous events. Moreover, this relationship can differ significantly depending on the plant growth stage [42,43]. Thus, the relationship between NDVI and precipitation was depicted by regressing NDVI_t on the lagged precipitation P_{t−k} for each date d during the growing season, where NDVI_t is the NDVI value at time t, P_{t−k} is the precipitation volume at time t − k, k is the lag length, d is the date during the growing season, and ε is the random error. In our study, we simplified this relationship for each month during the growing season according to Gessner et al. [17]: 1. The relationship between NDVI and precipitation was analyzed through partial correlation according to Equation (2). The time-lagged correlations were performed for four different month lags (lag 0, lag 1, lag 2, and lag 3) for each pixel. For each lag effect, we considered the cumulative effect of 1-5 months [17], resulting in 20 lag-correlations for all conditions (Figure 2). 2. To assess the most accurate lag-response of the vegetation to precipitation changes, we adopted the maximum correlation coefficient of the 20 correlation analyses for each pixel. 3. Through the maximum correlation coefficient, we determined the lag time from the 20 lag-correlation results. The lag effects were illustrated by the maximum correlation coefficient and corresponding lag time.
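The 20 lag/accumulation combinations described above can be enumerated as in the sketch below, which correlates monthly NDVI with precipitation accumulated over 1-5 months at lags of 0-3 months and keeps the best combination. The monthly series and all names are invented for illustration; this is not the study's processing code.

```python
# Time-lag correlation between monthly NDVI and cumulative precipitation.
# Illustrative sketch with invented monthly series (not the study's code).
import numpy as np

rng = np.random.default_rng(1)
precip = rng.gamma(2.0, 15.0, size=360)            # invented monthly precipitation, 30 years
ndvi = 0.2 + 0.001 * np.convolve(precip, np.ones(3), "same") + rng.normal(0, 0.02, 360)

best = (0.0, None, None)                           # (|r|, lag in months, accumulation months)
for lag in range(4):                               # lag 0-3 months
    for acc in range(1, 6):                        # accumulate over 1-5 months
        pairs = []
        for t in range(12, 360):                   # skip the first year for a full window
            p_sum = precip[t - lag - acc + 1 : t - lag + 1].sum()
            pairs.append((ndvi[t], p_sum))
        arr = np.array(pairs)
        r = np.corrcoef(arr[:, 0], arr[:, 1])[0, 1]
        if abs(r) > best[0]:
            best = (abs(r), lag, acc)

print(f"best |r| = {best[0]:.2f} at lag {best[1]} months, accumulated over {best[2]} months")
```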
Annual Trends in the Climatic Factors and NDVI
The annual total precipitation and annual mean temperature from the MERRA dataset were calculated for 1982-2011 (Figure 3(a1,b1)). The annual total precipitation varied from 11 mm in the lowlands of Uzbekistan to over 800 mm in the high mountain grasslands of the southern regions. Temperature exhibited an opposite spatial pattern to precipitation. Low temperatures were found for the high-latitude and mountainous areas, where the annual mean temperature reached below zero. High temperatures were found for the southwestern areas of Central Asia, where the annual total precipitation was generally below 200 mm.
Precipitation trends showed high spatial heterogeneity during 1982-2011 (Figure 3(a2)). Positive trends were concentrated in the southwest and east regions, while negative trends widely occurred across the majority of Central Asia, such as in the grasslands of Kazakhstan and Tajikistan. The regionally averaged total precipitation decreased significantly (p-value < 0.05) in 1982-2011 (Figure 3(a3)). Concurrently, a significant warming trend was detected, with an increasing rate of 0.052 °C/yr (p-value < 0.05) (Figure 3(b3)) and covering over 90% of the vegetated areas in Central Asia (Figure 3(b2)). Overall, the climate in Central Asia was characterized by a warming and drying trend during 1982-2011.
The NDVI was relatively high for the northern high latitudes of Kazakhstan and southern Central Asia, where rainfall was sufficient and temperature was relatively low compared to middle Central Asia (Figure 4a). Relatively low NDVI values appeared mainly in central and southwestern Central Asia, mostly in shrubland areas, with little annual rainfall (0-150 mm) and high annual temperatures (10-20 °C) (Figure 3(a1,b1)). The spatially averaged NDVI trend for 1982-2011 is shown in Figure 4b. Before 1991, the vegetation showed a greenness increase of up to 0.017/yr (p-value = 0.14). After 1991, there was an obvious decrease in greenness over Central Asia at a rate of −0.009/yr (p-value = 0.05). To evaluate the spatial heterogeneity of the NDVI trends, a linear NDVI trend was calculated for each pixel during 1982-1991 and 1992-2011. The vegetation greenness experienced significant upward trends for 9.0% of the total vegetated area in Central Asia during 1982-1991 (Figure 4c). However, a downward trend in greenness was observed in 1992-2011 (Figure 4c,d). During 1992-2011, the vegetated areas with significantly browning trends (p-value < 0.05) dramatically increased to 22.9% of all vegetated area. The areas with non-significantly decreasing NDVI trends also expanded to cover 48.2% of all vegetated area. Significantly decreasing trends were mainly found for most of northern Kazakhstan and the Aral Sea Basin.
The various vegetation types changed differently in Central Asia. During 1982-1991, significantly positive greenness trends were observed for the following vegetation type areas: grasslands (7.3%), croplands (16.7%), shrublands (5.1%), and forests (34.8%) (Figure 5a). However, significant decreases in NDVI were found for these four types in 1992-2011: grasslands (21.5%), croplands (36.5%), shrublands (24.3%), and forests (18.8%). Grassland is the dominant vegetation type in Central Asia. In 1982-1991, grassland areas showed barely any significantly (p-value < 0.05) decreasing trends (Figures 4 and 5). But after 1992, the decreasing trends expanded to large areas, accounting for 21.3% of grassland areas in Central Asia. This phenomenon was mainly apparent for the lowland areas of Kazakhstan, whereas the high mountain grassland areas in the east, such as Kyrgyzstan and eastern Kazakhstan, showed very limited decreasing trends.
In addition, croplands suffered the largest decrease in 1992-2011, so a detailed statistical analysis of cropland data was performed for the five Central Asian countries (Figure 5b). Before 1991, significantly positive trends in NDVI were observed for over 10% of the croplands in all countries. However, decreasing greenness trends were observed in croplands for the five countries after the USSR collapse. The largest percentage of negative trends for Central Asia was observed in Kazakhstan, where 47.7% of croplands showed a significant greenness decrease. Most browning croplands corresponded to rain-fed agricultural areas in northern Kazakhstan (Figure 1). A decreasing greenness was also apparent for croplands in Tajikistan (4.3%), Uzbekistan (5.6%), Kyrgyzstan (3.3%), and Turkmenistan (3.4%). As the climatic variation was similar for these two periods, these differences in greenness changes might have additionally been caused by other factors, such as land-use and water supply changes.
Seasonal Trends in the Climatic Factors and NDVI
To verify the seasonal greenness contributions to the annual variation, we investigated the seasonal variation in the climatic factors and NDVI during 1982-1991 and 1992-2011. In spring, the mean precipitation was relatively lower (3.8 mm) in 1992-2011 than in 1982-1991 (Figure 6). However, the climate became warmer (1.0 °C) in 1992-2011, at a significant increasing rate of 0.2 °C/yr (p-value < 0.05), with over 40% of the vegetated areas characterized by a significant warming trend (p-value < 0.05) (Table 1). Accordingly, the area with a significant greening trend increased from 4.5% in 1982-1991 to 17.5% in 1992-2011. Precipitation did not show large spatio-temporal changes during summer (Figure 6; Table 1). Similar to spring, the summer temperature significantly increased at a rate of 0.08 °C/yr in 1992-2011 (p-value < 0.05). However, the vegetation showed dramatically different trends for these two periods. During 1982-1991, the NDVI showed a non-significant increasing trend, with 7.9% of the vegetated areas showing a significant greening trend. However, in 1992-2011, the NDVI trend significantly decreased at a rate of −0.3 unit/yr, with 13.5% of the area characterized by a significant browning trend.
In autumn, the precipitation was much lower (9.6 mm) in 1992-2011 than in 1982-1991. The temperature was higher (0.8 °C) in 1992-2011, with over 75% of the vegetated areas showing significant warming trends. Similar to summer, the NDVI trends in autumn differed for the two periods. In 1982-1991, NDVI had a significant increasing rate of 0.5 unit/yr, with 13.5% of the vegetated areas showing significant greening trends. Nevertheless, in 1992-2011, the vegetation NDVI decreased at a rate of −0.05 unit/yr, with 27.1% of the vegetated areas characterized by significant browning trends.
The warming condition and relatively higher precipitation promoted seasonal greenness in 1982-1991. Accordingly, in 1982-1991, the annual greening trend resulted from all three seasonal greening trends. However, the slightly decreasing precipitation in spring and summer contributed to the drier conditions in 1992-2011. Comparing the temperature variations for the three seasons, the seasonal warming trends were more severe in 1992-2011 than in 1982-1991, especially in autumn. Therefore, in 1992-2011, even though 17.5% of the vegetated areas showed a significant greening trend in spring, the decreasing greenness in summer and autumn led to an annual decrease in NDVI.
Climate Impact on the Vegetation Greenness.
Based on the multiple correlation analysis, in 1982-1991, only 3.3% of the vegetated area was significantly affected by climate (p-value < 0.05) (Figure 7(a1)). However, the vegetated region significantly affected by climate expanded to 5.6% in 1992-2011 (Figure 7(b1)), mainly in southern Kazakhstan and northern Uzbekistan, within the Amu Darya and Syr Darya basins. But for most areas in northern Kazakhstan, the degree of climatic impact on greenness was evidently lower in 1992-2011 than in 1982-1991.
To distinguish the individual contributions of precipitation and temperature from their combined effect, Figure 7(a2,b2) shows the partial correlation between NDVI and precipitation for the two periods, excluding the effect of temperature. In 1982-1991, the precipitation showed positive effects on the vegetation greenness in 5.1% of the vegetated areas in Central Asia, but this expanded to 11.5% of the vegetated areas in 1992-2011. These expanded areas were mainly located in the Amu Darya and Syr Darya basins, where croplands are irrigated. Comparing the spatial patterns of the areas affected by the combined climate effect during these two periods (Figure 7(a1,b1)), the expanded areas coincided with the areas affected by precipitation changes (Figure 7(a2,b2)), which implies that precipitation was the main factor of climatic impact on greenness in these regions. Figure 7(a3,b3) shows the corresponding partial correlation between NDVI and temperature, excluding the effect of precipitation. For the first period, the temperature showed strong positive effects on vegetation growth for most areas, while its consistent increase resulted in negative effects in 1992-2011 for most areas and especially for the Kazakhstan grasslands. The vegetation greenness might then have been suppressed by the significant warming trend in 1992-2011. Overall, climate had a generally weak correlation with greenness in northern Kazakhstan in 1992-2011 compared to 1982-1991. But the correlation between precipitation and greenness became stronger in some areas, such as the Amu Darya and Syr Darya basins. In addition, temperature showed evidently opposite impacts on greenness during the two periods, and the warming condition turned from a promoter of vegetation greenness in 1982-1991 to a suppressant in 1992-2011.
Lagged-Response of NDVI to Precipitation
Considering that vegetation has lagged responses to rainfall, we further studied the relationship between vegetation and precipitation in 1982-1991 and 1992-2011. Figure 8 presents the lagged maximum correlations and the months of the NDVI response to precipitation. Generally, in Central Asia, the correlation between NDVI and precipitation was positive in May-September. However, precipitation and NDVI were negatively correlated in April and October for most vegetated areas, and especially for grasslands in Kazakhstan. Additionally, the significantly correlated areas increased from April to June and declined until October, with the largest areas in June (43.3% in 1982-1991 and 52.7% in 1992-2011 of all vegetated areas) (Figure 8(c1)-(c4)).
Many areas were characterized by a strong sensitivity of NDVI to rainfall (r > 0.7) in 1982-1991 (Figure 8). In contrast to the time-lagged response of greenness to precipitation in 1982-1991, strong correlations were only observed for northwest Central Asia in 1992-2011 (Figure 8), indicating a weaker impact of precipitation on NDVI for the second period. However, there was a larger percentage of vegetated areas affected by precipitation after 1992 than before 1992. This expansion was mostly located in central and southern Central Asia, where the NDVI was comparatively lower. The analysis, including the lagged months, revealed that the vegetation greenness was significantly (p-value < 0.05) affected by precipitation without a time lag (lag 0) for most areas in 1982-1991, accounting for nearly 20% of all vegetated areas in April-October (Figure 9a). Plants with a 1-3 month lag were mainly located in the central grasslands and southern shrublands in June. However, in 1992-2011, over 25% of the vegetated areas were characterized by a time lag of 1-3 months in May-September (Figure 9b). Overall, the degree and time lag of the responses of the vegetation to precipitation were significantly different for the two periods. Considering the non-significant trends for precipitation in 1982-2011 (Table 1), it is reasonable to conclude that the observed change in the lagged response might mainly result from changes in the plant species or other disturbances.
Vegetation Variations and Their Relationships with Climatic Factors
In a study of Eurasia, Piao et al. [5] pointed out that a turning point for the NDVI in Central Asia mostly spanned from 1992 to 1996. In our study, the turning point in NDVI was set at the collapse of the USSR in late 1991, with specific implications for the vegetation greenness pre- and post-1992. Before 1992, the NDVI displayed increasing trends for most of Central Asia. However, decreasing trends in the vegetation greenness were detected for 22.9% of the vegetated areas after 1991. This coupling between vegetation trends and socio-economic conditions is in agreement with De Beurs et al. [44] and Lioubimtseva and Henebry [11].
According to our results, temperature had different impacts on greenness in 1982-1991 and 1992-2011 (Figure 7). For 1982-2011, our study showed a consistent warming trend for Central Asia, especially in 1992-2011, in agreement with previous studies [11,13]. For 1982-1991, our results demonstrated an enhancement in the vegetation greenness due to the warming trend, as warmer conditions during the early and late growing season are typically associated with lower levels of frost damage and overall better conditions for plant growth [42,45,46]. This is consistent with other results from high-latitude areas in Eurasia, North America, and China showing that a warming trend promotes greening [47][48][49][50][51]. However, as the warming trend significantly increased in 1992-2011, the increasing temperatures might have caused a larger water deficit due to evapotranspiration losses both annually and seasonally, thereby increasing the plant water stress and desiccation and impacting the rates of carbon uptake by photosynthesis [52][53][54]. Since the greenness decline observed in summer and autumn largely contributed to the annual decrease, it is likely that the extremely high temperatures in these two seasons were the main reason for the decrease in greenness in Central Asia in 1992-2011.
In addition, high temperatures are also a main cause of intense fire activity in this region, which reduces vegetation greenness abruptly. Loboda et al. [3] used MODIS global fire data to characterize fire occurrence in Central Asia, and reported that the majority of burned areas in 2001-2009 corresponded to grasslands, especially in Kazakhstan [55]. In addition, the majority of burned areas resulted from late summer and autumn fires, when the herbaceous biomass entered the senescence phase, contributing to a negative impact on greenness. This further supports our finding that a warming climate is a crucial factor for vegetation greenness changes by aggravating water shortage and fire occurrence in Central Asia.
In 1982-2011, the annual precipitation was generally positively correlated with annual NDVI (Figure 7). However, in April and October, precipitation was negatively correlated with vegetation growth (Figure 8). This contradicts the common understanding in many other dryland areas (e.g., Africa [56], Australia [57]), where precipitation is a pivotal factor promoting greenness. One reason may be that the temperature in April and October in Central Asia is extremely low, and rainfall at such low temperatures easily forms ice crystals in plant tissue, thus leading to the death of the plant, or at least damage to buds and young leaves [58]. Another reason may be that excessive rainfall may have contributed to the formation of seasonally frozen soil. When soil is frozen, its thermal conductivity increases and its heat capacity decreases [59], which could significantly suppress vegetation growth.
Although the annual precipitation failed to demonstrate an accurate relationship with simultaneous vegetation greenness, the lagged correlation analyses revealed an important response of the vegetation to precipitation in Central Asia. This lagged phenomenon is also common in other dryland regions, such as Africa [60], northeast China [61], and the Great Plains [62]. Comparing the lagged responses of the two periods, our study showed a time lag of 0 months for 1982-1991, but 1-3 months after 1992, especially for the grasslands of Kazakhstan. The time-lag mechanism of plants is controlled by the plant root system, which is capable of holding a large amount of moisture and transferring it to shoots and leaves gradually [63,64]. Compared to vegetation with less developed root systems, vegetation with strong root systems can avoid being immediately affected by precipitation shortage in the dry season. Thus, this prolonged lagged response during 1992-2011 indicates a reinforcement of the plants' root growth to strengthen their water-holding capability, pointing to a transition in vegetation functional types as vegetation adapted to climate changes in Central Asia [63,[65][66][67].
Potential Impacts of the USSR Collapse on Climate-Vegetation Relationships
Our study showed that the correlation between climate and vegetation tended to be weaker in most northern parts of Kazakhstan during the post-USSR collapse period (Figure 7). Considering that the Central Asian countries experienced large changes in land use along with socio-economic disturbances after the USSR collapse (e.g., wars, revolutions, policy changes, and economic crises) [68], these socio-economic factors are also likely to have contributed to the greenness changes, providing specific explanations for the change in climate-vegetation relationships in these regions.
During the socialist period of the USSR, northern Kazakhstan was characterized by rain-fed farmlands, accounting for 94% of the croplands in Kazakhstan in 1991-1993 [69]. This region was known as "the major granary" of the USSR and was heavily subsidized and intensively farmed [70,71], which may have been another reason for the increasing greenness in 1982-1991, apart from the climate contribution. However, comparing the regional NDVI trends for the five countries in Central Asia in 1992-2011, northern Kazakhstan showed the largest decline in vegetation greenness for croplands (Figure 5b). During the post-USSR period, with the drastically reduced profitability of farming and insecure land tenure, approximately three million people migrated out of Kazakhstan in 1991-2006 [68,70,72]. Accordingly, millions of hectares of farmland were abandoned [22,23,73], leading to a considerable decrease in crop production in the 1990s [74] and explaining the large decrease in cropland greenness in Kazakhstan (Figure 5). Due to these human disturbances, the climate impact on greenness in northern Kazakhstan was much weaker in the post-USSR period than in the pre-collapse period.
However, the climatic effects on greenness during the two periods in the Amu Darya and Syr Darya basins were completely different from those in northern Kazakhstan. In northern Kazakhstan, climate and vegetation had a strong correlation before 1991, and this correlation became weaker in 1992-2011. In the Amu Darya and Syr Darya basins, by contrast, climate showed a weak correlation with vegetation greenness in 1982-1991, which became stronger after the USSR collapse (Figures 8 and 9). This can be explained by the different land-use policies and practices for the two periods in the Amu Darya and Syr Darya basins. Under Soviet Union policy (and especially after the 1970s), the expansion of irrigated agriculture resulted in a 70% increase in irrigated farmlands in Central Asia [30]. In Uzbekistan, Turkmenistan, Tajikistan, and Kyrgyzstan, irrigation was the dominant agricultural practice, accounting for 92%, 91%, 75%, and 67% of all croplands, respectively, in 1991-1993 [69]. The irrigation water supply was the key factor for crop yield; therefore, rainfall had less impact on vegetation greenness during the pre-USSR collapse period. However, the intensified agricultural policies of the USSR (e.g., expansion of the irrigated area and dam construction) caused environmental degradation (e.g., erosion, salinization, and decreasing fertility) in the Amu and Syr River basins [30,75,76]. Thus, after the USSR collapse, each country had to alleviate the environmental damage resulting from the over-extension of the irrigated area by rehabilitating the abandoned croplands [20,21] and plowing up rain-fed crops in predominantly arid climate areas [19,77]. These actions led to a stronger dependency of plant growth on climate variability during the post-USSR period, supporting our finding that the relationship between plants and rainfall became stronger in 1992-2011 for southern Central Asian countries (i.e., Uzbekistan), where the irrigated land area significantly decreased.
Human disturbance has a large impact on vegetation growth and on its relationship with observed climate variations, as shown by the findings of our study. According to the UNEP-WCMC [78], only 9% of the world's drylands are nationally protected areas. Thus, to protect drylands, governments should also attach importance to the sustainable development of dryland ecosystems by controlling socio-economic changes.
Conclusions
The main findings of our study can be summarized as follows: 1. The overall trends of NDVI evidently differed before and after 1992. The vegetation greenness showed an increasing trend for most areas before 1991, but experienced a dramatic decrease in 1992-2011. 2. Climate largely contributed to the greening/browning trends in Central Asia during these two periods, but its influence on greenness varied significantly. The increasing temperature
Figure 2 .
Figure 2. Schematic presentation of the lag effect of precipitation on NDVI. The horizontal bars represent the precipitation time series. The vertical bar represents the NDVI at time t. The impact of lag 0, 1, 2, and 3 on the NDVI is represented by the colors red, yellow, green, and blue, respectively. For example, for lag 0, the monthly NDVI value at time t was correlated with the precipitation accumulated from time t (concurrent month) back to time t − 1 (previous month), t − 2 (previous two months), t − 3 (previous three months), or t − 4 (previous four months).
Figure 4 .
Figure 4. (a) Map of the annual NDVI and (b) spatially averaged NDVI trend for 1982-2011. (c,d) represent the maps of the annual NDVI linear trends for 1982-1991 and 1992-2011, respectively. The areas with significant trends correspond to a p-value < 0.05.
Figure 5 .
Figure 5. Percentage of the area with significantly positive (white) and negative trends (gray) (p-value < 0.05) during 1982-1991 (left columns) and 1992-2011 (right columns) for the (a) dominant land cover types (with the number of pixels below) and (b) croplands in Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan.
Figure 6 .
Figure 6. Inter-annual variations and trends for the seasonal (a) precipitation, (b) temperature, and (c) NDVI in 1982-1991 and 1992-2011 in Central Asia. The boxes represent the values for the second and third quartiles, the horizontal line gives the median, and the whiskers show the lowest/highest values. The hollow stars indicate the temporal trends in precipitation, temperature, and NDVI for the different seasons, while the solid stars show the significant trends (p-value < 0.05).
Figure 7 .
Figure 7(a3,b3) show the partial correlation between NDVI and temperature for the two periods, excluding the effect of precipitation. Temperature had dramatically different impacts for the two periods.
Figure 8 .
Figure 8. Lagged NDVI response to precipitation during 1982-1991 and 1992-2011. Maximum r corresponds to the maximum correlation coefficient for all correlation analyses between NDVI and precipitation in Central Asia. The resulting temporal lag represents the best fit for all time-lag correlation analyses. Areas with non-significant correlations (p-value > 0.05) are shown in dark gray.
|
2015-09-18T23:22:04.000Z
|
2015-03-02T00:00:00.000
|
{
"year": 2015,
"sha1": "60cc4489190338af6e96868789cfa4e6820f15ed",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/7/3/2449/pdf?version=1425292469",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "60cc4489190338af6e96868789cfa4e6820f15ed",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Computer Science",
"Geology"
]
}
|
251196712
|
pes2o/s2orc
|
v3-fos-license
|
Comparison of refilling schemes in the free-surface lattice Boltzmann method
Simulating mobile liquid-gas interfaces with the free-surface lattice Boltzmann method (FSLBM) requires frequent re-initialization of fluid flow information in computational cells that convert from gas to liquid. The corresponding algorithm, here referred to as the refilling scheme, is crucial for the successful application of the FSLBM in terms of accuracy and numerical stability. This study compares five refilling schemes that extract information from the surrounding liquid and interface cells by averaging, extrapolating, or assuming one of the three different equilibrium states. Six numerical experiments were performed, covering a broad spectrum of possible scenarios. These include a standing gravity wave, a rectangular and cylindrical dam break, a Taylor bubble, a drop impact into liquid, and a bubbly plane Poiseuille flow. In some simulations, the averaging, extrapolation, and one equilibrium-based scheme were numerically unstable. Overall, the results have shown that the simplest equilibrium-based scheme should be preferred in terms of numerical stability, computational cost, accuracy, and ease of implementation.
I. INTRODUCTION
The free-surface lattice Boltzmann method (FSLBM) [1] is a numerical model for simulating free-surface flows combining the lattice Boltzmann method (LBM) for hydrodynamics simulations with the volume of fluid (VOF) approach [2] for interface tracking. It successfully simulates applications such as rising bubbles [3], waves [4], dam break scenarios [5], impacts of droplets [6], and electron-beam melting [7]. Free-surface flows relate to immiscible two-fluid flow problems in which the fluid dynamics of the lighter fluid can be neglected. Therefore, the problem reduces to a single-fluid flow with a free boundary. In this article, the lighter fluid will be called gas, and the heavier fluid will be called liquid. The Eulerian computational grid is represented by lattice cells in the LBM. In the FSLBM, each lattice cell is categorized as either gas, liquid, or interface type, with the latter separating the former. In the LBM, information about the flow field is stored in each cell in terms of particle distribution functions (PDFs).
Agreeing with the free-surface definition, in the FSLBM, valid PDFs are only available in liquid and interface cells but not in gas cells. Gas cells are converted to interface cells during the simulation because of the free interface's motion. These cells must be refilled with valid flow field information, that is, their PDFs must be reinitialized. To the authors' knowledge, no other refilling scheme but the one suggested in the original FSLBM from Körner et al. [1] has yet been tested for the FSLBM. However, there have been similar studies about moving solid obstacle cells in the LBM. There, analogously, cells are converted from solid to liquid and must be refilled [8][9][10][11][12][13][14]. Based on the schemes used for this application, five different schemes for refilling cells in the FSLBM are compared in this article.
The manuscript is structured as follows. First, the numerical foundations of the LBM and FSLBM are introduced. Then, the different refilling schemes are presented and discussed in terms of mass conservation and computational costs. The first scheme under investigation initializes the PDFs with their equilibrium constructed with the average fluid velocity and density of non-newly converted neighboring interface and liquid cells [1]. The second and third scheme extend the first one by adding a contribution of the non-equilibrium PDFs [8], or by including information about the local pressure tensor using Grad's moment system [10][11][12]15], respectively. The fourth refilling scheme initializes PDFs with a second-order extrapolation from neighboring cells' PDFs [9]. In contrast to these, in the final scheme tested here, the PDFs are initialized with the average corresponding PDFs from neighboring, non-newly converted interface or liquid cells [14]. Six numerical benchmarks then compare the refilling schemes in terms of accuracy and numerical stability. These benchmarks include a standing gravity wave, the collapse of a rectangular and cylindrical liquid column, the rise of a Taylor bubble, the impact of a droplet into a thin film of liquid, and a bubbly plane Poiseuille flow. Finally, it is concluded that for the FSLBM, the simplest equilibrium-based refilling scheme is preferable in terms of numerical stability, computational costs, accuracy, and ease of implementation.
The source code of the implementation used in this study is freely available as part of the open-source C++ software framework waLBerla [16] (https://www.walberla.net). The version of the source code used in this article is provided in the supplementary material.
II. NUMERICAL METHODS
This section introduces the foundations of the lattice Boltzmann method and its extension to free-surface flows, the free-surface lattice Boltzmann method. The section is based on Section 2 from prior articles [17,18] but repeated here for completeness.
A. Lattice Boltzmann method
The lattice Boltzmann method is a relatively modern approach for simulating computational fluid dynamics. This article only introduces its fundamental aspects. A rigorous introduction to the LBM is available in the literature [19].
The LBM is a discretization of the Boltzmann equation from kinetic gas theory. It describes the evolution of particle distribution functions on a uniformly discretized Cartesian lattice with spacing ∆x ∈ R+. The macroscopic fluid velocity is discretized with the DdQq velocity set in each cell of the lattice, with d ∈ N referring to the lattice's spatial dimension and q ∈ N referring to the number of PDFs per cell. A PDF f_i(x, t) ∈ R with i ∈ {0, 1, . . . , q − 1} describes the probability that there exists a virtual fluid particle population at position x ∈ R^d and time t ∈ R+ traveling with lattice velocity c_i ∈ ∆x/∆t {−1, 0, 1}^d, where ∆t ∈ R+ denotes the length of a discrete time step. The successive steps of collision, also called relaxation, and streaming, also called propagation, form the lattice Boltzmann equation. In the collision step, the collision operator Ω_i(x, t) ∈ R relaxes the PDFs towards an equilibrium state f_i^eq(x, t) and is influenced by external forces F_i(x, t) ∈ R,

f_i*(x, t) = f_i(x, t) + Ω_i(x, t). (1)

In the streaming step, the post-collision PDFs f_i*(x, t) propagate to neighboring cells,

f_i(x + c_i ∆t, t + ∆t) = f_i*(x, t). (2)

For the simulations in this article, the single relaxation time (SRT) collision operator

Ω_i(x, t) = −∆t/τ (f_i(x, t) − f_i^eq(x, t)) + ∆t F_i(x, t)

was used, where τ > ∆t/2 is the relaxation time. The PDFs' equilibrium [20]

f_i^eq(x, t) = w_i ρ (1 + c_i·u / c_s² + (c_i·u)² / (2 c_s⁴) − u·u / (2 c_s²))

can be derived from the Maxwell-Boltzmann distribution and includes the lattice weights w_i ∈ R, the lattice speed of sound c_s² ∈ R+, the macroscopic fluid density ρ ≡ ρ(x, t) ∈ R+, and the macroscopic fluid velocity u ≡ u(x, t) ∈ R^d. In this article, the well-established D2Q9 and D3Q19 lattice models are used. The corresponding lattice weights are available in the literature [19]. The lattice speed of sound for these velocity sets is c_s² = 1/3 (∆x/∆t)². It relates the macroscopic fluid density and pressure via p(x, t) = c_s² ρ(x, t). The PDFs' zeroth- and first-order moments yield the fluid's density and velocity,

ρ(x, t) = Σ_i f_i(x, t),   u(x, t) = 1/ρ (Σ_i c_i f_i(x, t) + ∆t/2 F(x, t)),

with external force F(x, t) ∈ R^d. The fluid's kinematic viscosity is computed from the relaxation time τ, that is, the relaxation rate ω = 1/τ, as ν = c_s² (τ − ∆t/2). In the simulations used in this article, the gravitational force, as part of F_i(x, t) in the LBM collision (1), was modeled according to Guo et al. [21] with

F_i(x, t) = (1 − ∆t/(2τ)) w_i ((c_i − u)/c_s² + (c_i·u) c_i / c_s⁴) · F(x, t),

where again, u ≡ u(x, t) was used.
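For illustration, the toy sketch below evaluates the D2Q9 equilibrium and performs one SRT collision (without forcing) for a single cell in lattice units (∆x = ∆t = 1). It is not the waLBerla implementation used in the paper; all values are invented.

```python
# D2Q9 equilibrium and one SRT collision step for a single cell (lattice units).
# Toy sketch for illustration; not the waLBerla implementation used in the paper.
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0                      # lattice speed of sound squared

def f_eq(rho, u):
    cu = c @ u                       # c_i . u for every direction i
    return w * rho * (1 + cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2))

def srt_collide(f, tau):
    rho = f.sum()
    u = (c.T @ f) / rho
    return f - (f - f_eq(rho, u)) / tau     # post-collision PDFs (no forcing)

f = f_eq(1.0, np.array([0.05, 0.0]))        # start from an equilibrium state
f_post = srt_collide(f, tau=0.8)
print(f_post.sum())                          # density is conserved by the collision
```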
The rectangular and cylindrical dam break simulations in Sections IV B and IV C were performed using a Smagorinsky-type large eddy simulation turbulence model [22,23]. Based on the user-chosen relaxation time τ_0 > ∆t/2, the collision operator's relaxation time is locally adjusted by the model with a contribution τ_t(x, t) ∈ R from the turbulence viscosity,

τ(x, t) = τ_0 + τ_t(x, t).

The turbulence viscosity

ν_t(x, t) = (C_S ∆x_LES)² |S̄(x, t)|

is obtained from the filtered strain rate tensor S̄(x, t), where the filtered mean momentum flux is computed from the momentum fluxes

Q̄_αβ(x, t) = Σ_i c_{i,α} c_{i,β} (f_i(x, t) − f_i^eq(x, t)).

The index notation with α and β refers to the components of a vector or tensor. The momentum fluxes are given by the second-order moments of the PDFs' non-equilibrium parts. The turbulence model's contribution to the relaxation time is then [22]

τ_t(x, t) = 1/2 (√(τ_0² + 2√2 (C_S ∆x_LES)² √(Q̄_αβ Q̄_αβ) / (ρ c_s⁴)) − τ_0),

where ∆x_LES is the filter length and C_S is the Smagorinsky constant. For the simulations in this article, these parameters were chosen as ∆x_LES = ∆x and C_S = 0.1, as suggested by Yu et al. [23].
At solid obstacles, a no-slip boundary condition was realized using the bounce-back approach, where PDFs streaming into solid obstacle cells are reflected back in the reverse direction. The PDF's original direction with index i is reversed, denoted as ī, with lattice velocity c_ī = −c_i [19].
Free-slip boundary conditions are modeled similarly, with the PDFs being reflected specularly. In the resulting lattice velocity c_j, the normal velocity component of the incoming velocity c_i is reversed with c_{j,n} = −c_{i,n} at a free-slip boundary [19].
In the remainder of this article, ∆x = 1 and ∆t = 1 are assumed as this is common practice in the LBM [19]. All quantities are denoted in the LBM unit system if not explicitly stated otherwise. The LBM reference density ρ 0 = 1 and pressure p 0 = c 2 s ρ 0 = 1/3 were set in all simulations. The relaxation time τ or relaxation rate ω specified for the numerical experiments in Section IV refer to the constant user-chosen values that the Smagorinsky turbulence model did not yet adjust.
B. Free-surface lattice Boltzmann method
The free-surface lattice Boltzmann method as presented by Körner et al. [1] is used in this article. The FSLBM extends the LBM by simulating the interface between two immiscible fluids. It assumes that the heavier fluid governs the entire flow dynamics of the system with the lighter fluid's influence being negligible. Consequently, the immiscible two-fluid flow problem reduces to a single-fluid flow with a free boundary. Therefore, the hydrodynamics of the lighter fluid are not simulated in the FSLBM. A simplification such as this is valid if the fluids' densities and viscosities differ substantially, for example as in liquid-gas flows. In what follows, the heavier fluid is called liquid, whereas the lighter fluid is called gas.
The free interface between the liquid and gas is treated as in the VOF approach [2]. A fill level ϕ(x, t) is assigned to each lattice cell, acting as an indicator that describes the affiliation to one of the phases. Cells can be of liquid (ϕ(x, t) = 1), gas (ϕ(x, t) = 0), or interface type (ϕ(x, t) ∈ (0, 1)). A sharp and closed layer of interface cells separates liquid and gas cells. Interface and liquid cells are treated like regular LBM cells, which contain PDFs and participate in the LBM collision (1) and streaming (2). In contrast, conforming with the free-surface definition, gas cells neither contain PDFs nor participate in the LBM update.
The liquid mass of each cell is determined by the cell's fill level ϕ(x, t), fluid density ρ(x, t), and volume ∆x^3. Note that in two-dimensional simulations, the cell's volume is also given by ∆x^3; the domain is then assumed to have an extension of a single lattice cell in the third direction. The mass flux between an interface cell and cells of other types is computed directly from the LBM streaming step. The simplicity of this mass flux computation is an advantage of the FSLBM when compared to non-LBM-based VOF approaches, in which the advection of mass commonly requires solving a partial differential equation that describes the evolution of the mass.
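A sketch of the usual formulation from Körner et al. [1], to which the relations labeled (14) and (15) in the text presumably correspond:

m(x, t) = ϕ(x, t) ρ(x, t) ∆x^3,
∆m_i(x, t) = f_ī(x + c_i ∆t, t) − f_i(x, t) for a liquid neighbor at x + c_i ∆t,
∆m_i(x, t) = [ f_ī(x + c_i ∆t, t) − f_i(x, t) ] · (ϕ(x + c_i ∆t, t) + ϕ(x, t))/2 for an interface neighbor,

with no mass exchanged towards gas cells.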
In the implementation used here, interface cells are not immediately converted to gas or liquid when their fill level becomes ϕ(x, t) = 0 or ϕ(x, t) = 1, respectively. Instead, they are converted with respect to the heuristically chosen threshold ε_ϕ = 10^−2 that prevents oscillatory conversions [24]. Consequently, an interface cell is converted to gas if its fill level drops below ϕ(x, t) < 0 − ε_ϕ, or to liquid if it rises above ϕ(x, t) > 1 + ε_ϕ.
When an interface cell converts to gas or liquid, surrounding gas or liquid cells may convert to interface cells to maintain a closed interface layer. It is important to point out that neither liquid nor gas cells can directly convert into one another. Instead, both cell types can only convert to interface cells. The separation of liquid and gas is prioritized in case of conflicting conversions. When converting an interface cell with fill level ϕ conv (x, t) to gas or liquid, the fill level is forcefully set to ϕ(x, t) = 0 or ϕ(x, t) = 1. The resulting excess mass is distributed evenly among all surrounding interface cells to ensure mass conservation.
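The conversion and redistribution logic just described can be illustrated with a short sketch. The following is a minimal, hypothetical Python illustration of the hysteresis threshold and the excess-mass redistribution; the data layout and helper names are assumptions, and this is not the waLBerla implementation referenced in the supplementary material.

```python
# Minimal, hypothetical sketch of FSLBM cell conversion with a hysteresis
# threshold and even redistribution of excess mass among neighboring interface
# cells. Data layout and names are assumptions, not the waLBerla implementation.

EPS_PHI = 1e-2  # heuristic threshold preventing oscillatory conversions


def convert_cells(cells, neighbors):
    """cells: dict cell_id -> {'type', 'phi', 'rho', 'mass'}
    neighbors: dict cell_id -> list of neighboring cell_ids."""
    for cid, c in cells.items():
        if c['type'] != 'interface':
            continue
        if c['phi'] < -EPS_PHI:            # emptied: convert to gas
            excess = c['mass']             # leftover (negative) mass
            c.update(type='gas', phi=0.0, mass=0.0)
        elif c['phi'] > 1.0 + EPS_PHI:     # filled: convert to liquid
            excess = c['mass'] - c['rho']  # mass exceeding a completely filled cell
            c.update(type='liquid', phi=1.0, mass=c['rho'])
        else:
            continue
        # distribute the excess mass evenly among surrounding interface cells;
        # maintaining a closed interface layer (converting gas/liquid neighbors
        # to interface cells) is omitted in this sketch
        targets = [n for n in neighbors[cid] if cells[n]['type'] == 'interface']
        for n in targets:  # if no interface neighbor exists, real codes handle this case separately
            cells[n]['mass'] += excess / len(targets)
            cells[n]['phi'] = cells[n]['mass'] / cells[n]['rho']
```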
During a simulation, unnecessary interface cells may appear, which do not have neighboring gas or liquid cells. In the implementation used in this study, these cells are forced to fill or empty by adjusting the mass flux (15), as suggested by Thürey [25].
The cells' PDFs are not modified when cells are converted from interface to liquid or vice versa. If a cell converts from interface to gas type, the cell's PDFs need not be considered further and can therefore be invalidated. Note that this does not affect mass conservation as any excess mass (16) will be distributed accordingly. However, no valid PDF information is available when cells convert from gas to interface type. The PDFs of these cells must be initialized with one of the schemes presented in Section III.
The LBM collision (1) and streaming (2) are performed in all interface and liquid cells.
Körner et al. [1] proposed to weight the gravitational acceleration with an interface cell's fill level in the LBM collision. Conforming with the work of other authors [24,26,27], the implementation used here did not weight the gravitational force with the fill level.
The macroscopic boundary condition at the free surface is given in [26,28], where p_G(x, t) is the gas pressure, p_L(x, t) is the Laplace pressure, t_1(x, t) and t_2(x, t) are interface-tangent vectors, and n(x, t) is the interface-normal vector. As shown by Bogner et al. [29], this macroscopic boundary condition is approximated by the LBM anti-bounce-back pressure boundary condition that Körner et al. [1] suggested using. In this equation, u ≡ u(x, t) is the interface cell's velocity and ρ_G ≡ ρ_G(x, t) = p_G(x, t)/c_s^2 is the gas density. Other formulations of the boundary condition have been investigated in the literature [29,30]. The free-surface boundary condition (18) is applied to all PDFs streaming from the gas towards the interface, as these PDFs are not available. However, in the original FSLBM [1], this boundary condition is not only used to reconstruct missing PDFs; it is also used to reconstruct some PDFs that are already available. It should be pointed out that this approach overwrites existing information about the flow field. In the implementation used in the study presented here, no information is overwritten, and only missing PDFs are reconstructed at the free boundary.
This scheme was found to be of superior accuracy [18]. Note that the free-surface boundary condition (18) must also be applied at free-slip boundaries for specularly reflected PDFs that originate from gas cells.
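The anti-bounce-back reconstruction referred to as Equation (18) is commonly written as (a sketch of the form introduced by Körner et al. [1], assumed here)

f_ī(x, t + ∆t) = f_i^eq(ρ_G, u) + f_ī^eq(ρ_G, u) − f_i(x, t),

applied to every direction i for which the PDF would have to stream from a gas cell into the interface cell.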
The gas pressure incorporates the volume pressure p_V(t) and the Laplace pressure p_L(x, t). The volume pressure stays constant in the case of atmospheric pressure, or it results from changes in the volume V(t) of an enclosed gas volume, that is, a bubble. The Laplace pressure is defined by the surface tension σ ∈ R+ and the interface curvature κ(x, t) ∈ R. As suggested by Bogner et al. [31], a finite difference approximation of the curvature was used in the simulations shown in this article, where n̂(x, t) ∈ R^d is the normalized interface normal vector. The interface normal was computed with central finite differences according to Parker and Youngs [32]. Near obstacle cells, the computation of the normal is modified as proposed by Donath [27]. This modification narrows the access pattern of the finite differences such that obstacle cells are excluded from the computation. A bubble model algorithm is used to keep track of the bubbles' volume pressure [24,33]. This is required because bubbles might coalesce or segment during the simulation.
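Assuming isothermal behavior of the enclosed gas and the usual sign conventions (both are assumptions here), the two contributions can be sketched as

p_V(t) V(t) = p_V(t_0) V(t_0), that is, the volume pressure follows the bubble volume V(t),
p_L(x, t) = 2 σ κ(x, t) in 3D (σ κ in 2D), with κ(x, t) ≈ −∇ · n̂(x, t)

evaluated by finite differences; the sign of κ depends on the orientation convention of n̂.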
III. REFILLING SCHEMES
In the FSLBM, gas cells do not contain valid PDF information as described in Section II B.
Therefore, when a gas cell converts to an interface cell, its PDFs must be reinitialized. This reinitialization is commonly referred to as refilling. While refilling has not yet been studied in the context of the FSLBM, it has been investigated for moving solid obstacle cells [8][9][10][11][12][13].
Analogously to the gas cells in the FSLBM, solid cells do not carry valid PDFs, such that these PDFs must be refilled after conversion. In what follows, the schemes developed for refilling moving solid cells are introduced and adapted to the FSLBM. Then, their influence on the conservation of mass and their computational costs are briefly discussed.
A. Scheme definitions
In the FSLBM proposed by Körner et al. [1], PDFs are refilled with their equilibrium (4). The average local fluid density ρ̄ ≡ ρ̄(x, t) and velocity ū ≡ ū(x, t) are computed from all surrounding, non-newly created interface and liquid cells. Note that this is in contrast to the same scheme applied for solid obstacle cells, where the velocity of the solid object is used instead [8,9,13].
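In formulas, this EQ refilling can be sketched as

EQ: f_i(x, t) = f_i^eq(ρ̄, ū) for all i ∈ {0, 1, . . . , q − 1},

which is presumably what the equation labeled (22) in the source expresses.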
The EQ scheme can be extended by adding the non-equilibrium contribution of a neighboring fluid cell (EQ+NEQ) [8]. For this purpose, the lattice direction c_i^n with i ∈ {0, 1, . . . , q − 1} is used to access the cell that corresponds best to the local interface normal n ≡ n(x, t), that is, the direction for which the scalar product c_i · n is the largest. Note again that only non-newly created neighboring interface and liquid cells are valid directions for c_i^n. Another variant (GEQ) extends the EQ scheme with information from the local pressure tensor [10-12] using Grad's moment system [15], where δ_αβ is the Kronecker delta. In the implementation used in this study, the velocity derivatives are approximated by second-order finite differences. If not enough neighboring fluid cells are available, the derivatives are approximated by first-order backward or forward finite differences.
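Sketches of these two extensions in the form typically used for refilling at moving boundaries (prefactors and details may differ from the exact expressions in [8,10-12]):

EQ+NEQ: f_i(x, t) = f_i^eq(ρ̄, ū) + [ f_i(x + c^n ∆t, t) − f_i^eq(x + c^n ∆t, t) ],
GEQ: f_i(x, t) = f_i^eq(ρ̄, ū) + (w_i/(2 c_s^4)) (c_iα c_iβ − c_s^2 δ_αβ) Π^neq_αβ,

where c^n is the valid lattice direction closest to the interface normal and Π^neq_αβ ≈ −2 ρ̄ c_s^2 τ S̄_αβ is the non-equilibrium momentum flux tensor estimated from finite-difference velocity gradients.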
Instead of using the PDFs' equilibrium, Lallemand and Luo [9] suggested refilling based on a second-order extrapolation scheme, in which the PDFs are extrapolated along the lattice direction c_i^n closest to the surface normal. In the implementations used for this study, a corresponding lower-order extrapolation is used if the number of neighboring cells in direction c_i^n is not sufficient. If no neighboring cell is available in this direction, the EQ scheme is employed as a fallback. A first-order extrapolation scheme led to the same numerical instabilities as will be discussed in Section IV for the second-order scheme. These numerical instabilities were not observed with a zeroth-order extrapolation, that is, copying PDFs from a direct neighboring cell. However, this zeroth-order extrapolation scheme was less accurate than the other refilling schemes, and it even led to physically implausible results in some tests. For brevity, these results are not included in this article.
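A sketch of the quadratic extrapolation along c^n ≡ c_i^n, as commonly used for this scheme (the exact form in [9] may differ in detail):

EXT: f_i(x, t) = 3 f_i(x + c^n ∆t, t) − 3 f_i(x + 2 c^n ∆t, t) + f_i(x + 3 c^n ∆t, t).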
The AVG scheme uses the average f̄_i(x, t) of the identically oriented PDFs of the surrounding, non-newly created interface and liquid cells. Such an averaging approach has already been used by Fang et al. [14] for solid obstacle cells. However, Fang et al.
averaged the PDFs resulting from a higher-order extrapolation scheme, including a larger neighborhood of cells. In this work, only the PDFs from direct neighboring cells are used.
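In formulas, the variant used here can be sketched as

AVG: f_i(x, t) = (1/|N(x)|) Σ_{y ∈ N(x)} f_i(y, t),

where N(x) is the set of non-newly created interface and liquid cells directly neighboring x.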
B. Effect on mass conservation
It is important to point out that the choice of the refilling scheme does not affect the system's total mass and its conservation. When a cell is refilled, it has converted from gas to interface and is initially empty with fill level ϕ(x, t) = 0. Therefore, the refilled cell's mass (14) is initially m(x, t) = 0 and is independent of the cell's refilled PDFs.
In the following time steps, the interface cell's mass then changes with the mass fluxes ∆m_i(x, t) of Equation (15). If the cell x + c_i ∆t is also an interface cell, the same ∆m_i(x, t) affects this interface cell's change in mass (29). If the cell x + c_i ∆t is of liquid type, the corresponding ∆m_i(x, t) is implicitly considered in the liquid cell's density ρ(x, t) and, therefore, in its mass, because the mass flux ∆m_i(x, t) is computed directly from the balance of the streaming PDFs. Consequently, ∆m_i(x, t) is present in the change of the PDFs' values. The density ρ(x, t) is obtained by taking the PDFs' zeroth-order moment (5).
Since the zeroth-order moment is the sum of the cell's PDFs, ∆m i (x, t) leads to an according change of ρ(x, t).
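For completeness, the mass update presumably labeled (29) in the source can be sketched as

m(x, t + ∆t) = m(x, t) + Σ_i ∆m_i(x, t),

with the sum running over all lattice directions i that point to valid interface or liquid neighbors.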
C. Computational costs
The computational costs of the individual refilling schemes strongly depend on the implementation of the FSLBM and the test case simulated. For instance, the EQ, EQ+NEQ, and GEQ schemes require the neighboring cells' density ρ(x, t) and velocity u(x, t). However, these macroscopic quantities are not necessarily available in any LBM implementation but might have to be computed only when required. The LBM algorithm of collision (1) and streaming (2) works on PDFs but does not involve ρ(x, t) or u(x, t). Therefore, depending on the implementation, these values might be computed explicitly for the refilling schemes via the moments (5) and (6), increasing the computational costs.
As another example, the EQ+NEQ and EXT schemes involve the interface normal n.
The interface curvature κ(x, t) must be computed if the Laplace pressure (21) is relevant in a test case. The curvature computation algorithm employed in this work also uses n so that no additional computations may be required to obtain n when refilling cells.
It should also be pointed out that the computational costs are affected by the refilled cell's neighboring cells, as only non-newly created interface and liquid cells are considered by the refilling schemes.
These examples show that it is not generally possible to state the specific computational cost of each refilling scheme. Only the EQ, EQ+NEQ, and GEQ refilling schemes can be put into perspective, as the EQ+NEQ and GEQ schemes build upon f_i^eq(ρ̄, ū) as computed by the EQ scheme but add additional computations. Therefore, the EQ+NEQ and GEQ schemes are computationally more expensive than the EQ scheme.
Note that although cell conversions appear frequently, the computational costs for refilling might be insignificant compared to the costs of other algorithmic parts in the FSLBM.
However, this likewise strongly depends on the test case under investigation.
IV. NUMERICAL EXPERIMENTS
The refilling schemes introduced in Section III are compared in six numerical experiments in this section. The chosen test cases are largely similar to those suggested in prior work [17,18]. Therefore, the corresponding descriptions of the test cases, simulation setups, and figures are based on those from these articles but are repeated here for completeness. The numerical benchmarks include the simulation of a standing gravity wave, the collapse of a rectangular and a cylindrical liquid column, the rise of a Taylor bubble, the impact of a drop into a thin film of liquid, and a bubbly plane Poiseuille flow. All simulations were performed with double-precision floating-point arithmetic.
A. Gravity wave
A gravity wave is a standing wave oscillating at the phase boundary between two immiscible fluids. Surface tension forces are neglected, and gravitational forces entirely govern the wave's flow dynamics. The analytical model [34,35] is used as reference data for assessing the simulation results.
Simulation setup
As illustrated in Figure 1, a gravity wave of wavelength L was simulated in a two-dimensional setup; the simulation parameters were scaled with the domain resolution according to what is commonly referred to as diffusive scaling in the LBM [19]. The system is characterized by the Reynolds number, where ω_0 is the angular frequency of the wave and ν is the kinematic fluid viscosity. Owing to the gravitational acceleration g, the initial profile evolved into a standing wave oscillating around the mean liquid height d. It was dampened by viscous forces. The dimensionless surface elevation a*(x, t) := a(x, t)/a_0 and the non-dimensionalized time t* := tω_0 were monitored at x = 0 every t* = 0.01. The simulations were performed until t* = 40, which was found to be sufficient for the wave's motion to decay in the simulations.
Analytical model
The analytical model for the gravity wave is derived by linearization of the continuity and Euler equations with a free-surface boundary condition [34]. The standing wave's amplitude is obtained assuming an inviscid fluid with zero damping, such that a D (t) = a 0 . Viscous damping is considered by [35] a D (t) = a 0 e −2νk 2 t .
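Combining both parts, the surface elevation at x = 0 can be sketched in the standard linear-wave-theory form (assumed here, as the article's explicit expression was not reproduced above):

a(0, t) = a_0 cos(ω_0 t) e^{−2νk^2 t}, with k = 2π/L and ω_0 = sqrt(g k tanh(k d)).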
The analytical model applies if k|a_0| ≪ 1 and k|a_0| ≪ kd [34], which is true in this study.

3. Results and discussion

Figure 2 shows the gravity wave's non-dimensionalized amplitude a*(0, t*) over time t* as simulated with L = 800 and the refilling schemes from Section III. All refilling schemes but the EXT generally agreed well with the analytical model. The other refilling schemes showed only minor differences; most notable is a slight overestimation of the wave's first positive amplitude when using the AVG scheme. The duration of the gravity wave's half-period T*/2 is shown in Figure 3. The EXT refilling scheme had the largest deviations when compared to the analytical model, while all other refilling schemes did not follow a clear trend. Figure 4 shows the damping of the gravity wave's oscillations. Although the differences are relatively small, the EQ+NEQ scheme could arguably be considered the most accurate in this comparison.
The simulation results presented here have converged, as illustrated in Figure 26. As pointed out in prior work [17], the FSLBM can only predict the wave's motion sufficiently well if the amplitude spans at least one, but preferably multiple, cells. This is also shown in Figure 26, where fewer wave periods could be simulated when using lower computational domain resolutions.
B. Rectangular dam break
In the rectangular dam break benchmark case, a rectangular liquid column collapses and spreads at the bottom surface. This test case is regularly used as a numerical benchmark to validate free-surface flow simulations [5,36,37]. The experiments from Martin and Moyce [38] were used as reference data for the simulations in this section.
Simulation setup
The simulation setup resembled that of the reference experiments [38] and is illustrated in Figure 5. With the gravitational acceleration acting in the negative y-direction, the liquid was initialized with hydrostatic pressure. Therefore, the LBM pressure at y = H was initially equal to the constant atmospheric gas pressure p_V(t) = p_0.
Wetting effects were not considered, and free-slip boundary conditions were set at all domain walls. Conforming with diffusive scaling, the relaxation rate ω = 1.9995 was kept constant for all tested computational domain resolutions. The simulations were performed using the turbulence model from Section II A with Smagorinsky constant C_S = 0.1 [23]. Two dimensionless numbers describe the fluid mechanics of the system. The Galilei number, which involves the kinematic viscosity ν, relates gravitational to viscous forces. The Bond number defines the relation between gravitational and surface tension forces; it involves the surface tension σ and the density difference between the liquid and gas phases, ∆ρ = ρ − ρ_G.
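With the initial dam width W as the characteristic length, common definitions of these two numbers are (assumed here; some authors use the square root of the Galilei number given below):

Ga = g W^3/ν^2, Bo = ∆ρ g W^2/σ.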
Note that in a free-surface system, the gas phase density is assumed to be zero so that ∆ρ = ρ.
The reference experiments [38] were performed with liquid water, but the authors did not specify the exact fluid properties.

2. Results and discussion

Figure 6 compares the simulated dam break with the experimental measurements [38] in terms of the non-dimensionalized width w*(t*) and height h*(t*). The simulations were performed with a computational domain resolution, that is, an initial dam width, of W = 200. The simulations with the EXT refilling scheme became numerically unstable because the macroscopic velocity gradually increased after refilling and eventually locally exceeded the lattice speed of sound c_s, as illustrated in Figure 8. Exceeding the lattice speed of sound is often an effect of a scheme being numerically unstable in the LBM [19]. These numerical instabilities eventually led to the collapse of the simulation. Note that these instabilities were not an immediate consequence of a certain cell being refilled with the EXT scheme.
Instead, high macroscopic velocities appeared in the later course of the simulation. All other refilling schemes produced results of similar accuracy and agreed with the trend of the experimental observations for h * (t * ). The simulated dam width w * (t * ) generally agreed better with the experimental measurements than the simulated height. However, the choice of the refilling scheme had more effect on w * (t * ). The AVG scheme was the most accurate, while the EQ and GEQ schemes were less accurate. Although the EQ+NEQ scheme also produced accurate results, it temporarily deviated more significantly from the experimental data at t * ≈ 3.7. Except for the unstable simulation with the EXT refilling scheme, the simulated dam contours at t * = 3 are visualized in Figure 7. While there are noticeable differences between the schemes, no accuracy assessment could be made due to the lack of suitable experimental reference data.
As shown in Figure 27, the simulation results presented in this section were converged in terms of computational domain resolution. A resolution equivalent to W = 50 was sufficient to reasonably agree with the experimental data.
C. Cylindrical dam break
In this section, the simulation setup and results for a cylindrical dam break are presented.
The numerical simulations resemble the laboratory experiments from Martin and Moyce [38].
This test case was chosen to evaluate whether the model's isotropy is affected by the choice of the refilling scheme.
Simulation setup
As illustrated in Figure 9, a cylindrical liquid column of diameter D was initialized; it collapsed due to the gravitational acceleration g acting in the negative z-direction. The liquid column's radius r(t) was monitored during the simulation. It was obtained by finding the distance of the liquid front to the column's initial center of symmetry. The liquid column's collapse cannot be assumed to be perfectly symmetric. Consequently, r(t) was computed for every interface cell detected by a seed-fill algorithm [40]. The starting point of this algorithm was set to an arbitrary domain boundary. With this configuration, the algorithm only detected the outermost interface cells, that is, only interface cells at the spreading liquid's front. A statistical sample was used to evaluate r(t) by computing the maximum, minimum, and mean values of r(t) every t* = 0.01. The radius r*(t) := 2r(t)/D and the time t* := t √(4g/D) were non-dimensionalized as suggested in the reference data [38].
Conforming with the experimental data [38], the simulations were stopped at r*_max(t*) ≥ 4.33, where r*_max(t*) is the non-dimensionalized maximum liquid front radius.

Results and discussion

Large error bars in the evaluated radius indicate a deviation from rotational symmetry. As for the breaking dam test case in Section IV B, the EXT refilling scheme was numerically unstable. All other refilling schemes generally agreed well with the experimental data [38]. Although the error bars for the EQ+NEQ scheme indicate asymmetry, the liquid spread's front remained qualitatively symmetric, as shown in Figure 11. However, tiny droplets detached from the main liquid spread, and the evaluation algorithm detected these droplets as part of the liquid surge front.
The EQ and GEQ schemes are approximately of equal accuracy, while being less accurate than the AVG scheme. The shape of the collapsing liquid column at time t * = 4 is visualized in Figure 11. The solid black lines indicate the initial center of symmetry. In general, all refilling schemes remained rotationally symmetric and did not move from their initial center of symmetry.
As illustrated in Figure 28, the simulation results presented here were converged in terms of the computational domain resolution. In Figure 11, the solid black line indicates the column's initial center of symmetry; the simulation with the EXT refilling scheme was numerically unstable and is not included there. All other schemes kept their initial center of symmetry and remained rotationally symmetric. There are slight differences between the individual refilling schemes, but owing to the lack of reference data, no accuracy assessment could be made in terms of shape.
D. Taylor bubble
A Taylor bubble is a gas bubble that rises in a cylindrical tube filled with a stagnant liquid due to buoyancy forces. It has an elongated shape and a round leading edge with a length of multiple times its diameter. The simulation results were compared to the experimental data from Bugg and Saad [41].
Simulation setup
The simulation setup is illustrated in Figure 12 and resembled that of the reference experiments [41].

Results and discussion

The Reynolds number of the rising bubble, which involves the kinematic viscosity ν, was evaluated for different computational domain resolutions. The bubble's rise velocity u was computed from the bubble's center of mass in the z-direction at times t* = 10 and t* = 15. The simulations generally agreed well with Re from the reference data [41], and there were only minor differences between the refilling schemes. Similarly, as illustrated in Figure 13 for t* = 15, the choice of the refilling scheme had almost no effect on the shape of the simulated Taylor bubble. Figures 15 to 17 compare the non-dimensionalized axial u*_a = u_a/u and radial u*_r = u_r/u velocities at the locations defined in Figure 14. As for the Taylor bubble's shape, the refilling schemes led to only small differences in the velocity profiles. Only the radial velocity u*_r along the radial line at 0.111D from the Taylor bubble's front was arguably predicted more accurately by the EQ and GEQ schemes, as shown in Figure 16. As depicted in Table I and Figure 29, the simulation results shown here have sufficiently converged in terms of the computational grid resolution.

Figure 16: Simulated non-dimensionalized axial velocity u*_a (a) and radial velocity u*_r (b) along the radial monitoring line at 0.111D from the Taylor bubble's front (see Figure 14). The comparison with the experimental data [41] is drawn in terms of the non-dimensionalized radial location r* at dimensionless time t* = 15 with tube diameter D = 128 lattice cells.

Figure 17: Simulated non-dimensionalized axial velocity u*_a (a) and radial velocity u*_r (b) along the radial monitoring line at −0.504D from the Taylor bubble's front (see Figure 14). The comparison with the experimental data [41] is drawn in terms of the non-dimensionalized radial location r* at dimensionless time t* = 15 with tube diameter D = 128 lattice cells.
E. Drop impact
In the fifth test case, the vertical impact of a drop into a pool of liquid was simulated.
As no quantitative experimental measurements are given for the reference experiments from Wang and Chen [42], only a qualitative comparison with photographs could be made.
Simulation setup
As illustrated in Figure 18, a spherical droplet with a diameter of D = 80 lattice cells was initialized in a three-dimensional computational domain of size 10D × 10D × 5D (x-, y-, z-direction). The droplet was initially located at the surface of a thin liquid film of height 0.5D, with impact velocity U in the negative z-direction. The gravitational acceleration g also acted in the negative z-direction. Accordingly, hydrostatic pressure was initialized such that the pressure at the pool's surface was equal to the constant atmospheric volumetric gas pressure p_V(t) = p_0. The walls in the x- and y-direction were periodic, and no-slip boundary conditions were set at the top and bottom domain walls in the z-direction. The relaxation rate was chosen as ω = 1.989. The droplet's impact is described by the Weber number, which relates inertial and surface tension forces, and by the Ohnesorge number, which relates viscous to inertial and surface tension forces. These dimensionless numbers include the surface tension σ, the dynamic viscosity µ, and the liquid density ρ. Assuming g = 9.81 m/s^2, and using ρ = 1200 kg/m^3 and µ = 0.022 kg/(m·s) [42], the Bond number Bo = 3.18 with characteristic length D closes the definition of the system. The non-dimensionalized time t* := tU/D is offset by t* = 0.16 [6] to allow a comparison with the numerical simulations performed in this study.
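The standard definitions of these dimensionless groups, with the droplet diameter D as the characteristic length (assumed here, since the article's explicit expressions were not reproduced above), are

We = ρ U^2 D/σ, Oh = µ/sqrt(ρ σ D), Bo = ρ g D^2/σ.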
Results and discussion
The simulated drop impact, that is, the splash crown formation at t * = 12, is qualitatively compared with experimental results in Figure 19. The solid black line indicates the crown's contour in a central cross-section, oriented with the normal in the x-direction. There was no scale bar provided for the photograph of the laboratory experiment [42]. Therefore, the simulations performed here could only be validated qualitatively. As in the dam break simulations in Sections IV B and IV C, the EXT refilling scheme became numerically unstable which led to too high macroscopic velocities. The EQ+NEQ scheme was subject to numerical instabilities for the same reason. Qualitatively plausible results could be obtained with all other refilling schemes. However, with the GEQ scheme, the droplets detaching from the crown's top formed thin and long threads of liquid. In contrast, in the photograph of the experiment, the detaching droplets rather form thicker and shorter liquid threads that then detach as spherical droplets. This kind of crown formation is arguably resembled best by the EQ scheme.
F. Bubbly plane Poiseuille flow
This final benchmark case is inspired by Peng et al. [8], where a particle-laden turbulent channel flow was simulated. The choice of the refilling scheme for solid obstacles affected the particle dynamics, that is, the particles' position during the simulation. In the study presented here, a similar test case is used with randomly initialized spherical bubbles rather than solid particles. The flow is force-driven between two parallel plates, also called plane Poiseuille flow.
Simulation setup
A three-dimensional domain of size 2L × L × L (x-, y-, z-direction) with a channel width of L = 100 lattice cells was filled with liquid, as illustrated in Figure 23. As shown in Figure 24, there were 381 randomly distributed spherical bubbles with a diameter of 0.1L in the channel, leading to a gas volume fraction of approximately 0.1. The bubbles were arranged so that their centers were not closer than 0.05L to the domain wall. The random distribution was chosen once and kept the same for all simulations; therefore, the simulation with any refilling scheme started from an identical initial situation. The domain's walls were periodic in the x- and y-direction and set to no-slip in the z-direction. With the force F acting in the x-direction, the fluid velocity profile took a parabolic shape in the z-direction with zero velocity at the no-slip domain walls. This velocity profile is commonly referred to as plane Poiseuille flow and is given analytically in [43], where µ is the dynamic viscosity of the liquid. The setup is defined by the Morton number Mo = 10^−5 and by the Reynolds number Re = 10^4, with characteristic length L and analytical maximum velocity u_max = u(0.5L). The relaxation rate was chosen as ω = 1.989, and the time t was non-dimensionalized with t* = t u_max/L.
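Assuming that F denotes the driving force per unit volume, the plane Poiseuille profile referenced above takes the textbook form

u(z) = (F/(2µ)) z (L − z), with u_max = u(0.5L) = F L^2/(8µ),

which vanishes at the no-slip walls z = 0 and z = L; this standard form is assumed here.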
Results and discussion
As for the drop impact test case in Section IV E, the simulations with the EXT and EQ+NEQ refilling schemes were numerically unstable. The simulation results for the AVG, EQ, and GEQ schemes at t* = 4 are shown in Figure 25. The bubbles gathered and coalesced in the center of the domain, where the velocity was the highest. Although all simulations started from the same initial situation, the refilling schemes led to noticeable differences in the bubble dynamics, that is, in the bubbles' positions. This observation agrees with those made by Peng et al. [8], where a turbulent particle-laden channel flow was simulated with different refilling schemes for solid obstacles. However, due to the lack of experimental reference data, the refilling schemes' accuracy could not be assessed in this test case.
V. CONCLUSIONS
This study has compared different refilling schemes for the free-surface lattice Boltzmann method [1] (FSLBM). The FSLBM distinguishes between cells belonging to the heavier fluid, cells belonging to the lighter fluid, and the interface located between them; these cells are here referred to as liquid, gas, and interface cells, respectively. The gas phase is neglected and the interface is treated as a free surface. Consequently, gas cells neither participate in the LBM flow simulation nor carry valid information about the flow field. In the LBM, such information is stored in terms of particle distribution functions (PDFs) in each lattice cell.
Because of the free interface's motion, gas cells regularly convert to interface cells. As the hydrodynamic LBM simulations are performed in interface cells, those cells' PDFs must be initialized with valid information during the conversion. This initialization of PDFs is commonly referred to as refilling. The first refilling scheme under investigation was the one suggested in the original FSLBM as introduced by Körner et al. [1]. In this model, PDFs are initialized according to their equilibrium (EQ), which is constructed using the average density and velocity from the neighboring, non-newly converted interface and liquid cells. This scheme was extended by adding the contribution of the neighboring cells' non-equilibrium PDFs (EQ+NEQ) [8], or by including information about the local pressure tensor using Grad's moment system (GEQ) [10][11][12]15]. Additionally, the PDFs could also be extrapolated (EXT) from neighboring cells' PDFs [9] or were taken as the average (AVG) of neighboring, non-newly converted interface and liquid cells' PDFs [14].
These schemes' accuracy and stability properties were investigated in six numerical experiments, with reference data for five of them available as either analytical models or laboratory measurements from the literature. In the experiments conducted here, the EXT and EQ+NEQ schemes often led to numerical instabilities. These instabilities were caused by the macroscopic velocity exceeding the lattice speed of sound. The AVG refilling scheme was also unstable in the cylindrical dam break test case at one of the computational domain resolutions used in the convergence study. In contrast, the EQ and GEQ schemes were numerically stable in all simulations performed here. Although the AVG scheme was more accurate than the EQ and GEQ schemes in the dam break test cases, it slightly overestimated the gravity wave's amplitude in the first period. The EQ and GEQ schemes' simulation results hardly differed when compared in terms of the quantitative reference data available in the literature. Nevertheless, qualitative differences between these schemes could be observed in the dam break, drop impact, and bubbly Poiseuille flow test cases. Because appropriate reference data are lacking, a final accuracy comparison could only be made tentatively, based on visual comparison. In the drop impact benchmark, the EQ scheme arguably seemed favorable over the GEQ scheme when compared qualitatively. Additionally, the GEQ scheme is computationally more expensive than the EQ scheme. In summary, for the numerical simulations performed here, the EQ scheme should be preferred in the FSLBM with respect to ease of implementation, computational costs, numerical stability, and accuracy.
SUPPLEMENTARY MATERIAL
The following supplementary material is available as part of the online article: an archive of the C++ source code used in this study. It is part of the software framework waLBerla [16] (version used here: https://i10git.cs.fau.de/walberla/walberla/-/tree/01a28162ae1aacf7b96152c9f886ce54cc7f53ff). The ready-to-run simulation setups for all numerical experiments performed in this article are included in the directory apps/showcases/FreeSurface.
Method of calculating the parameters of the mountain pressure epure
An elevated bearing pressure, exceeding the static stresses normal to the seam, forms near the exposed part of the coal seam. The loading of the near-face part of the seam is produced by tangential stresses from contact friction between the seam and the lateral enclosing rocks, which decay linearly from the face into the massif in accordance with the principle of Saint-Venant. The resulting reference (abutment) rock pressure is described by an epure in the form of a convex quadratic function whose initial value equals the normal stress at the top of the bottom-hole fracture and whose final value equals the rock pressure in the zone of the intact massif. Based on this position, a method has been developed for determining the vertical normal stress at the top of the bottom-hole fracture, the length of the epure, and the distance from the face to the maximum of the reference pressure.
Introduction
Under modern conditions, solving rock pressure problems only by mine observations and by modeling with equivalent materials does not make it possible to reveal the mechanism of formation of the reference pressure.
Work on controlling rock pressure has been carried out since the second half of the last century [1]. A.A. Borisov critically analyzed various theories on the grounds that they describe the reference pressure by ascending and descending exponential curves, whereas the real epures of rock pressure have the form of quadratic functions, and he compared his own experimental and theoretical epures with those of other authors (Fig. 1). The known approaches to constructing the epure of rock pressure, including that of A.A. Borisov, reduce to three types of representation: 1) a descending stress epure, with its maximum value directly above the face; 2) one ascending and one descending exponent or exponential stress function, intersecting at the point of the maximum reference pressure, where the ascending epure is based on a constant value of the contact tangential stresses arising from contact friction; 3) a convex quadratic function with an initial value equal to zero. However, none of the known analytical formulas needed to calculate the parameters for controlling rock pressure describes the experimental convex epure of vertical stresses of the reference rock pressure on coal seams with an initial value that is not equal to zero (Fig. 1, epure 2). Researchers continue to use unrealistic formulas to describe the reference pressure epure [3]. There are many examples.
In [3-7], it is assumed that the vertical deformation decreases linearly with distance into the seam. This assumption brings the epure of rock pressure closer to the real one (Fig. 1, curve 2), but it is not sufficiently substantiated analytically and has not been brought to a high accuracy of calculation.
Methods
The bearing capacity of the seam is associated with the emergence of a near-face support (reference) pressure zone. The reference pressure is formed by stresses normal to the seam that act near its exposed part and are increased compared with the static stresses. It reflects the loading of the coal seam around the workings caused by the exposure of the massif and determines the processes occurring in the marginal parts of the seam. It arises constantly whenever a cavity is formed, including when workings are driven. Support pressure plays an important role in the occurrence of rock bursts, sudden outbursts, coal squeezing, and floor heaving; it influences the stability of workings and causes blockages of longwall faces. The patterns of the manifestations of the reference pressure depend on many factors, but it is important to know the dominant ones. Revealing the physical nature of the reference pressure, and developing ways to control not only its manifestations but also the physical processes caused by them, has therefore become one of the important problems of rock mechanics. Under the influence of the load, the material of the massif is damaged, irreversible deformations occur in the marginal part of the seam, cracks form in it, and the roof is displaced. We therefore pay attention to the mechanism of formation of the reference pressure, on which there is no consensus among scientists. The magnitude of the rock pressure on the contact surface follows a certain regularity, the epure of rock pressure, and this pattern has not been sufficiently disclosed. To reveal the mechanism of rock pressure loading, we use the experimental observations of other researchers in the mechanics of deformable bodies, which show that contact shear stresses decay linearly from places of stress concentration into a deformable body, in accordance with the principle of Saint-Venant [8].
The seam, together with the enclosing rocks, is represented as an elastic system in the form of a strip enclosed between rigid plates. The rock pressure acts on the upper plate. Between the enclosing rocks and the seam, shear stresses due to contact friction, resulting from the deformation of the coal seam, act and are directed into the depth of the massif.
It is generally accepted that the magnitude of the rock pressure on the seam is determined by the weight γH of the column of overlying rocks of unit cross-section, where γ is the specific gravity of the rocks, N/m³, and H is the height of the rock column, m.
The reference pressure at a certain distance from the bottom reaches a maximum and then decreases to level values corresponding to the state of an intact massif. It is directly related to the distribution of contact normal stresses in the bottom hole zone (Fig. 2).
We put forward a concept that does not exclude the influence of contact friction and that describes the epure of rock pressure in the form of a convex quadratic function [9] with an initial value not equal to zero (Fig. 1, epure 2). The author of [8], proceeding from the principle of Saint-Venant, argues that friction decreases with distance from the free surface along the length of the slab. In general, it is obvious that the displacement of the seam towards its outcrop should stop at a certain distance from the face, in the untouched zone. To describe the linear attenuation of the contact shear stress, we introduce the parameter t_l, which reflects the change in the coefficient of contact friction and equals t_l = 1/l_m, where l_m is the length of the reference zone, m. Using the linear attenuation of the contact friction coefficient f_i according to the law f_i = f_k (1 − t_l l) (Fig. 2), where l is the distance from the face to the studied contact area, the epure of rock pressure can be written in the form of the linear pressure of L. Prandtl [9,10], where y is the vertical normal stress at the top of the crack, Pa; f_k is the coefficient of contact friction between the seam and the lateral rocks; and h is the seam width, m. It is also necessary to add the condition that this law applies up to the point where the vertical stress reaches γH, that is, in the zone of the untouched massif. We now determine the length of the loaded zone and the length of the epure. In general, the epure crosses the horizontal line γH at two points: the near point determines the size of the unloading zone, which will be discussed in detail in a separate article of this collection, and the distant point lies on the descending branch of the epure, in the zone of the untouched massif (Fig. 2). The farthest point of intersection of the epure with the horizontal line γH determines its length. Because the effective shear stress at point a (Fig. 2) reaches the value k_n, the limit of the material's resistance to shear, a crack forms and develops along the trajectory of maximum effective tangential stresses (TMETS). The epure changes its value continuously, because the TMETS ξ crack develops in the bottom-hole section and the normal stress increases until the bearing capacity σ_c in the reference pressure zone reaches the value γH; this value is equal to the normal stress at the top of the crack. The beginning of the reference pressure diagram is shifted into the depth of the coal mass by the abscissa of the top of the crack. A console of length x_ξ forms because the bottom-hole part of the seam is relieved of load, and a zone of irreversible deformations of width x_ξ = x_0 is formed (Fig. 2).
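In compact form, the attenuation law stated above can be summarized as follows (a restatement of the relations given in the text; the full Prandtl-type expression for the epure was not reproduced in the source):

f_i(l) = f_k (1 − t_l l), t_l = 1/l_m, 0 ≤ l ≤ l_m,

so that the contact friction coefficient decreases from f_k at the face (l = 0) to zero at the boundary of the reference zone (l = l_m), and the resulting epure of the reference pressure rises from the normal stress at the crack top to the value γH of the intact massif.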
The distribution of the normal contact stresses is then described by a quadratic function (Fig. 2),
where t_l = 0.1 1/m. The vertical stress y_ξ at the top of the crack is determined by a system of equations [10] in which: k_n is the limit shear resistance at the crack top, Pa; μ is the internal friction coefficient; ρ = arctan μ is the angle of internal friction of coal, rad; β_ξ is the angle of inclination of the TMETS ξ relative to the horizontal due to contact friction, rad; β_b is the angle at which the trajectory exits to the contact surface, rad; k_b is the current value of the shear resistance at point b (Fig. 2), Pa; b and b_b are dimensionless parameters at the crack top and at point b, respectively; x_ξ is the abscissa of the top of the crack, m; and x_b is the abscissa of the intersection point of the TMETS ξ with the seam floor, m.
The angle of the bottom-hole crack ξ is expressed through ρ, the internal friction angle, rad.
In accordance with the principle of Saint-Venant, expression (5) takes into account the linear attenuation of the shear stresses from external friction across the thickness of the seam through a factor that decreases linearly across the seam thickness h. Point b for the TMETS ξ is fixed; therefore, the tangential stress from external friction is taken into account through the abscissa value x_b at y = h_1. The minus and plus signs in expressions (4) and (5) indicate the different roles of external friction on the two contact planes.
The angle values β_ξ and β_b are determined from the corresponding trigonometric relations. To calculate the parameters of the near-face part of coal seams, experimental values of the shear resistance limit against the bedding, the coefficient of internal friction of coal, and the contact friction between the seam and the immediate roof rocks are necessary. Therefore, coal and roof rock were sampled from the face of the i3 seam.
Results and discussion
For the first time, a method has been developed for calculating the parameters of the epure of rock pressure in the form of a convex quadratic function. Its initial value is equal to the normal stress at the top of the bottom-hole fracture, and its final value is equal to the rock pressure in the zone of the intact massif, corresponding to the experimental measurements. The basis of the methodology is the analytical determination of the vertical normal stress at the top of the bottom-hole fracture and of the length of the epure of the reference pressure.
The method can be used to solve many mining problems: managing the state of the massif near mine workings in order to increase their stability, or unloading coal seams from pressure for their degassing.
Conclusions
1. The vertical stress at the top of the bottom hole fracture is determined by a system of equations that takes into account the properties of coal: shear resistance limit, internal friction angle, contact friction coefficient, geometric parameters of the crack.
2. The loading of the bottom part of the coal seam is formed by the shear stresses linearly damped from the bottom of the face deep into the massif from the contact friction between the seam and lateral host rocks in the form of a reference rock pressure, whose epure is described by a convex quadratic function. The initial value of this function is equal to the normal stress at the top of the bottom hole fracture, and the final value is the mountain pressure in the zone of the untouched massif.
3. The paper presents a method for calculating the parameters of the rock pressure epure: the vertical stress at the top of the bottom-hole fracture, the length of the epure, and the distance from the face to the maximum of the rock pressure.
Prevalence and associated factors of early sexual initiation among youth female in sub-Saharan Africa: a multilevel analysis of recent demographic and health surveys
Background Early sexual initiation is a major public health concern globally, specifically in Sub-Saharan African (SSA) countries where reproductive health care services are limited. It is strongly related to an increased risk of HIV/AIDS, sexually transmitted diseases, unwanted pregnancy, adverse birth outcomes, and psychosocial problems. However, there is limited evidence on the prevalence and associated factors of early sexual initiation among youth females in SSA. Methods A secondary data analysis was employed based on the recent DHSs of sub-Saharan African countries. A total weighted sample of 184,942 youth females was considered for analysis. Given the hierarchical nature of DHS data, a multilevel binary logistic regression model was fitted. The Intra-class Correlation Coefficient (ICC), Median Odds Ratio (MOR), and Likelihood Ratio (LR) test were used to assess the presence of clustering. Four nested models were fitted, and the model with the lowest deviance (-2LLR) was selected as the best-fitted model. Variables with p-value < 0.2 in the bivariable multilevel binary logistic regression analysis were considered for the multivariable analysis. In the multivariable multilevel binary logistic regression analysis, the Adjusted Odds Ratio (AOR) with the 95% Confidence Interval (CI) was reported to declare the strength and statistical significance of the association. Results The prevalence of early sexual initiation among youth females in sub-Saharan Africa was 46.39% [95% CI: 41.23%, 51.5%], ranging from 16.66% in Rwanda to 71.70% in Liberia. In the final model, having primary level education [AOR = 0.82, 95% CI: 0.79, 0.85] and [AOR = 0.50, 95% CI: 0.48, 0.52], being rural [AOR = 1.05, 95% CI: 1.03, 1.07], having media exposure [AOR = 0.91, 95% CI: 0.89, 0.94], and belonging to a community with high media exposure [AOR = 0.92, 95% CI: 0.89, 0.96] were found to be significantly associated with early sexual initiation. Conclusion The prevalence of early sexual initiation among youth females in SSA was high. Educational status, wealth index, residence, media exposure, and community media exposure have a significant association with early sexual initiation. These findings highlight that policymakers and other stakeholders should give priority to empowering women, enhancing household wealth status, and improving media exposure in order to reduce early sexual initiation in the region.
Introduction
Early sexual initiation is defined as the experience of first sexual intercourse before 15 years of age [1,2]. Early sexual activity, particularly in developing nations, has been reported to cause social and public health problems [3,4]. Early sexual practice at a young age is a global public health issue that is particularly prevalent in low- and middle-income countries such as those in sub-Saharan Africa [5]. According to a report by the World Health Organization (WHO), the risk of maternal death fell from one in 73 to one in 180 [6]. Low- and middle-income countries account for nearly all maternal mortality, with SSA accounting for roughly 66% and Southern Asia accounting for 22% [6].
Globally, about 20 million unsafe abortions and 68,000 unsafe-abortion-related deaths occur annually, of which adolescent girls account for 14% [6]. Studies conducted in different countries have shown that early sexual initiation increases the risks of unexpected pregnancy and Sexually Transmitted Diseases (STDs). An estimated 30% of teens under the age of 17 have had sexual relations, which is responsible for 252,000 unintended pregnancies per year [7]. According to studies conducted in various parts of the world, the prevalence of early sexual initiation was 9.8% in Malaysia [8], 18.5% in China [9], and 58.5% in the Caribbean. A study from Brazil reported that the prevalence of early sexual initiation was 7% among girls [10]. Likewise, early sexual initiation is common in African settings, ranging from 26% in Nigeria [11] to 55% in Ghana [12].
Early sexual initiation is a significant risk factor for STDs [13]. A girl's first sexual encounter is frequently unplanned, putting her at risk of STDs, HIV infection, and unwanted pregnancy [7]. Adolescents who have multiple partners and engage in early and unprotected sex stand a high risk of acquiring HIV and other sexually transmitted diseases and show a high prevalence of teenage pregnancy [14,15]. Mistimed pregnancy, unsafe abortion, developing a fistula, and contracting sexually transmitted infections are all important public health concerns in low-income nations today [16].
Sexually active young women are at risk of a variety of negative health, social, and economic consequences, for both the women themselves and the future generation [7]. Early sexual initiation increases the likelihood of school dropout, poor academic performance, stigma, and discrimination, as well as STDs like HIV/AIDS, risky sexual practices, unwanted pregnancy, mental illness, and maternal death [17]. It also affects the social and economic position of adults [18].
There is a lack of evidence on the magnitude of early sexual initiation and associated factors in Sub-Saharan Africa. Therefore, this study aimed to assess the Prevalence and associated factors of early sexual initiation among youth females in sub-Saharan Africa. The results of this study could guide public health interventions and programs to reduce the magnitude of early sexual initiation in SSA, which in turn reduces the incidence of child and maternal morbidity and mortality.
Study design, setting, and period
The data source for this study was the Demographic and Health Survey (DHS) data. The DHS is a cross-sectional survey conducted every five years to generate updated health and health-related information. The information was gathered in each country in partnership with ICF International and Measure DHS [4]. The research was based on the most recent standard Demographic and Health Surveys (DHS) of 33 sub-Saharan African countries conducted between 2011 and 2021. These countries were divided into four regions: Eastern Africa (Burundi, Comoros, Ethiopia, Kenya, Malawi, Mozambique, Rwanda, Tanzania, Uganda, Zambia, Zimbabwe), Central Africa (Angola, Cameroon, Chad, the Democratic Republic of the Congo, Republic of the Congo, Gabon), and Western Africa (Benin, Ivory Coast, Gambia, Ghana, Guinea, Mauritania, Liberia). Together, these countries cover 9.4 million square miles and have a total population of 1.1 billion inhabitants. The datasets are publicly available on the DHS website, www.dhsprogram.com. A multi-stage stratified cluster sampling technique was employed to recruit study participants for the survey. The surveys are population-based, have large sample sizes, and are nationally representative of each country. A multi-stage cluster sampling procedure was employed in all surveys [12].
Source and study population
All youth females (aged 15-24 years) in SSA were the source population. The study population was youth females in sub-Saharan African (SSA) countries in the selected Enumeration Areas (EAs).
Sample size and sampling procedure
In general, all selected national surveys used the most recent census frame. DHS samples are typically stratified by geographic region or province and, within each region, by urban/rural areas. Most DHS sample designs follow a multi-stage sampling technique based on an existing census frame. Enumeration Areas (EAs) were the primary sampling units and Households (HHs) were the secondary sampling units. Following the listing of households, equal-probability systematic sampling is used to select a specified number of households in the designated cluster [12]. Each DHS report on the Measure DHS website includes a comprehensive description of the sampling technique (www.dhsprogram.com). Weighted values were computed using the Individual women's Records (IR) DHS datasets to restore the representativeness of the sample data. Finally, this study comprised a total weighted sample of 184,942 youth females from all 33 nations. A total of 47 countries are located in sub-Saharan Africa. Of these, only 41 had a Demographic and Health Survey report. After excluding countries with no DHS report after 2011 and countries whose DHS dataset was not publicly available, 33 countries were included in this study (Table 1).
Outcome variable
The outcome variable of this study was early sexual initiation among youth females which has a binary response (Yes/No). The DHS asked youth females "age at first sexual initiation?". Then, youth females who had early sexual initiation before the age of 15 coded as "1", otherwise coded as "0".
Independent variables
Independent variables at the individual and community levels were considered. Variables at the individual level were categorized as socio-demographic, pregnancy-related, and behavioral factors. The socio-demographic variables were the age of the youth female, female education, religion, wealth index, marital status, family size, and working status. Pregnancy characteristics such as parity and pregnancy desirability were also considered. Finally, behavioral characteristics like chewing khat, cigarette smoking, hearing about STIs, and media exposure were included. Media exposure status was created from the frequency of reading a newspaper or magazine, watching TV, and listening to the radio; if a woman answered yes to at least one, she was considered to have media exposure. Residence, sub-Saharan Africa region, community media exposure, the income level of each nation, the survey year, and community women's education were considered as community-level variables. The level of poverty in the community was determined by the proportion of women in the poorer and poorest quintiles obtained from the wealth index results and classified as low (communities in which < 50% of women were in the poorest and poorer wealth quintiles) and high (communities in which ≥ 50% of women were in the poorest and poorer wealth quintiles) poverty communities. Community-level women's education was measured as the proportion of women with at least a primary level of education, derived from data on respondents' level of education; it was then categorized using the national median value as low (communities in which < 50% of women have at least primary education) and high (communities in which ≥ 50% of women have at least primary education). Community-level media exposure was measured as the proportion of women who had been exposed to at least one medium (television, radio, or newspaper) and classified based on the national median value as low (communities with < 50% of women exposed) and high (communities with ≥ 50% of women exposed) [19-21].
Data management and analysis
This study was conducted using data obtained from the official DHS measure website (www.measuredhs.com). We extracted the outcome and independent variables from the Individual women's Records (IR) data [22]. Data were cleaned and recoded in STATA version 16 based on the Guide to DHS Statistics. Before conducting any statistical analysis, we weighted the data for design and non-response using the weighting factor provided in the DHS data, as per the survey report's suggestion, to restore the survey's representativeness and obtain valid statistical estimates.
Model building
A multilevel binary logistic regression model was fitted to assess factors associated with early sexual initiation. Four models were fitted, and the model with the lowest deviance (-2LLR) was chosen as the best-fitted model. The first was a null model (Model 1), with no covariates, fitted to examine the variability of early sexual initiation across the communities/EAs. Individual-level variables and community-level variables were included in the second (Model 2) and third (Model 3) models, respectively. Finally, both individual-level and community-level variables were fitted simultaneously in the fourth model (Model 4). The variance inflation factor (VIF) was used to detect multicollinearity; all variables had VIF values less than 10, and the mean VIF value of the final model was 1.47.
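As a rough, purely illustrative sketch of the collinearity check and deviance-based model comparison described above, the snippet below computes variance inflation factors for a data frame of dummy-coded covariates and picks the model with the smallest deviance. The data frame X and the log-likelihood values are hypothetical placeholders, not values from this study.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

def vif_table(X: pd.DataFrame) -> pd.DataFrame:
    """Return the VIF for each dummy-coded covariate (values < 10 suggest
    multicollinearity is not a major concern)."""
    Xc = add_constant(X)
    vifs = [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])]
    return pd.DataFrame({"variable": X.columns, "VIF": vifs})

def deviance(log_likelihood: float) -> float:
    """Deviance (-2 log-likelihood), used to pick the best-fitting model."""
    return -2.0 * log_likelihood

# Hypothetical log-likelihoods for the four nested models;
# the model with the smallest deviance is preferred.
loglik = {"null": -121050.3, "individual": -118410.7,
          "community": -119875.2, "combined": -117902.8}
best_model = min(loglik, key=lambda m: deviance(loglik[m]))
```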
Parameter estimation method
Fixed effects (measures of association) were used to assess the relationship between the odds of early sexual initiation among youth females and explanatory variables at both the individual and community levels. Variables with a p-value less than 0.2 in the bivariable multilevel binary logistic regression analysis were considered in the multivariable multilevel binary logistic regression analysis. In the multivariable analysis, the Adjusted Odds Ratio (AOR) with its 95% Confidence Interval (CI) was reported to declare the statistical significance and strength of the association. The random-effect measures used to quantify the variation were the Median Odds Ratio (MOR), the Intra-class Correlation Coefficient (ICC), and the Proportional Change in Variance (PCV). The MOR is defined as the median value of the odds ratio between the cluster at highest risk and the cluster at lowest risk when two clusters are randomly picked. The PCV reveals the proportion of the variation in early sexual initiation among youth females that is explained by the factors in the model. The ICC quantifies the proportion of the variation in early sexual initiation among youth females that is attributable to differences between clusters [19,23,24].
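The random-effect measures described above follow standard formulas for two-level logistic models. The helper functions below are a minimal sketch that assumes the cluster-level (between-EA) variance has already been estimated from the fitted models; any numbers passed to them would come from those models.

```python
import math

def icc(cluster_variance: float) -> float:
    """Intra-class correlation for a two-level logistic model:
    sigma_u^2 / (sigma_u^2 + pi^2 / 3)."""
    return cluster_variance / (cluster_variance + math.pi ** 2 / 3)

def mor(cluster_variance: float) -> float:
    """Median odds ratio: exp(sqrt(2 * sigma_u^2) * Phi^-1(0.75)),
    with Phi^-1(0.75) approximately 0.6745."""
    return math.exp(math.sqrt(2 * cluster_variance) * 0.6745)

def pcv(variance_null: float, variance_model: float) -> float:
    """Proportional change in variance relative to the null model."""
    return (variance_null - variance_model) / variance_null
```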
Sociodemographic characteristics of the study population
A total weighted sample of 184,942 youth females was included in this study. Among these youth females, more than half (53.77%) were between the ages of 15 and 19 years. The majority of the research participants (57.57%) lived in rural areas, and about 92,366 (49.94%) of the youths had attained secondary education. About 94,933 (51.33%) of the youth females belonged to a community with high media exposure, and more than half (57.98%) were from low-income countries (Table 2).
The prevalence of early sexual initiation among youth females in SSA
The prevalence of early sexual initiation among youth females in sub-Saharan Africa was 46.39% (95% CI: 41.23%, 52.55%), ranging from 16.66% in Rwanda to 71.70% in Liberia (Fig. 1).
Model comparison and random effect analysis
As indicated in Table 3, the ICC in the null model was 0.136, which means that about 13.6% of the variation in early sexual initiation among youth females was attributable to between-cluster variation, while the remaining 86.4% was attributable to individual-level characteristics. The MOR value was 1.86, which indicates that if we randomly choose two youths from different clusters, a youth from the cluster with the higher proportion of early sexual initiation is 1.86 times more likely to experience early sexual initiation than a youth from the cluster with the lower proportion. Furthermore, the PCV value in the final model was 71.15%, indicating that 71.15% of the variation in early sexual initiation among youth females was explained by the final model.
Multivariable multilevel binary logistic regression analysis
In the final model, individual-level variables such as female education, marital status, having heard about STIs, media exposure, wealth index, and occupation, and community-level variables such as residence, region in SSA, income status, and community media exposure were significantly associated with early sexual initiation.
Discussion
This study aimed to identify the individual- and community-level predictors of early sexual initiation among youth females in Sub-Saharan Africa. The overall prevalence of early sexual initiation among youth females in sub-Saharan Africa was 46.39% (95% CI: 41.23%, 51.55%), which is higher than in studies conducted in Indonesia (11%) [25], Brazil (7%) [10], Taiwan (9.3%) [26], and China (18.1%) [9]. The variation might be related to these being high-income countries with improved socio-economic status, youth female education, gender equity, and reproductive health services for youth females compared with SSA, which in turn could improve their levels of understanding of the consequences of early sexual initiation [26,27].
The odds of early sexual initiation among youth females who attained primary or higher education were lower compared with those who did not attain formal education, in line with studies reported in Indonesia [28], Nigeria [29], Ghana [12,15], and Ethiopia [18,30]. This could be because education can result in a corresponding improvement in youth females' awareness of reproductive health, i.e., of the optimal age for sexual initiation and of the consequences of early sexual initiation and related comorbidities, which may prevent them from engaging in it [12,15]. The odds of early sexual initiation among youth females who had heard about STIs were lower than among females who had not heard about them, which is supported by other studies [4,31]. This might be because youth females who are aware of STIs, their causes, and their consequences are less likely to have early sexual initiation. Youth females who were not married had lower odds of early sexual initiation compared with those who were married, in line with studies reported in Ethiopia [16,32]. A possible justification is that early marriage is a common practice in many African countries, linked with deeply rooted customs, norms, and values of the community, and married youth females are therefore more likely to have early sexual initiation [31]. Employed youth females had higher odds of early sexual initiation compared with unemployed females, which is supported by a study done in sub-Saharan Africa [33]. This may be because youth females are exposed to different risky sexual behaviors and sexual assault at their workplace, which can lead to early sexual initiation [34]. The odds of early sexual initiation among youth females belonging to rich and middle household wealth quintiles were lower than among youth females who belonged to poor households, in line with previous studies reported in Africa [2,11,18]. Economically poor respondents might be enticed by gifts (either in cash or in kind), which may induce them to volunteer for sexual activity.
In our studies, youth females from communities with media exposure and from households with media exposure had lower odds of early sexual initiation compared with those from communities and households with lower media exposure. This was supported by previous studies [2,35,36]; it could be because media exposure can improve youth females' knowledge, attitudes, and practices regarding early sexual initiation. In our studies, youth females from rural areas were more likely to have early sexual initiation compared with urban females; this finding is consistent with previous studies [16,37,38]. A possible explanation for this could be low community awareness of reproductive health issues and of the bad consequences of early marriage for rural adolescents [18]. In our studies, youth females from lower-middle- and upper-middle-income countries were less likely to engage in early sexual initiation than youths from low-income countries. This is consistent with studies conducted in Africa [12,39]. This might be because females from low-income families may engage in earlier sexual relations to obtain money and other benefits, whereas people in upper-middle and lower-middle-income countries have better health-seeking behaviour and awareness of lifestyle determinants. Youth females in Central Africa and West Africa had higher odds of early sexual initiation than women living in East Africa; however, youth females in Southern Africa had lower odds of early sexual initiation than those living in East Africa. This is consistent with a study conducted in Africa. These differences could be ascribed to changing conventional norms as a result of globalization, which causes changes in socio-demographic characteristics such as religion, media exposure, education, and young people's socioeconomic situation [40,41]. This could be the reason why young females' sexual initiation differs across regions.
Strength and limitation of the study
The strength of this study is the use of advanced statistical models that consider individual/household- and community-level predictors, which also increases the quality of the paper. Moreover, the results of the current study, which included 33 nations in sub-Saharan Africa, can be readily generalized to the entire SSA. However, the study has the following limitations. Important variables such as community attitudes, norms, values, and knowledge regarding the optimal age for sexual initiation were not considered, as these variables were not available in the DHS. In addition, the cross-sectional nature of the data does not allow for cause-effect relationships. The findings may also be affected by variation in the DHS survey years. Furthermore, because the majority of the health variables in the DHS are based on self-report, the study is susceptible to recall and social desirability bias.
Conclusion
The prevalence of early sexual initiation among youth females in Sub-Saharan Africa was high compared with other regions. Individual-level factors such as female education, marital status, wealth index, having heard about STIs, occupation, and media usage had a significant association with early sexual initiation. Among community-level variables, region in SSA, residence, community income, and community media usage were found to be significantly associated with early sexual initiation. Accordingly, the authors recommend that policymakers and health planners design programs and plans to increase youth females' awareness of early sexual initiation and its health impact on women through formal and informal education. Governmental and non-governmental organizations should also prioritize modifiable socio-economic determinants and scale up maternal health programs to assist rural and the poorest women.
To gain a deeper understanding of these factors, future researchers should take into account maternal health and community knowledge, attitudes, and behavior regarding the reduction of early sexual initiation by adopting a mixed approach (qualitative and quantitative studies). To decrease maternal complications caused by early sexual initiation in the region, special emphasis must also be placed on the associated factors, on improving maternal healthcare accessibility, utilization, and quality, and on reducing early sexual debut and promoting healthy sexual relationships among young adolescents.
High-throughput and automated screening for COVID-19
The COVID-19 pandemic has become a global challenge for the healthcare systems of many countries with 6 million people having lost their lives and 530 million more having tested positive for the virus. Robust testing and a comprehensive track and trace process for positive patients are essential for effective pandemic control, leading to high demand for diagnostic testing. In order to comply with demand and increase testing capacity worldwide, automated workflows have come into prominence as they enable high-throughput screening, faster processing, exclusion of human error, repeatability, reproducibility and diagnostic precision. The gold standard for COVID-19 testing so far has been RT-qPCR, however, different SARS-CoV-2 testing methods have been developed to be combined with high throughput testing to improve diagnosis. Case studies in China, Spain and the United Kingdom have been reviewed and automation has been proven to be promising for mass testing. Free and Open Source scientific and medical Hardware (FOSH) plays a vital role in this matter but there are some challenges to be overcome before automation can be fully implemented. This review discusses the importance of automated high-throughput testing, the different equipment available, the bottlenecks of its implementation and key selected case studies that due to their high effectiveness are already in use in hospitals and research centres.
Introduction
The COVID-19 pandemic has become an enormous challenge for the health systems of many countries. As of today, according to information provided by the World Health Organization (WHO, https://covid19.who.int, 08/05/2022), it is calculated that over 6 million people have lost their lives and 530 million more have tested positive for the virus. Studies have suggested that approximately 15.6% of infected people are asymptomatic and may not be aware that they carry the virus (1), making them potentially deadly vectors of infection.
The European Centre for Disease Prevention and Control (https://www.ecdc.europa.eu/en/publications-data/contact-tracing-covid-19-evidence-scale-up-assessment-resources, April 2021) advises that an effective pandemic control strategy requires robust testing as well as a comprehensive track and trace process for positive patients. Despite these recommendations, the high infection rate of SARS-CoV-2 has led to rampant spread of the disease and consequently a huge demand for diagnostic testing. To meet this demand and increase the testing capacity of communities worldwide, automated workflows have come into prominence as they enable high-throughput screening, faster processing, exclusion of human error, repeatability, reproducibility and diagnostic precision (2).
This review discusses the importance of automated highthroughput testing, the different equipment available, the bottlenecks of its implementation and key selected case studies that due to their high effectiveness are already in use in hospitals and research centres. We also cover the development and automation of different protocols for SARS-CoV-2 testing, highlighting their advantages and disadvantages as well as considering their impact in the COVID-19 pandemic.
Even though vaccination campaigns across the world have been initiated, it will take a long time before most people are vaccinated against SARS-CoV-2. Additionally, manual testing has limited capacity in terms of the number of tests processed, while high-throughput testing can reach a few thousand tests per day depending on the setup (3). While vaccine and testing campaigns are ongoing, many companies at the cutting edge of computer-aided biology, such as Analytik Jena (Germany), Beckman Coulter (USA), Hamilton (USA) and Tecan (Switzerland), have been donating their time and effort to provide automated testing solutions for SARS-CoV-2 diagnosis, as the crisis is still at a peak in many countries (Computer Aided Biology, https://www.computeraidedbiology.com/cab-companies-on-covid19, 03/21/2021).
In addition, to maximise test processing, high-throughput testing is usually combined with other molecular diagnosis developments that could potentially help to reduce not only testing times but also manual labour and, potentially, the need for costly additional equipment. For instance, although RT-qPCR is considered by academia to be the "gold standard" for SARS-CoV-2 testing (4), novel techniques have been developed or adapted for COVID-19 testing based on the latest developments in emerging fields such as synthetic biology. Some examples of these alternative point-of-care methods include workflows based on reverse transcription loop-mediated isothermal amplification assays (RT-LAMP) (5,6) and CRISPR-based tests (7), which could potentially drastically reduce the consumables required and capital costs while maintaining the same testing efficiency.
RT-qPCR
Until now, real-time reverse transcription polymerase chain reaction (RT-qPCR) has been the gold standard for COVID-19 diagnosis worldwide (8). It is used to amplify a sequence of DNA using the polymerase chain reaction (9). In the diagnosis of SARS-CoV-2, the target genes are usually the E gene (an oligonucleotide sequence of the envelope gene) and the N gene (nucleocapsid gene). The RNA-dependent RNA polymerase gene (RdRp/Hel) is also targeted to confirm the presence of COVID-19 in the patient sample (10), and in some other labs the S gene and Orf1ab have also been used (11). Due to RT-qPCR's expensive instrumentation requirements and time consumption, alternative methods have also been developed for rapid detection.
RT-LAMP based diagnostics
The reverse transcription loop-mediated isothermal amplification (RT-LAMP)-based diagnostic was initially developed for the Middle East Respiratory Syndrome coronavirus (MERS-CoV) (12). It is used for viral genetic pathogen diagnostics (mainly RNA viruses) as it takes about an hour to complete the amplification process, considerably less than the RT-PCR method. RT-LAMP relies on a strand-displacing DNA polymerase and a set of primers that amplify the specific target sequence of the virus. This technique has been adapted for diagnostics of SARS-CoV-2 with a limit of detection (LOD) of 100 RNA copies/reaction (13).
CRISPR based diagnostics for COVID-19
CRISPR-based diagnostics have been introduced as a feasible method for high-throughput testing. CRISPR techniques are highly adaptable and vary depending on the situation (3). To automate these systems, it is important to understand some of the key steps performed in such diagnostic systems:
• an isothermal amplification step, such as recombinase polymerase amplification (RPA) or reverse-transcription loop-mediated isothermal amplification (RT-LAMP);
• a CRISPR-Cas detection step, in which a Cas enzyme guided by a sequence-specific guide RNA recognises the amplified viral sequence;
• a readout step, in which the result is visualised by fluorescence or on a lateral flow dipstick.
SHERLOCK STOPCovid
The Specific High-Sensitivity Enzymatic Reporter UnLOCKing (SHERLOCK) assay was originally developed by the Feng Zhang laboratory to detect cases of both Dengue and Zika virus (14). SHERLOCK STOPCovid is an adaptation of the original method for COVID-19 detection (7) and it includes three key steps (Figure 1):
1) Lysis of the virus contained in patient samples using QuickExtract for viral RNA extraction.
2) Detection of the virus using the STOPCovid reaction. During this step RT-LAMP is combined with Cas12b for viral detection (instead of the Cas13a originally used in SHERLOCK).
3) Results visualisation using lateral flow paper dipsticks, which capture the cleaved reporter RNA with labelled ends on specific antibody bands. For high-throughput processing, the readout step can also be performed by fluorescence, using a DNA reporter.
DETECTR
The DNA Endonuclease-Targeted CRISPR Trans Reporter (DETECTR) method was reported by Broughton et al. (15) and can be divided into the following steps (Figure 1):
1) Viral RNA amplification by RT-LAMP.
2) Cas12a identification of the SARS-CoV-2 sequence, after which a reporter molecule is cleaved, indicating the presence of SARS-CoV-2 in the sample.
3) Results visualisation using lateral flow paper dipsticks or a plate reader.
The LOD of this diagnostic method is 10 copies/µl of reaction. A similar approach was used by Malcı et al. (16). In this study a one-pot COVID-19 CRISPR/Cas12a-RPA reaction was performed and optimised using design of experiments (DoE). The results revealed that the addition of reverse transcription buffer and RNase inhibitor (compounds usually omitted in one-pot reactions) can significantly improve the performance of the reaction (16). Very importantly, the authors suggested that the process is highly scalable using automation and high-throughput testing.
3) Results visualisation using lateral flow paper dipsticks or plate reader.
The LOD of this assay is 10 copies/µl in both fluorescence and lateral flow dipstick (17).
FELUDA
The FnCas9 Editor-Linked Uniform Detection Assay (FELUDA) is a CRISPR-Cas9-based method developed by Azhar et al. (18). The steps used in this method involve (Figure 1):
1. RNA extraction.
2. Viral RNA amplification by PCR using a biotinylated primer, which is immobilised on beads containing a streptavidin coating. Amplification can also be performed by the RT-RPA amplification method.
3. Fluorescence-labelled Cas9 complexes carrying sgRNA that interacts with the immobilised target sequence.
4. Analytical signals generated and visualised using a streptavidin-coated lateral flow dipstick.
FELUDA reached an LOD of ∼10 copies of purified viral sequence after optimising the PCR conditions. When coupled with RPA, the LOD is ∼400 copies of starting RNA substrate per µl (18). To end this section, it is important to highlight some of the advantages and disadvantages that these methods have in comparison with the standardised and widely used RT-PCR and RT-LAMP. All the CRISPR technologies mentioned above have huge potential to be automated at a lower cost, which may be an important advantage over traditional methods for high-throughput diagnostics in lower-income communities. This is because many of these diagnostic technologies can work at a single temperature with minimal equipment requirements and complexity (7). However, one of the biggest disadvantages of these new diagnostic methods is the availability and cost of some of the reagents (16), whereas reagents for RT-PCR and RT-LAMP can be purchased from most retailers in the field at bulk cost. We anticipate that current efforts in lowering the cost of such reagents will provide some relief from reagent shortages for these types of technologies in the near future (19).
Sensitivity is also an important aspect. According to the literature, some of these CRISPR-based methods can offer an LOD as low as 10 copies/µl (15,17), which is still around two orders of magnitude higher than the traditionally used RT-PCR, with an LOD of ∼0.1 copies of viral RNA per µl of transport media (20), and higher than RT-LAMP, with an LOD of ∼6.5 RNA copies/µl (13). Future work optimising the CRISPR assays, such as that performed by Malcı et al. (16), could further reduce the LOD to match similar levels to the traditional methods.
Finally, it is important to remark that although the previous methodologies have followed sampling and testing approvals and procedures in accordance with recommendations from regulatory agencies such as the FDA, CDC (7) and the WHO (15), clinical testing has only been done for research purposes, as approval for commercial purposes would further require validation and approval from the corresponding sanitary authorities. An exception to this was made for the SHERLOCK protocol, as it was granted Emergency Use Authorization by the FDA to carry out testing. However, this was limited to laboratories certified under the Clinical Laboratory Improvement Amendments of 1988 that meet the requirements to perform high-complexity tests (FDA, https://www.fda.gov/media/137746/download, 23/08/2022).
Figure 1 (created with BioRender.com): Workflow comparing different diagnostic methodologies for COVID-19. The images give a general overview of how each method works, starting from sample taking and viral RNA extraction. From this point, the next step is either direct amplification or reverse transcription into DNA for further detection using different methods, potentially RT-PCR or CRISPR-based analysis, followed by the visualisation step.
Automation at the epicentre of the outbreak
One of the most prominent examples of automation support for SARS-CoV-2 testing and screening was at the origin of the outbreak in Wuhan, China. By the 9th of February 2020, the Huo-Yan Lab (or Fire Eye Lab) managed to perform 14,000 tests per day by running automated nucleic-acid extraction as part of the RT-qPCR testing workflow. By the 1st of March 2020, less than a month later, capacity was increased to 20,000 tests per day (21). In this workflow (Figure 2), the authors used the MGISP-960 automated platforms (MGI, China). RNA extraction was performed using MGI's (China) MGIEasy Magnetic Beads Virus DNA/RNA Extraction Kit to achieve high-throughput and standardised clinical testing, as shown in Figure 2 (22). To compare the efficiency of the procedure, manual extractions were completed using the QIAamp Viral RNA Mini Kit. Manual processing took 1 h and 50 min for 24 samples, while just 1 h and 8 min was required to process 192 samples using automated extraction. Another impressive platform, developed by BGI (China), is the MGISP-NE384. This platform is another high-throughput automated nucleic acid extractor which has adopted magnetic rod technology, allowing processing of 384 samples in only 35 min (BGI Genomics, China). These results indicate that automation is key to a successful strategy against SARS-CoV-2.
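To put these reported run times in perspective, the short calculation below converts them into per-sample processing times and hourly throughput; the input figures are taken directly from the comparison above.

```python
def throughput(samples: int, hours: float, minutes: float) -> tuple[float, float]:
    """Return (minutes per sample, samples per hour) for a batch run."""
    total_min = hours * 60 + minutes
    return total_min / samples, samples / (total_min / 60)

manual_min_per_sample, manual_per_hour = throughput(24, 1, 50)   # ~4.6 min/sample, ~13 samples/h
auto_min_per_sample, auto_per_hour = throughput(192, 1, 8)       # ~0.35 min/sample, ~169 samples/h
speedup = manual_min_per_sample / auto_min_per_sample            # roughly a 13-fold gain per sample
```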
Automation of alternative SARS-CoV-2 testing methodologies
The alternative testing techniques described in Figure 1 have shown promising results due to their simplicity, good limits of detection and the lack of need for expensive equipment (7,15,18). These reasons make them perfect candidates for use in high-throughput testing. In response to the ever-growing need for rapid COVID-19 testing, many biofoundries, which can be described as automation facilities with the capacity to design, build and test biological constructs at different scales, have offered their input on how to use their automation platforms for mass, high-throughput SARS-CoV-2 testing.
A perfect example of this is the work carried out at the London Biofoundry, where three different automated workflows were developed. Using Analytik Jena's (Germany) CyBio FeliX and Labcyte's (USA) Echo 525 liquid handling platforms, workflows involving RT-qPCR, CRISPR-Cas13a, and LAMP were automated for SARS-CoV-2 detection (2). As previously mentioned, both the Cas13a and LAMP methods were shown to be innovative alternatives that eliminated the need for a qPCR device, reducing both equipment and reagent costs. Another advantage was the reduction in complexity, making it simpler for non-specialised personnel to perform the assays and analyse the results, as well as easier to automate. Diagrams of these two novel detection approaches are shown in Figure 3.
The automated process was divided into two stages for each diagnostic methodology: RNA extraction followed by the amplification steps. The first stage was carried out by the CyBio FeliX machine (Analytik Jena, Germany) while the latter was performed by Beckman's (Beckman Coulter Company, USA) Echo 525. Two different RNA extraction kits from Analytik Jena and Promega and three different qPCR master mixes from ThermoFisher (TaqPath and Fast Virus) and NEB (Luna) were also tested.
The authors prepared dilutions of virus-like particles (VLPs) containing 5, 25, and 250 copies per reaction to test the sensitivity of the automated methodologies. This allowed the development of the automated workflow to be sped up without needing a constant supply of clinical COVID-19 patient samples, which would require a Biosafety Level 2 (BSL-2) laboratory to handle. Using the VLPs also allowed the three different diagnostic methods to be compared using well-controlled and characterised samples. It was observed that the detection threshold when using LAMP was at least 30 copies of VLP, while for CRISPR and qPCR the detection threshold was 2.5 VLP.
The automated RT-qPCR workflow was also validated with 173 patient samples obtained from Northwest London Pathology (NWLP). A comparison was made between the qPCR workflow developed with the selected RNA extraction kit (Analytik Jena innuPREP Virus DNA/RNA Kit) and the multiplexed-tandem PCR workflow used at that time by the NWLP. A good correlation (R² = 0.8310) was shown between the results given by these two workflows. A second validation was made to further expand the workflow for use with RNA extraction kits from different suppliers. On this occasion, the previously validated Analytik Jena innuPREP Virus DNA/RNA Kit was compared with the Promega Maxwell HT Viral TNA extraction kit. A high correlation (R² = 0.9357) was obtained between the results given by these kits. Finally, the automated RT-qPCR workflow described above, which was the only FDA-validated method at the time of its development, was put into operation in two London hospitals with a capacity of 2,000 tests per day.
Microfluidic COVID Testing
Lab-on-a-chip technologies and microfluidic systems have been increasingly used in various applications within biotechnology as they offer unique advantages such as portability, precise liquid control and low reagent requirements (23). Therefore, microfluidic technology can accelerate conventional biochemistry-based tests, especially for high-throughput testing with lower sample volumes. In the last decade, many automated microfluidic molecular diagnosis platforms have been developed and some of them are also commercially available for field use (24).
Microfluidic systems have also been adapted to develop alternative SARS-CoV-2 detection methods. Ramachandran et al. used on-chip electric field control for automated nucleic acid purification in their CRISPR/Cas12a-mediated COVID-19 detection method (25). Following off-chip RT-LAMP isothermal amplification, a selective ionic focusing technique was implemented on a microfluidic chip to purify the nucleic acid templates to be targeted by Cas12a/gRNA complexes to produce fluorescence signals. The researchers reported more than 96% accuracy on 64 clinical samples using this integrated microfluidic system (25). RT-LAMP was also used for automated nucleic acid amplification in a centrifugal microfluidic system (26). After sample preparation, this platform performed a fully automated process from sample-in to answer-out using centrifugal force for nucleic acid separation in a microfluidic disc (26). For the detection of multiple respiratory tract pathogens including SARS-CoV-2, a microfluidic chip-based PCR-array system, Onestart, was developed using magnetic force for nucleic acid purification (27). Onestart was able to complete the sample-in-answer-out process, including lysis of samples, nucleic acid extraction and amplification and result output, in a fully automated manner. The study reported results consistent with real-time PCR, with 100% specificity across 21 different pathogens (27).
Apart from nucleic acid-based detection methods, microfluidic systems have also been employed for serology assays.
Figure 2 (created with BioRender.com): RT-qPCR test workflow comparing both manual and automated nucleic-acid extraction in the Huo-Yan Lab. Automation platforms increase testing capacity whilst simultaneously decreasing processing time. Modified from Liu et al. (22).
A computer-controllable semi-automatic microfluidic device has been developed for SARS-CoV-2 antigen detection (28). The device consisted of 200 microchambers for high-throughput testing and was capable of detecting the whole spike antigen with 95% sensitivity in clinical samples (28). Moreover, an automated microfluidic platform has been developed for anti-SARS-CoV-2 antibody testing (29), as COVID-19 antibody tests can be used to obtain important information about the patient's medical history (30). The platform, named automated ELISA on-chip, was used to detect antibody levels of COVID-19 patients and vaccinated individuals. The photos taken by smartphones were analysed by image processing software and results comparable to the traditional ELISA-on-microplate method were obtained (29). Automated microfluidic technologies have great potential to increase the accessibility of COVID-19 diagnostic tests and to accelerate high-throughput detection processes, especially for POC testing. In addition, the automated microfluidic platforms developed for other pathogens (31) can be readily adapted for COVID-19 diagnosis. In this way, more alternative methods might become available for COVID-19 testing.
Free and open source scientific and medical hardware (FOSH)
With the onset of the outbreak worldwide, many biotech companies that develop cutting-edge technologies and automation platforms adapted their technologies for COVID-19 screening. As an example, Hamilton offered an automated RNA extraction solution with its MagEx STARlet platform and an automated qPCR mix preparation solution with the PCR Prep STARlet (Hamilton Company, USA). Besides these, the Fluent and Freedom EVO platforms of Tecan were adapted to automate RNA extraction and PCR preparation processes (Tecan, Switzerland). Liquid handling platforms such as the CyBio FeliX (Analytik Jena, Germany) and Echo 525 (Labcyte Inc., USA) were shown to be easily integrated into a SARS-CoV-2 automated workflow; nevertheless, the accessibility of all the aforementioned equipment was limited to a few laboratories because of high pricing.
Some other companies, like Hologic (USA) and Roche (Switzerland), offered equally expensive options with the added disadvantage that they predominantly use proprietary and expensive reagents/reagent cartridges, decreasing access and flexibility between different protocols. Because of this, the community-driven "Free and Open-Source Scientific and Medical Hardware" (FOSH) movement rose to action to offer cheaper, reliable, and customisable platforms. FOSH follows the same rules as open-source software, which consist of offering "blueprints" for a specific tool in a manner where every user can study, learn, share, customise and even commercialise a specific tool or protocol for any particular application (32). For laboratory automation, a recent popular example is Opentrons' OT-2 platform. This platform offers an affordable and open-source lab automation system that allows complete user customisation, including the potential for scaled-up molecular diagnosis reactions. Opentrons developed its own population-scale SARS-CoV-2 testing procedure involving three steps operated by OT-2 robots and one RT-qPCR step (Opentrons, USA). In this workflow, sample plating was the first step performed, using an OT-2 to transfer samples from the collection tubes into a 96-well plate. The second step applied an RNA extraction process using a magnetic module. Finally, the RNA isolated from the samples was prepared for an RT-qPCR task, to be completed separately in a different room to avoid cross-contamination. To scale up this workflow to 2,400 samples per day, a set-up with ten OT-2 robots working simultaneously was proposed by the company (Opentrons, USA).
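To illustrate how such a workflow is scripted, the fragment below sketches the first step (sample plating) using the Opentrons Python Protocol API. The labware and pipette choices shown are generic examples picked for illustration and are not the exact configuration used in the Opentrons SARS-CoV-2 workflow.

```python
from opentrons import protocol_api

metadata = {"apiLevel": "2.13", "protocolName": "Sample plating sketch"}

def run(protocol: protocol_api.ProtocolContext):
    # Example labware; the actual deck layout depends on the kit and tubes used
    tips = protocol.load_labware("opentrons_96_tiprack_300ul", 1)
    tubes = protocol.load_labware("opentrons_24_tuberack_nest_1.5ml_snapcap", 2)
    plate = protocol.load_labware("nest_96_wellplate_100ul_pcr_full_skirt", 3)
    p300 = protocol.load_instrument("p300_single_gen2", "right", tip_racks=[tips])

    # Transfer each collection tube into a well of the 96-well plate,
    # changing tips between samples to avoid cross-contamination
    for src, dest in zip(tubes.wells(), plate.wells()[: len(tubes.wells())]):
        p300.transfer(50, src, dest, new_tip="always")
```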
A thriving GitHub community (https://github.com/Opentrons, 01/24/2022) is available to develop and share custom scripts for precise operation of the robot. At the same time, several SARS-CoV-2-related protocols, including RNA extraction, are available at protocols.io (https://www.protocols.io/groups/opentrons-covid19-testing, 01/24/2022). One of the key advantages of using the OT-2 platform is its affordable price compared with other high-throughput automation platforms. It is also highly customisable and easy to operate. However, despite the many positives, OT-2 platforms have some important limitations to consider, including: the lack of control mechanisms to detect clot formation or sample volume; the lack of many specific modules such as de-lidders, incubators, centrifuges and tube capping/de-capping; and a lack of hardware to ensure sample tracking, which is essential for quality standards (33). Nevertheless, many of these functionalities can be added using open-source external instrumentation, but advanced programming skills are required to successfully implement these tools (34).
The Biomedical Diagnostic Centre (CBD) of the Hospital Clinic of Barcelona has recently obtained the green light from the FDA to use Opentrons for fast, high-throughput SARS-CoV-2 screening with a capacity of 2,400 tests per day (the testing speed is 96 samples in 4 h with a re-start time for a new cycle every 70 min) (Opentrons, USA). The workflow designed by researchers at CBD utilised four Opentrons OT-2s, one KingFisher Flex extraction instrument by ThermoFisher and one ABI 7500 qPCR device (33), as shown in the process diagram in Figure 4A. The procedure included the initial setup, sample preparation, plate filling for RNA extraction and qPCR mix preparation. RNA extraction and real-time qPCR were carried out by the KingFisher Flex (Thermo Fisher Scientific, USA) and ABI 7500 qPCR (Applied Biosystems, USA), respectively, while all the other tasks were performed by the OT-2 robots. The European Molecular Genetics Quality Network provided an external quality assessment comparing this system with the Roche Cobas 6800 and Hamilton-Seegene platforms. The results of the external assessment showed consistent Ct values (which are inversely proportional to the amount of target nucleic acid in the samples) between the system developed and the other similar platforms (33).
Mobile and high-throughput testing facilities
The strategy of track and trace has been a very significant part of the SARS-CoV-2 fight (35), leading researchers to develop innovative solutions to increase access to automated molecular screening workflows. As an example, a modular and mobile Biosafety Level 2+ laboratory called CONTAIN was developed for automated molecular testing of SARS-CoV-2, taking advantage of the versatile OT-2 platforms (35). This mobile lab was set up in 40 ft shipping containers, each holding five OT-2 robots and performing RT-qPCR-based diagnostics with a maximum daily testing capacity of 2,400 tests. The CONTAIN lab consisted of three separated stations: Station A for unpackaging and logging of samples, Station B for an RNA extraction step completed by four OT-2 robots, and finally Station C containing one OT-2 and two qPCR devices for the RT-qPCR run. Figure 4B represents the general layout and workflow of a CONTAIN lab. For the RNA extraction process, the open-source Bio-On-Magnetic-Beads (BOMB) protocol, which utilises magnetic beads, was adapted to run on an OT-2. In the CONTAIN lab workflow the initial step of sample plating was done manually, allowing processing of the same number of samples as the solution provided by Opentrons (2,400 per day) in a semi-automated way using five OT-2s instead of the 10 recommended by Opentrons. Compared with the clinical results, CONTAIN values showed a strong correlation (R² = 0.7698) on 30 patient samples, highlighting the effectiveness of mobile automation (35). This innovative approach of a mobile/container laboratory benefiting from an open-source automation platform highlights some important potential: a mobile lab could be shipped between cities or even around the world, allowing rapid deployment in virus hotspots globally. In addition, the containers could be stacked to build larger facilities depending on testing requirements (35).
Figure 4 (created with BioRender.com): Different protocols using the open-source OT-2 automated platform for COVID-19 testing. (A) Automated workflow designed at CBD. An initial run preparation was performed using open-source Python coding. Initial sample setup, sample preparation, plate filling and qPCR mix preparation were performed by OT-2 robots. RNA extraction was processed by KingFisher Flex and RT-qPCR was run by ABI 7500. Analysis results were exported as a user-friendly R file.
Low-cost bio-automation SARS-CoV-2 testing constraints
Cost of the equipment
When it comes to automation, the cost of the platforms is usually the first hurdle to overcome (36), particularly for low-resource settings. Prices of equipment may vary widely between platforms, starting from a few thousand dollars in the cheapest cases up to nearly a quarter of a million dollars for the most advanced alternatives (Synthace, UK).
As mentioned in previous sections, the OT-2 robot by Opentrons is one of the most affordable platforms; however, the initial price is still nearly $6,000 USD (as of 2020), with the most basic option including only one single-channel pipette and one multichannel pipette. Additionally, while the price of the robot alone can be a significant limitation, the basic configuration is not sufficient to completely automate COVID-19 testing. Extra proprietary modules, which are sold separately (i.e., thermal cycler, temperature and magnetic modules, etc.), are necessary for automated COVID-19 testing and increase the overall cost to $15,000 USD, making even this low-cost platform an unaffordable option for low-resource laboratories.
Cost and access of consumables
Consumables and access to them are another important constraint to consider before implementing an automated testing platform. One key issue is that most automation platforms require the user to use the manufacturer's own consumables (34). Pipette tips, tube racks, containers, and reagents are in many cases custom-made for each particular piece of equipment and must be bought directly from the manufacturer at inflated prices, imposing additional costs (taxes, shipping, etc.) to acquire these supplies (34,36). In addition to the cost, another critical concern associated with consumables is the limited access to these supplies caused by global shortages or distribution issues. In some cases, due to the sudden increase in demand, some consumables (proprietary pipette tips, racks, containers or tubes) are out of stock for weeks and the waiting time to receive them can be extremely long (36).
Reagent supply is a significant barrier to adopting automated testing for many laboratories, even those that possess an OT-2 or other affordable platforms. This also applies to the detection of SARS-CoV-2 nucleic acids, as most protocols require highly specialised and expensive reagents. Beyond the economic side, the limited access to specialised reagents and the frequent requirement for cold transport and storage (37) impose additional limitations on performing high-throughput nucleic acid testing, particularly in low-resource settings.
In order to reduce the dependency on expensive and proprietary consumables, and also as an alternative to relieve supply dependency on the manufacturer, efforts are being made towards utilising 3D printing to develop affordable and compatible alternatives. In general, the major equipment (the robot) is designed to use proprietary consumables; nevertheless, some low-cost automation machines are also compatible (or can be adapted with minor modifications) with consumables that are regularly used in many biomedical laboratories. This is particularly true for plastic tips, racks and 3D-printed containers (38). Additionally, there is already an Opentrons GitHub community (https://github.com/Opentrons, February 2021) that has come up with 3D printing design ideas for different necessities. These communities usually offer their designs for free so that users can try them and make modifications as they see fit.
Software for automation, data analysis and user interface (UI)
Another important factor when adopting an automated testing platform is the general requirement of at least basic programming skills to operate low-cost robots, or intermediate programming skills to perform more complex tasks (34). The majority of "wet lab" researchers use computers as data analysis tools, with analyses often performed in specialised software that does not require any programming skills. While the established automation platforms mentioned before include a robust and easy-to-use user interface to perform basic and advanced operations, most of the low-cost automation platforms generally only include a relatively simple user interface for designing and performing basic operations. Generally, an application program interface (API) is included in Python, allowing the user to code detailed instructions for the liquid-handling robot to perform more complex tasks. It is therefore important to remark that, for advanced customisation and flexibility on the low-cost automation platforms, programming skills are absolutely necessary to take advantage of the full potential of automation and data analysis. Researchers interested in automation will indeed need an "amphibious" set of skills consisting of both "wet" and "dry" biology and programming skills, respectively, to work effectively with automation (39).
Nasopharyngeal swabs sampling bottleneck
Ramping up testing for effective SARS-CoV-2 surveillance has faced several barriers. One of these is the reliance on nasopharyngeal (NP) swabs, as sampling with swabs can be uncomfortable for people, discouraging them from getting tested frequently. The post-processing of NP swabs is also difficult to automate (40). NP swabs also have to be collected by a trained individual, adding a logistical barrier and putting countries with less logistical support at high risk of getting their testing staff infected (40,41). In addition, when SARS-CoV-2 spread worldwide at a rapid pace, it was reported that countries suffered intense strain on the healthcare consumable supply chain (i.e., swabs) (42,43). An alternative proposal to alleviate the scarcity of swabs is the application of 3D printing technology to produce them (43,44).
Saliva sampling has emerged as a more suitable option for low-resource and remote settings. Saliva sampling is a simple approach with the potential to drive down costs while relieving pressure on the consumable supply chain, and it promises to facilitate more effective testing due to the safe and non-invasive nature of its collection. It is also highly compatible with an automated approach and, finally, saliva samples contain a high viral load (45). Recognising these benefits, the FDA approved a saliva collection and preservation device for downstream COVID-19 testing (45). Direct comparison of saliva with nasopharyngeal (NP) swabs from the same individuals revealed that saliva samples could provide similarly consistent and sensitive results for COVID-19 detection (Yale School of Public Health, https://publichealth.yale.edu/salivadirect/, April 2021).
In order to quickly inactivate/lyse virions while also protecting their RNA from endogenous RNases in the saliva sample, a simple protocol was implemented utilising a shelf-stable reducing agent, tris(2-carboxyethyl)phosphine (TCEP), combined with the divalent cation chelator ethylenediaminetetraacetic acid (EDTA) and a brief period of heat (95°C) (46). This protocol is called HUDSON (heating unextracted diagnostic samples to obliterate nucleases) and, as the RNA extraction and amplification step, is compatible with most of the technologies described in Figure 1. Saliva was collected in a viral transport media tube and transported to the laboratory for analysis, where a simple RNA extraction was performed that did not require expensive RNA extraction kits. A solution of TCEP (100 mM) and EDTA (1 mM) was added to the sample. A mild heat treatment consisting of 50°C for 5 min and 64°C for 5 min was employed. Diluted TCEP/EDTA is compatible with LAMP (45) and hence it could also be employed with RT-RPA.
It is important to mention that, from personal preliminary work, it has been found that the re-detection rate based on NP swabs can be lower than that based on saliva. This can be explained by the fact that qPCR performed on saliva samples often leads to inaccurate results as a consequence of failures during saliva sampling, which creates a need for double-checking upon re-detection. In addition, different saliva samples can have different viscosities. In order to ensure consistent, automated pipetting of saliva, it may be necessary to dilute some samples, thus influencing the test and decreasing both the speed of the aliquoting step and the sensitivity of the analysis. Therefore, saliva viscosity requires further investigation to ensure consistent, automated diagnostic precision.
The role of high-throughput testing in the detection of new variants
Since the beginning of the pandemic, the diagnostic strategies used by different governments have varied over time. According to Mercer et al. (47), there are normally 5 phases of testing during a pandemic: zoonotic transmission, global spread, outbreak, community transmission and regional/seasonal outbreak. During the last 3 phases of the COVID-19 pandemic, population-scale testing was carried out by RT-qPCR using automated facilities and equipment like the ones described in the previous sections, with the addition that a proportion of positive samples were routinely subjected to whole-genome sequencing for surveillance of existing and new variants (47).
The Rosalind Franklin Laboratory in Royal Leamington Spa is an example of a massive testing facility used for the detection of new variants. This facility has a processing capacity of 400,000 PCR tests a week and couples this with genomic sequencing capabilities for the detection of new variants of concern (VOCs) (48). It makes use of an ultra-high-throughput PCR Nexar workflow which enables up to 150,000 tests per day per system, making it the highest PCR testing capacity per system worldwide (49).
Another important example is found in the UK Lighthouse Labs Network at Alderley Park, where 8 million samples were analysed by RT-qPCR in only 10 months with a capacity of 80,000 samples per day (11). For this testing strategy, 3 viral regions of the SARS-CoV-2 virus were targeted: N, S and Orf1ab (11). To allow massive testing, nucleic acid extraction was performed on a KingFisher Flex extraction platform (Thermo Fisher Scientific, USA) while amplification was carried out on a QuantStudio 7 Flex system (Thermo Fisher Scientific, USA). Positive results were tracked using the postal district of sample origin, which made it possible to track how the mutation was spreading through the UK. When testing started in April 2020, the 3 regions examined by PCR would all show a positive result. The consistency of the results within the same sample indicated that the same viral strain was present among all samples tested. However, by September of the same year there was an increase in samples testing negative for the S region but still positive for the N and Orf1ab regions. The change from positive to negative in one of the regions of the virus with respect to the initial samples suggested a mutation of the virus. Whole-genome sequencing confirmed a new lineage (B.1.1.7, better known as Alpha), which was designated a Variant of Concern (VOC). As larger-scale automated testing continued, by January 2021, 70% of daily samples corresponded to this variant, and the number increased to 98% by February (11).
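The S-gene dropout pattern described above lends itself to simple automated flagging. The sketch below illustrates the idea with pandas; the column names (s_gene, n_gene, orf1ab, postal_district) are hypothetical stand-ins for the real laboratory data fields rather than the Lighthouse Labs' actual schema.

```python
import pandas as pd

def flag_s_gene_dropout(results: pd.DataFrame) -> pd.DataFrame:
    """Flag samples that are positive for N and Orf1ab but negative for S
    (the pattern used to track the B.1.1.7/Alpha variant), then summarise
    the proportion of flagged samples per postal district."""
    results = results.copy()
    results["s_dropout"] = (
        (results["n_gene"] == "positive")
        & (results["orf1ab"] == "positive")
        & (results["s_gene"] == "negative")
    )
    return (
        results.groupby("postal_district")["s_dropout"]
        .mean()
        .rename("proportion_s_dropout")
        .reset_index()
    )
```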
Finally, it is very relevant to mention that, apart from the UK Lighthouse Labs Network, dedicated to COVID-19 testing for NHS Test and Trace, the Coronavirus Disease 2019 (COVID-19) Genomics UK Consortium (COG-UK) has enabled an important genomic epidemiology database by developing high-throughput sequencing and analysis workflows for thousands of SARS-CoV-2 genomic sequences (50). COG-UK has created a web resource that allows the analysis of viral mutations and variants in the UK. The repository contains millions of sequences that enable in-silico surveillance of new variants (51).
Integration of NGS into automated COVID-19 diagnostics
As sequencing technologies improve and reduce in cost, they have become stronger candidates as an alternative for mass clinical testing. Next Generation Sequencing (NGS) is one of these technologies, with a variety of potential applications including metagenomic NGS (mNGS), which allows an unbiased approach to the detection of pathogens. A great advantage of mNGS is unbiased sampling, which enables the detection of every species in the sample, leading to the identification of unexpected and even unknown pathogens (52). This technology was crucial for the identification and characterisation of the SARS-CoV-2 genome (47,50). Automation has also been used to further increase the capacity of such NGS workflows, where different automation devices have been used in tandem to diagnose clinical microbiological samples.
A recent strategy published by researchers from the United States gives a perfect example of how NGS can be coupled with a high-throughput workflow for the identification of potential diagnostic and therapeutic genes for SARS-CoV-2 (50). The workflow consists of 4 steps, described below.
1. Extraction of viral RNA, cDNA preparation and amplification by PCR. All of these steps made use of Agilent's Bravo robot (USA).
2. PCR products were purified using BlueCatBio's (Germany) BlueWasher and pooled into a single library using Hamilton's (USA) STARlet. Amplicons were separated by size using Sage Science's (USA) BluePippin.
3. The library was sequenced using Illumina's NovaSeq 6000.
4. Bioinformatic analysis using an in-house developed bioinformatic pipeline.
It is important to mention that this is not the first method reported for the purpose of identifying genes of interest in SARS-CoV-2, but it is the first that overcomes the limitations that often restrict the scalability of such workflows, by integrating various liquid handling robots (50).
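Step 4 of the workflow relies on an in-house pipeline that is not described in detail in the source; purely as an illustration of the kind of operation such a pipeline might perform, the sketch below counts reads overlapping example amplicon regions of the SARS-CoV-2 reference using pysam. The reference name and coordinates are placeholders and do not reproduce the authors' pipeline.

```python
import pysam

def amplicon_read_counts(bam_path: str, amplicons: dict[str, tuple[int, int]],
                         reference: str = "MN908947.3") -> dict[str, int]:
    """Count reads overlapping each (start, stop) interval on the given
    SARS-CoV-2 reference; the intervals are illustrative placeholders."""
    counts = {}
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for name, (start, stop) in amplicons.items():
            counts[name] = bam.count(contig=reference, start=start, stop=stop)
    return counts

# Example usage with placeholder coordinates roughly spanning the S and N genes:
# counts = amplicon_read_counts("sample.bam", {"S": (21563, 25384), "N": (28274, 29533)})
```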
Conclusions
Automation is now an inseparable part of modern society, as it eases many processes encountered in everyday life. The studies mentioned in this review have highlighted the impact that automation of molecular diagnosis processes has had on increasing the capacity for COVID-19 testing using RT-qPCR. Alternative methods requiring less complex equipment, based on CRISPR technologies, have been shown to be easily automated, although further validation is needed before they are fully implemented for mass testing. Thus, it is apparent that partnerships must be built between companies and academia working in a variety of fields to develop more powerful solutions for automated diagnosis workflows for infectious diseases.
Some of the most important challenges to be addressed in this field include the capital cost of equipment and the cost and accessibility of healthcare consumables. Open-source approaches are democratising access to automation; however, there are still many bottlenecks to overcome, including the advanced computing skills required to operate such automation platforms. Nasopharyngeal swabbing is another obstacle to full automation, as at present this process requires significant human involvement. Saliva sampling seems to be a solution to this bottleneck, as it requires fewer consumables and is more practical to automate; however, more studies are needed to validate automated saliva sampling, especially using low-cost automation.
Among all the protocols and equipment discussed, the selection of a particular one may vary depending on the capabilities and requirements of each laboratory. Sample size, budget, expertise of the personnel and availability of equipment are factors that should be considered when choosing the correct workflow for high-throughput diagnostics. RT-PCR and RT-LAMP remain the gold standard due to their accuracy, availability of reagents and well-proven efficiency. Nevertheless, CRISPR-based technologies have huge potential for lower-cost automation, and their versatility gives them great potential for use in marginalised communities as point-of-care testing technologies.
The use of automation is definitely becoming the norm for high-throughput diagnostics, and low-cost open-source automation has reached the right level of maturity to accelerate and democratise access to such tools for a wider audience.
Author contributions
NJB, KM, and LRS contributed to the conception and design of the study. KM wrote the first draft of the manuscript, while NJB edited this version, added a significant number of new sections and edited the final version. NJB and KM contributed equally to this work and both should be credited as first authors. All authors wrote specific sections of the manuscript depending on their expertise. All authors contributed to the article and approved the submitted version.
Funding
This study was supported by the University of Edinburgh SFC-GCRF Covid-19 Fund, the British Council (Grant Number: 527429894), and the Engineering and Physical Sciences Research Council (Grant reference: 2260830). Funding from the National Council of Science and Technology is also acknowledged for NJB as well as the YLSY program of the Ministry of National Education of Turkey for KM.
Effects of Perinuclear Chromosome Tethers in the Telomeric URA3/5FOA System Reflect Changes to Gene Silencing and not Nucleotide Metabolism
Telomeres are repetitive DNA sequences that protect the ends of linear chromosomes. Telomeres also recruit histone deacetylase complexes that can then spread along chromosome arms and repress the expression of subtelomeric genes in a process known as telomere position effect (TPE). In the budding yeast Saccharomyces cerevisiae, association of telomeres with the nuclear envelope is thought to promote TPE by increasing the local concentration of histone deacetylase complexes at chromosome ends. Importantly, our understanding of TPE stems primarily from studies that employed marker genes inserted within yeast subtelomeres. In particular, the prototrophic marker URA3 is commonly used to assay TPE by negative selection on media supplemented with 5-fluoro-orotic acid (5FOA). Recent findings suggested that decreased growth on 5FOA-containing media may not always indicate increased expression of a telomeric URA3 reporter, but can rather reflect an increase in ribonucleotide reductase (RNR) function and nucleotide metabolism. Thus, we set out to test if the 5FOA sensitivity of subtelomeric URA3-harboring cells in which we deleted various factors implicated in perinuclear telomere tethering reflects changes to TPE and/or RNR. We report that RNR inhibition restores 5FOA resistance to cells lacking RNR regulatory factors but not any of the major telomere tethering and silencing factors, including Sir2, cohibin, Mps3, Heh1, and Esc1. In addition, we find that the disruption of tethering pathways in which these factors participate increases the level of URA3 transcripts originating from the telomeric reporter gene and abrogates silencing of subtelomeric HIS3 reporter genes without altering RNR gene expression. Thus, increased 5FOA sensitivity of telomeric URA3-harboring cells deficient in telomere tethers reflects the dysregulation of TPE but not RNR. This is key to understanding relationships between telomere positioning, chromatin silencing, and lifespan.
INTRODUCTION
Telomeres, which are repetitive DNA sequences at the ends of linear chromosomes, maintain genome stability and modulate gene expression. In the budding yeast Saccharomyces cerevisiae, telomeres mediate the recruitment of the Silent Information Regulator (SIR) complex, which is composed of the histone deacetylase Sir2 as well as the adapter proteins Sir3 and Sir4 (Longtine et al., 1989;Moretti et al., 1994;Buck and Shore, 1995;Imai et al., 2000;Moazed, 2001b). Sir2-dependent deacetylation of histone tails on nearby subtelomeric nucleosomes promotes the recruitment of additional SIR complexes. Iterative cycles of histone deacetylation and SIR recruitment promote the spreading of compact silent chromatin structures limiting access to RNA polymerase II and silencing genes within subtelomeric regions along chromosome arms (Gottschling et al., 1990;Moazed, 2001a). This reversible and heritable gene silencing process is known as telomere position effect (TPE) or telomeric silencing (Gottschling et al., 1990;Moazed, 2001a). The histone acetyltransferase Sas2 opposes the indefinite spreading of SIR complexes to more internal locations along the chromosome resulting in a gradient of telomeric silencing in which TPE is strongest right next to telomeres and gradually weakens as the distance to telomeres increases (Suka et al., 2002). Disruption of Sir2 or Sir3 significantly decreases replicative lifespan, which is the number of times a mother cell buds to generate a daughter cell before reaching senescence (Kaeberlein et al., 1999). Thus, telomeric maintenance and regulation of TPE by the SIR complex is crucial for the maintenance of replicative lifespan.
While it has only been recently recognized in mammals, TPE has been extensively studied in S. cerevisiae particularly through the use of reporter genes, such as URA3, inserted within subtelomeric chromosomal regions (Gottschling et al., 1990;Tennen et al., 2011). Counter selection of URA3 expression on media supplemented with 5-fluoro-orotic acid (5FOA) constitutes a highly sensitive assay with a wide dynamic range for the assessment of changes to gene expression (Boeke et al., 1984;Gottschling et al., 1990). For example, wild-type cells, but not SIR-deficient cells, harboring a URA3 reporter positioned proximal to the left arm telomere of chromosome VII (URA3-TELVII-L) can grow on 5FOA-containing media (Gottschling et al., 1990;Aparicio et al., 1991;Moazed, 2001a).
In S. cerevisiae, telomeres are clustered into 4-8 foci at the inner nuclear membrane (INM) and it is thought that perinuclear telomere anchoring and clustering maintains a high local concentration of SIR complexes to ensure efficient telomeric silencing (Maillet et al., 1996;Mekhail and Moazed, 2010;Chan et al., 2011). During the S phase of the cell cycle, telomere anchoring to the INM relies primarily on interactions between Sir4 and two major pathways, one implicating a protein called Esc1 (Establishes Silent Chromatin 1) and the other involving the SUN (Sad1-UNC-84) domain-containing protein Mps3 (MonoPolar Spindle 3; Andrulis et al., 2002;Bupp et al., 2007). Interestingly, Mps3 is itself implicated in at least two different perinuclear telomere anchoring processes, one implicating the enzyme telomerase and the other the Cohesin-related V-shaped cohibin complex, which is composed of Lrs4 and Csm1 (Antoniacci et al., 2007;Schober et al., 2009;Brito et al., 2010;Corbett et al., 2010;Wong, 2010;Chan et al., 2011). Specifically, cohibin is thought to link Sir4-bound telomeres to each other as well as to Mps3 and the LEM (Lap2β-Emerin-Man1) domain-containing INM protein Heh1 (Chan et al., 2011). While the deletion of SIR and cohibin proteins severely abrogates perinuclear telomere clustering and silencing, disruption of Esc1, and especially Mps3 or Heh1 leads to relatively mild phenotypes (Andrulis et al., 2002;Bupp et al., 2007;Grund et al., 2008;Schober et al., 2009;Corbett et al., 2010;Chan et al., 2011). Importantly, determining the relative impact of the various known telomere tethering/clustering factors in telomeric silencing assays, including those employing URA3-TELVII-L reporters, has been instrumental in identifying the above described contributions of these various factors within the perinuclear molecular networks regulating chromosome ends (Aparicio et al., 1991;Andrulis et al., 2002;Bupp et al., 2007;Grund et al., 2008;Chan et al., 2011).
However, recent findings suggest that increased 5FOA sensitivity in telomeric URA3-based assays may not always reflect disruptions to TPE but can rather reflect changes in nucleotide metabolism (Rossmann et al., 2011). In particular, deletions or mutations that alter the levels of ribonucleotide reductase (RNR), a complex that generates deoxyribonucleoside triphosphates needed for DNA synthesis, can increase 5FOA sensitivity even when TPE is unaffected (Rossmann et al., 2011).
Confidence in published data implicating the various known perinuclear telomere tethering factors in TPE remains high because most studies have assessed TPE via several approaches including the examination of endogenous subtelomeric gene expression as well as silent chromatin histone marks. However, it is unclear if the 5FOA sensitivities of cells lacking various telomere tethering/clustering factors in telomeric URA3/5FOA-based assays accurately reflect changes to TPE or rather possibly represent a combinatorial effect of changes to both TPE as well as nucleotide metabolism. Therefore, we set out to test these possibilities.
RESULTS
Silent Information Regulator and cohibin complexes are required for the establishment of endogenous silent chromatin marks and the silencing of endogenous subtelomeric genes (Aparicio et al., 1991; Chan et al., 2011). In addition, both SIR and cohibin proteins are also thought to be required for the silencing of the exogenous reporter genes URA3 and ADE2 inserted within subtelomeric regions (Gottschling et al., 1990; Chan et al., 2011). However, some mutations can hyper-activate RNR function and lead to a false loss-of-silencing in assays relying on the telomeric URA3 reporter for the assessment of TPE on 5FOA (Rossmann et al., 2011). In addition, the expression of URA3 and ADE2 genes may be linked in some mutants via purine-pyrimidine cross-regulation (Rossmann et al., 2011). Therefore, we first sought to monitor TPE via the use of the HIS3 reporter gene, which is another prototrophic marker whose expression can be assessed in sensitive genetic assays without relying on 5FOA (Figure 1A; Rossmann et al., 2011). Loss of HIS3 silencing can be positively selected for on media containing 3-amino-1,2,4-triazole (3AT), which is a competitive inhibitor of the HIS3 gene product (Brennan and Struhl, 1980). Wild-type and other cells were grown on non-selective media, on media lacking histidine, or on media lacking histidine supplemented with increasing amounts of 3AT. Importantly, sir3∆, lrs4∆, and csm1∆ cells grew much more efficiently than wild-type cells on 3AT-containing media (Figure 1B). In addition, the difference in growth phenotypes of sir3∆, lrs4∆, or csm1∆ relative to wild-type cells steadily increased in a 3AT dose-dependent fashion (Figure 1B). These results are consistent with previously published data revealing that cohibin proteins are required for the silencing of several endogenous subtelomeric genes as well as the SIR-dependent deacetylation of endogenous histones at chromosome ends (Chan et al., 2011). Thus, collectively, these findings demonstrate that SIR and cohibin are required for the silencing of the HIS3-TELVII-L reporter gene and suggest that results obtained via the use of URA3 or other exogenous reporter genes may indeed reflect changes to TPE and not RNR function, although this remained to be directly tested.
Thus, we next set out to test if RNR inhibition affects the 5FOA sensitivity of URA3-TELVII-L cells lacking the major telomere silencing protein Sir2 (Figure 2A; Aparicio et al., 1991; Palladino et al., 1993). RNR inhibition was able to rescue the growth of Pol30 or Dot1 mutants, which were originally thought to be involved in TPE maintenance based on the telomeric URA3 assay, but it was later discovered that these mutants did not have a general telomere silencing defect (Rossmann et al., 2011; Takahashi et al., 2011). In particular, pharmacological inhibition of RNR function via the addition of sublethal concentrations of hydroxyurea (HU) was able to restore 5FOA resistance to pol30-8 cells (Rossmann et al., 2011). In addition, Pol30 physically interacts with Chromatin Assembly Factor-1 (CAF-1; consisting of Cac1, Cac2, and Cac3), which is a histone chaperone complex (Moggs et al., 2000). Disruption of CAF-1 also increases RNR and hyper-sensitizes cells to 5FOA, leading to a false loss-of-silencing in telomeric URA3/5FOA silencing assays (Rossmann et al., 2011). Consistent with these findings, 5FOA resistance can be restored to cac1∆ cells via the use of HU (Figure 2B; Rossmann et al., 2011). In contrast, we found that the 5FOA sensitivity of sir2∆ cells was unaltered by HU treatments (Figure 2B). Cohibin is thought to promote telomeric silencing at least in part by promoting perinuclear telomere clustering, thereby increasing the local concentration of Sir2 at chromosome ends (Chan et al., 2011). This notion is based in part on ChIP data revealing that although SIR proteins were required to recruit cohibin to telomeres, loss of cohibin in turn reduced Sir2 concentrations at telomeres (Chan et al., 2011). These data point to a putative model for the generation of telomere clusters, where low amounts of SIR proteins bound at telomeres first recruit cohibin complexes. Cohibin complexes then start to cluster telomeres, thereby increasing the local concentration of SIR proteins, which in turn recruits more cohibin complexes, and the cycle continues until perinuclear telomere clustering is complete. Consistent with the notion that cohibin acts through SIR to maintain telomere silencing, the 5FOA sensitivity of cells lacking the cohibin subunits Lrs4 or Csm1, similar to sir2∆ cells, was unaffected by HU treatment (Figure 2C). The growth rates of wild-type cells as well as cells lacking Cac1, Sir2, or cohibin proteins on HU-containing but 5FOA-free media were similar, indicating that the concentrations of HU used are not indiscriminately affecting overall cellular growth, as expected (Figures 2B,C; Rossmann et al., 2011).
Together, our results suggest that the 5FOA sensitivity of telomeric URA3-harboring cells that lack SIR or cohibin proteins reflects changes to TPE and is not due to hyperactivation of RNR. It was previously shown that the treatment of some mutants with 5FOA can increase RNR gene expression (Rossmann et al., 2011). In particular, Pol30 mutants exhibited about a threefold increase in RNR4 transcript levels after a 4 h 5FOA treatment, and it is thought that this increase, coupled to a mild increase in URA3 expression in these mutants, induces 5FOA sensitivity in Pol30-deficient cells (Rossmann et al., 2011). We found that 5FOA treatment increases RNR4 transcript levels in dot1∆ and cac1∆ cells, but not wild-type, lrs4∆, or heh1∆ cells (Figure 3A). In addition, HU treatment abolished the 5FOA-induced increase in RNR4 expression typically observed in dot1∆ and cac1∆ cells (Figure 3B). CDC21 encodes the enzyme thymidylate synthase, which catalyzes the conversion of dUMP to dTMP within the RNR pathway (Figure 3C; Rossmann et al., 2011). In fact, 5FOA-induced changes to RNR gene expression repress Cdc21 and consequently dTMP generation, causing a disruption of nucleotide metabolism. Consistent with this, we found that CDC21 overexpression, similar to the RNR-inhibiting HU treatments discussed above, was able to rescue the growth of dot1∆ or cac1∆ cells, but not lrs4∆ cells, on 5FOA-containing media (Figure 3C; Rossmann et al., 2011). In addition, qRT-PCR analysis revealed that the expression of URA3 at TELVII-L was indeed increased in lrs4∆ and sir3∆ cells (Figure 3D; Chan et al., 2011). Taken together, these results indicate that the 5FOA sensitivity of cells deficient in cohibin-dependent telomere tethering, but not dot1∆ or cac1∆ cells, does indeed reflect changes to TPE, but not nucleotide metabolism. In addition, these findings indicate that RNR-inhibiting HU treatments can be used to evaluate the TPE-dependent/RNR-independent contribution of various telomeric factors, such as cohibin complexes, to 5FOA resistance in telomeric URA3 reporter gene assays.
Cohibin complexes are thought to cooperate with both of the INM proteins Heh1 and Mps3 to mediate perinuclear telomere tethering and silencing ( Figure 4A; Bupp et al., 2007;Grund et al., 2008;Chan et al., 2011). Consistent with this notion, 5FOA sensitivity in telomeric URA3 assays is relatively weak for mps3∆75-150 cells (Mps3 full length deletion is lethal) and negligible for heh1∆ cells, but the 5FOA sensitivity of mps3∆75-150 heh1∆ cells is relatively higher and is closer to the sensitivity of lrs4∆ cells ( Figure 4B). In addition, the sensitivity of lrs4∆ cells is similar to that of mps3∆75-150 lrs4∆ cells indicating that Mps3 may operate at least in part through cohibin to ensure subtelomeric silencing ( Figure 4B). Importantly, deletion of LRS4 or HEH1 did not change mps3∆75-150 protein levels ( Figure 4C). These results support a model in which Heh1 and Mps3 act through cohibin to ensure telomeric silencing (Chan et al., 2011). Thus, we asked if these 5FOA sensitivity phenotypes would be altered by RNR inhibition. Importantly, cac1∆-rescuing/RNR-inhibiting sublethal concentrations of HU were unable to restore 5FOA resistance to mps3∆75-150, mps3∆75-150 heh1∆, or mps3∆75-150 lrs4∆ cells (Figure 4B). These results indicate that the differing sensitivities to 5FOA observed for these various genotypes indicate changes to TPE and not nucleotide metabolism.
Esc1 is another major factor that is thought to operate at least partly independently of cohibin to ensure perinuclear telomere tethering and silencing ( Figure 5A; Andrulis et al., 2002;Chan et al., 2011). Consistent with this notion, although lrs4∆ cells are more sensitive to 5FOA than esc1∆ cells, the deletion of ESC1 abolishes the residual low levels of 5FOA resistance typically observed in lrs4∆ cells indicating that cohibin and Esc1 can operate in parallel to promote telomeric silencing ( Figure 5B; Chan et al., 2011). Thus, we tested if the 5FOA sensitivity profiles linked to these genotypes are affected by changes to RNR. Importantly, cac1∆-rescuing/RNR-inhibiting sublethal concentrations of HU did not alter the 5FOA sensitivity profiles of esc1∆, esc1∆ heh1∆, or esc1∆ lrs4∆ cells (Figure 5B). These results confirm that Esc1 exerts a relatively small yet at least partly independent contribution relative to cohibin within the processes promoting telomeric silencing. Overall, our findings indicate that data obtained using the telomeric URA3/5FOA system indicate that Esc1, SIR, as well as cohibin complexes cooperating with the nuclear envelope proteins Mps3 and Heh1, are part of a perinuclear protein network that ensures TPE and controls gene expression patterns within subtelomeric regions independently of confounding RNR-related effects.
DISCUSSION
Our data reveal that the 5FOA sensitivity phenotypes observed for cells lacking one or combinations of telomere tethering/silencing factors, including SIR and cohibin complexes, Esc1, Mps3, and Heh1, are unaltered by RNR inhibition and that 5FOA treatment of these cells does not lead to increased RNR expression (Figure 6). Our results show that the observed phenotypes reflect changes to TPE and to the expression of the exogenous URA3 reporter gene. In addition, we find that major telomere silencing factors such as SIR and cohibin proteins are required for the silencing of a subtelomeric HIS3 reporter gene. Furthermore, our previous study showed that cells deficient in SIR or cohibin proteins had increased expression of an ADE2 or URA3 reporter gene inserted next to telomere V-R, indicating that the disruption of TPE is not specific to telomere VII-L (Chan et al., 2011). Moreover, similar results were obtained when the expression of endogenous subtelomeric genes located on various chromosomes was assessed (Chan et al., 2011). Thus, TPE is essentially abolished in cells lacking SIR proteins and is significantly weakened in cohibin-deficient cells. Together with previous studies, our findings indicate that individual INM proteins play a lesser role in ensuring TPE but additive effects are observed. Specifically, Mps3 and Heh1 seem to be operating at least partly through cohibin while Esc1 can operate at least partly independent of cohibin.
Our results are consistent with a model in which major telomere tethering and silencing factors, including SIR and cohibin complexes, Esc1, Mps3, and Heh1, play key roles within a perinuclear protein network ensuring telomere tethering and silencing (Figure 5A). The positioning of the various factors within this network is based on results from protein-protein/DNA interactions, ChIPs examining silent chromatin marks, the expression of endogenous subtelomeric genes, and telomere positioning assays.

FIGURE 6 | Perinuclear telomere tethers impact the telomeric URA3/5FOA reporter system as a result of changes to TPE and not nucleotide metabolism. Unlike dot1∆ or cac1∆ cells, RNR expression is unaffected in cells lacking various perinuclear telomere tethering factors including Sir2, cohibin, Esc1, Mps3, and Heh1. In addition, the 5FOA sensitivity of URA3-TELVII-L-harboring cells that lack these telomere-associated factors, but not Dot1/Cac1, is not restored upon RNR inhibition, which can be achieved via defined HU treatments or CDC21 overexpression. Thus, the effects of major telomere tethering factors in telomeric URA3/5FOA reporter systems assaying for TPE reflect changes to chromatin assembly and gene expression but not nucleotide metabolism.
Given that the pharmacological inhibition of RNR was unable to rescue the 5FOA sensitivity of cells lacking various telomere tethering and silencing factors, any possible effect that these factors may still have on nucleotide metabolism would be expected to be minor or insufficient to hyper-sensitize the cells enough to alter their growth on 5FOA. Indeed, we show that RNR4 transcription is unchanged in cohibin-deficient cells upon treatment with 5FOA, consistent with the notion that the 5FOA sensitivity of cells lacking cohibin proteins is due to disruption of TPE. Thus, while we find that telomeric URA3/5FOA TPE assays incorporating pharmacological or genetic RNR inhibitors provide a very useful tool to sensitively dissect networks controlling TPE, we still are of the opinion that the latter should also be evaluated via the examination of endogenous subtelomeric genes and histone marks, as we previously reported (Chan et al., 2011). Genome-wide or multiple chromosome analyses of subtelomeric gene expression are important given that certain mutants may display a disruption of TPE that is telomere-specific (Takahashi et al., 2011). In particular, a previous study found that loss of telomeric silencing in dot1∆ cells was only observed at a handful of telomeres (Takahashi et al., 2011). Interestingly, the gene that showed the most severe loss-of-silencing in the absence of Dot1 was located on TELVII-L near the site of the URA3 reporter gene that is commonly used to assess TPE in the URA3/5FOA silencing assay (Takahashi et al., 2011). This study demonstrates how mutants identified using the URA3/5FOA system may not display a general telomere silencing defect and exposes another weakness, in conjunction with RNR upregulation, in relying solely on the URA3/5FOA system to identify proteins required for telomeric silencing.
Furthermore, overall cellular effects such as changes to replicative lifespan should also be examined in telomeric silencing mutants, given that the disruption of TPE and telomere maintenance leads to a decrease in replicative lifespan (Kaeberlein et al., 1999). Consistent with this notion, we previously showed that cohibin-deficient cells, which have lower concentrations of Sir2 at telomeres and display a strong loss of TPE across the genome, have decreased replicative life spans similar to that of Sir2 deficient cells (Chan et al., 2011). Increasing local telomeric Sir2 concentrations not only rescued telomeric silencing in lrs4∆ cells, but also rescued replicative lifespan defects linked to the disruption of TPE (Chan et al., 2011). This highlights the importance of regulating processes that affect telomeric SIR complex recruitment and consequently telomeric silencing in order to maintain replicative lifespan.
All in all, our work indicates that the effects of major telomere tethering and silencing factors on the 5FOA sensitivity of telomeric URA3 reporter genes do reflect changes to TPE and not nucleotide metabolism. By providing key missing pieces of the puzzle of telomere regulation, this work highlights the important relationship between spatial genome organization, gene expression control, and cellular lifespan.
STRAINS AND MATERIALS
Endogenous genes were deleted by PCR (Mekhail et al., 2008). The mps3∆75-150 mutant was generated by cloning the mps3∆75-150 transcript into pRS314 at the C-terminus of TRP using PstI and SalI restriction enzymes. Positive clones were confirmed by plasmid digestion and standard DNA sequencing. Yeast strains are listed in Table 1. Plasmids are listed in Table 2. Primers used in this study are listed in Table 3. Plasmids pKM133 and pKM135 were a kind gift from B. Stillman (Rossmann et al., 2011). The anti-Mps3/Nep98 antibody was a kind gift from S. Nishikawa and T. Endo (Nishikawa et al., 2003).
WHOLE CELL PROTEIN PREPARATION
Whole cell lysates were prepared as previously described (Chan et al., 2011). Briefly, cells (OD 600 ≈ 1.0) were subjected to bead beating with an equal volume of silica beads and lysis buffer [50 mM HEPES-KOH pH7.5, 150 mM NaCl, 10% glycerol, 0.5% NP-40, 1 mM EDTA, complete tablet protease inhibitor (Roche), and 1 mM PMSF] for 2 × 30 s with an intermittent 2 min incubation on ice. Lysates were clarified by two consecutive rounds of centrifugation at 16,000 rcf for 5 and 15 min. Samples were sheared through a 26G1/2 needle and boiled at 95˚C for 5 min prior to SDS-PAGE.
LIQUID 5FOA AND/OR HU TREATMENTS
Treatments of cells were conducted as previously described (Rossmann et al., 2011). Cells were cultured overnight in SC medium containing 20 mg/l uracil, then diluted 1:50 and grown to log phase (OD600 ≈ 1.0). At this point, 20 ml of culture was taken for RNA extraction, while the remainder of the culture was split and treated with a 100X 5FOA solution to a final concentration of 1 g/l or with the equivalent amount of DMSO. For HU rescue experiments, HU was added to the corresponding cell cultures to a final concentration of 10 mM.
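The additions above are standard C1V1 = C2V2 dilutions. The following is a minimal sketch of that arithmetic, assuming a 100 g/l (100×) 5FOA stock and a 1 M HU stock; beyond the 100× designation, these stock concentrations are assumptions made for illustration rather than values stated in the protocol.

```python
# Minimal sketch of the dilution arithmetic behind the treatments above
# (C1 * V1 = C2 * V2). Assumed stocks: a 100x 5FOA stock (100 g/l, giving
# 1 g/l final at 1:100) and a 1 M HU stock (for a 10 mM final concentration).

def stock_volume_to_add(stock_conc, final_conc, culture_volume):
    """Volume of stock to add so that the culture reaches final_conc.

    stock_conc and final_conc must share the same units; the result is in the
    same units as culture_volume. The small volume contributed by the stock
    itself is ignored here.
    """
    return culture_volume * final_conc / stock_conc


culture_ml = 20.0
print("5FOA stock (ml):", stock_volume_to_add(100.0, 1.0, culture_ml))   # g/l units
print("HU stock (ml):", stock_volume_to_add(1000.0, 10.0, culture_ml))   # mM units
```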
RNA EXTRACTION
Total RNA was prepared from logarithmically growing cells (OD600 ≈ 1.0) via hot phenol extraction. Cells were centrifuged and resuspended in 400 µl of AE buffer (50 mM NaOAc pH 5.3 and 10 mM EDTA in 0.1% DEPC). Forty microliters of 10% SDS and 440 µl of acidic phenol (pH 4.5) were added to each sample and incubated at 65˚C for 5 min. The samples were rapidly chilled in a dry ice/EtOH bath until phenol crystals appeared. The samples were then centrifuged for 2 min at maximum speed at 4˚C, and the upper phase was transferred to fresh tubes. One volume of phenol:chloroform (pH 4.5) was added to each sample, followed by centrifugation and transfer of the upper phase to a fresh tube. Forty microliters of 3 M NaOAc (pH 5.3) and 2.5 volumes of cold 100% EtOH were added to each tube prior to centrifugation to precipitate the RNA. The resultant RNA pellet was washed with 2.5 volumes of cold 80% EtOH. The pellet was left to dry, then resuspended in 0.1% DEPC and quantified. Subsequently, 100 µg of the precipitated RNA was cleaned up using the RNeasy Mini Kit (Qiagen) with on-column DNase digestion. One microgram of total RNA was treated with 1 U DNase I (Invitrogen) to further remove genomic DNA contamination.
QUANTITATIVE REVERSE TRANSCRIPTASE PCR
A 20 µl reverse transcription reaction was carried out using 10 mM dNTPs, 50 µM random non-amers (Sigma), 500 ng total RNA, 5× First-Strand Buffer (Invitrogen), 100 mM DTT, 40 U/µl RNase OUT (Invitrogen), and 200 U/µl M-MLV reverse transcriptase (Invitrogen) at 23˚C for 10 min, 37˚C for 60 min, and 70˚C for 15 min. A 10 µl qPCR reaction was then set up using 2× Power SYBR Green PCR Master Mix (Applied Biosystems), 1 µM each of forward and reverse primer, and 1 µl of cDNA prepared from the RT reaction. The primers used in this study are listed in Table 3.
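Scaling the 10 µl reaction described above to a plate of samples is routine bookkeeping; the following sketch illustrates the arithmetic, assuming 10 µM primer working stocks and a 10% pipetting overage, neither of which is specified in the text.

```python
# Illustrative master-mix calculator for the 10 µl qPCR reaction described
# above: 5 µl of 2x SYBR master mix (1x final), forward and reverse primers
# at 1 µM final, 1 µl of cDNA added separately per well, and water to 10 µl.
# The 10 µM primer working stocks and the 10% overage are assumptions.

REACTION_UL = 10.0
CDNA_PER_WELL_UL = 1.0

PER_REACTION_UL = {
    "2x Power SYBR Green master mix": REACTION_UL / 2.0,       # 5.0 µl -> 1x final
    "forward primer (10 µM stock)": REACTION_UL * 1.0 / 10.0,  # 1.0 µl -> 1 µM final
    "reverse primer (10 µM stock)": REACTION_UL * 1.0 / 10.0,  # 1.0 µl -> 1 µM final
}
PER_REACTION_UL["water"] = (
    REACTION_UL - sum(PER_REACTION_UL.values()) - CDNA_PER_WELL_UL
)


def master_mix(n_reactions, overage=0.10):
    """Total µl of each component for n_reactions, with a pipetting overage."""
    n = n_reactions * (1.0 + overage)
    return {component: round(vol * n, 1) for component, vol in PER_REACTION_UL.items()}


print(master_mix(24))
```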
Unraveling the therapeutic effects of mesenchymal stem cells in asthma
Asthma is a chronic inflammatory disease associated with airway hyper-responsiveness, a chronic inflammatory response, and excessive structural remodeling. Current therapeutic strategies in asthmatic patients are based on controlling the activity of type 2 T helper lymphocytes in the pulmonary tissue. However, most of the available therapies are symptomatic, expensive, and associated with diverse side effects, and the interruption of these modalities contributes to the relapse of asthmatic symptoms. To date, different reports have highlighted the advantages and beneficial outcomes of transplanting different stem cell sources and relevant products for the alleviation of disease and the restoration of injured sites. However, efforts to better understand the mechanisms by which these cells elicit therapeutic effects are still underway. A precise understanding of these mechanisms will help translate stem cells into the clinical setting. In this review article, we describe current knowledge and future perspectives related to the therapeutic application of stem cell-based therapy in animal models of asthma, with emphasis on the underlying therapeutic mechanisms.
Background
It has been estimated that pulmonary diseases are the third leading cause of death worldwide [1]. The pathological changes that occur in asthma are complex and accompanied by prominent airway inflammation and obstruction, bronchiolar epithelial metaplasia, and overproduction of mucus [2]. Ultimately, the existence of such conditions leads to airway hyperactivity and exaggerated reactions in response to numerous endogenous and exogenous stimuli [3]. Several reports have shown that different immune cell types, predominantly eosinophils and T helper (Th) cells, are recruited to the asthmatic niche, coinciding with abnormal formation of extracellular matrix (ECM) [4]. Along with these changes, terminal alveoli and air sacs also show excessive pathological remodeling, indicated by thickening of the interstitial wall [5]. Ultimately, the persistence of asthmatic changes contributes to epithelial cell detachment and shedding, peri-bronchiolar cuffing, type 1 collagen synthesis, and progressive loss of oxygen and carbon dioxide exchange [6]. From a mechanistic standpoint, asthma is categorized into allergic and non-allergic forms, which correlate with the dynamic production of IgE antibodies. In allergic asthma, the immune system is hyperactive and responds adversely to diverse stimuli [7]. Pathological examinations have revealed that Th2 cells are actively recruited into bronchioles and release different cytokines such as IL-4, IL-5, and IL-13. These cytokines shift the activity of resident pulmonary cells, such as epithelial cells, fibroblasts, and smooth muscle cells, and of other immune cells, mainly mast cells, eosinophils, and IgE-producing B lymphocytes [8,9]. Due to the inefficiency of current treatment protocols, many researchers and clinicians are trying to find safe and effective modalities for the alleviation and restoration of the pulmonary system after asthmatic disease. This review aims to scrutinize experiments related to the application of stem cells and the underlying cellular and molecular mechanisms participating in the alleviation of asthmatic changes.
To date, several methods have been developed for the induction of bronchial asthma in different animal models. For example, cotton dust, OVA, Ascaris lumbricoides antigens, cockroach extracts (Blatella germanica), house dust mite (HDM) extract (Dermatophagoides pteronyssinus, Dermatophagoides farinae), fungi and molds (Aspergillus fumigatus, Alternaria alternata), ragweed, latex (Hevea brasiliensis), and bacterial lipopolysaccharide (LPS) are allergens commonly implicated in asthma development and exacerbations [10][11][12][13]. In addition, the application of Alum via the intraperitoneal route followed by airway challenges induces an acute allergic response that will mask immune events that only partially contribute to inflammation and airway hyper-responsiveness [12,14]. Considering previously published experiments, it becomes clear that OVA, extracted from chicken egg, is a widely used allergen for the sensitization of bronchioles in different animal models, largely because all the relevant immune tools are available and recruited to the pulmonary niche after OVA challenge [13]. OVA can also be produced at large scale and at low cost. However, each sensitization method has strengths and weaknesses. It has been shown that repeated airway exposures to OVA may contribute to immune tolerance, and the induced airway inflammation is not quite like what happens in human asthmatic lungs. Distinct properties of HDM, such as appropriate immunogenicity, direct activation of innate cells, and intrinsic enzymatic activity, make it suitable to mimic asthma-like conditions [13]. Commensurate with these descriptions, the selection of a given allergen depends on the number of replicates and on whether it is used alone or in combination with other allergens [15].
Application of stem cells in the asthmatic niche
Over the past decade, the use of stem cell-based therapies and bioengineering modalities has been extensively studied for the regeneration of lung diseases. There is a growing number of scientific reports on the application of different stem cells to promote either structural or functional pulmonary restoration, with a focus on both differentiation capacity and paracrine activity [16]. In this regard, it seems that adult bone marrow stem cells, including different lineages such as hematopoietic stem cells (HSCs), mesenchymal stem cells (MSCs), and endothelial progenitor cells (EPCs), hold splendid promise for the healing of injured tissues. The bone marrow niche is an appropriate microenvironment for the dynamic growth of HSCs and EPCs, while MSCs can be isolated from different tissues. In keeping with this theme, the regenerative potential of multiple stem cell types has been previously assessed in different chronic pulmonary pathologies such as asthma, COPD, and broncho-pulmonary dysplasia [16] (Table 1). According to the released data, stem cells were administered via intravenous infusion, intraperitoneally, or transplanted directly into the pulmonary niche via intranasal and intra-tracheal routes [42,43]. Delving a bit deeper, the most appropriate administration route and cell delivery methods have not yet been determined. Considering the relative ease of performing the experiments, it is thought that cell delivery via the intra-tracheal route is logically preferable to the other routes. For instance, the possibility of cell bio-distribution and colonization to non-pulmonary tissues is decreased dramatically, which in turn reduces the required dose of transplanted cells. Also, the direct introduction of distinct cell types to the injury site could yield better regenerative outcomes [43]. Moreover, specific anatomical regions in the pulmonary niche harbor specific stem cells that are generally inactive under normal conditions and acquire regenerative potential upon injury to the epithelial layers. Several reports have highlighted the existence of pluripotency and stemness features in bronchioalveolar stem cells, non-ciliated bronchiolar secretory cells (Clara cells), basal cells, alveolar type II pneumocytes, and submucosal gland stem cells [44,45]. Inside the lungs, there is a fraction of stem cells, namely c-Kit+ cells, with high regenerative potential and self-renewal capacity [46]. It is thought that massive recruitment of c-Kit+ cells from the bone marrow and systemic circulation into the pulmonary niche not only does not ameliorate the progression of pulmonary disease but can even exacerbate pathological responses [20]. Overall, results from different animal models of asthma confirmed the potential benefits of stem cell transplantation, which coincided with a reduction of inflammatory reactions and immune cell recruitment, regulation of the Th1 to Th2 ratio, reduced collagen fiber deposition in the lung parenchyma, and changes in the dynamic synthesis of pro- and anti-inflammatory cytokines. Besides, structural changes and pathological remodeling of the blood-air barrier, epithelial metaplasia, and mucus-producing goblet cells fade post-stem cell transplantation [47].
Application of MSCs in asthma
In a review of previously published experiments, MSCs have been applied for the alleviation of asthma in different animal models more extensively than other types of stem cells [48]. Many researchers have shown that MSCs can proliferate for multiple passages, which allows large-scale production of these cells for different regenerative medicine applications in animal models of asthma. It has been shown that MSCs are capable of suppressing the inflammatory response and pathological remodeling in the asthmatic context [47,49]. In the experiments conducted to date, MSCs were transplanted into asthmatic animals at doses ranging from 1 × 10⁶ to 5 × 10⁷ cells [50,51]. According to histological examination, these cells readily migrate toward inflammatory sites in response to cytokine concentration gradients after systemic or local administration. It can be claimed that the production of different factors and cytokines triggers MSC activation. In vitro pretreatment of bone marrow-derived mesenchymal stem cells with sera from asthmatic mice increases their immunomodulatory properties in allergic asthma [52]. It seems that the positive therapeutic effects of MSCs are mainly exerted by releasing an array of factors in a paracrine manner that modulates cell-based and humoral immune responses, rather than through differentiation potential and juxtacrine activity [43]. In support of this statement, several papers have reported that the majority of transplanted MSCs are cleared from the pulmonary niche after a few days, possibly through phagocytosis by alveolar macrophages or apoptosis pathways, raising the question of how they prompt such long-lasting immunosuppressive effects [53]. The activity of recipient immune cells, namely cytotoxic T cells, promotes MSC apoptosis via a perforin-dependent mechanism [54]. Although it may seem that the loss of transplanted MSCs by immune rejection could diminish regenerative outcomes, this phenomenon occurs in an antibody- and MHC-free manner [54].

[Displaced Table 1 excerpt: adipose tissue MSC-derived extracellular vesicles, given intravenously in OVA-induced allergic asthma in mouse, 7 days: TGF-β↓, fibrosis↓, inflammation↓, bronchiolar Siglec-F+ eosinophils↓, eotaxin↓, CD3+CD4+ cells↓, CD4+CD25+Foxp3+ cells↑ [10]. Table legend: ND, non-determined; ↑, increase; ↓, decrease; →, ineffective.]
Surprisingly, the possible apoptotic death of transplanted MSCs in the asthmatic niche could in part, but not completely, regulate local cellular and humoral immunity via the regulation of phagocytes recruited to the pulmonary tissue [55]. Besides, elevated ROS generation and enhanced pro-inflammatory cytokines could accelerate functional MSC depletion at the site of inflammation by eliminating trans-differentiation capacity and self-renewal and by prompting aging [56]. Despite these limitations after the introduction of MSCs into the asthmatic niche, MSCs potentially possess a magnificent immunomodulatory capacity without provoking immunogenic responses. The MSC secretome harbors diverse factors and cytokines that can regulate the functional activity of T and B lymphocytes, dendritic cells, and natural killer cells [57]. Even in the presence of TNF-α and IFN-γ, MSCs can acquire an immunosuppressive phenotype and immunomodulatory properties. It seems that the production of indoleamine 2,3-dioxygenase and prostaglandin E2 is actively involved in this phenomenon [58]. Several experiments have revealed that MSCs exhibit a different restorative capacity depending on the tissue from which they are isolated. Also, each MSC subtype possesses a different multipotential capacity [59]. Bone marrow is the primary and best-known source for the isolation of MSCs. However, alternative sources for MSC isolation, in order of importance and number of conducted studies, are adipose tissue, blood, umbilical cord blood, skeletal muscle, tendons, lung, etc. [60]. In addition to differences in the therapeutic capacity of MSC subtypes, experiments have shown distinct immunomodulatory properties for multiple MSC types. For instance, it has been elucidated that MSCs isolated from adipose and placental tissues elicited more robust immunomodulation in experimental allergic asthma compared to bone marrow MSCs [61,62]. Lung and umbilical cord blood MSCs have short-lasting persistence in inflamed sites compared to bone marrow MSCs administered intravenously [63,64]. Li et al. claimed that the introduction of placenta-derived MSCs in OVA-sensitized rats upregulated IL-10, reduced IL-17, and blunted the Th17 to Treg ratio [65]. Recent data examining the anti-asthmatic properties of placental MSCs within the pulmonary niche showed a reduction of eosinophils in bronchoalveolar lavage fluid and suppression of IgE and IL-4. Along with these changes, the proliferation of goblet cells and the synthesis of mucus returned to near-normal levels, and lymphocyte polarization toward Th2 was interrupted [34]. It is noteworthy that various MSC subtypes can exhibit diverse regenerative potential in the asthmatic pulmonary niche. As mentioned above, it seems that the source of MSCs could alter their persistence in the inflammatory asthmatic niche, correlated with their ability to express adhesion molecules and distinct integrins. MSC subtypes isolated from multiple tissues possess different global gene expression profiles and paracrine activity, which may contribute to diverse regenerative outcomes [66]. In support of this statement, it has been shown that intra-tracheal administration of mouse MSCs from three different tissues, namely adipose tissue, bone marrow, and lungs, modulated the inflammatory response and structural remodeling with different outcomes in the OVA-induced asthma model, possibly via distinct secretome profiles [66].
Abreu and colleagues highlighted a superior paracrine activity of bone marrow MSCs in comparison with adipose-derived counterparts [66]. Hence, it seems logical to select an appropriate MSC type with a particular multipotentiality depending on the nature of the pulmonary injury.
Mechanisms involved in the therapeutic effects of MSCs in experimental asthma models

Immunomodulation

As mentioned above, MSCs have been extensively applied in several experimental studies of asthma. It seems that the therapeutic properties of MSCs are mainly correlated with regulation of the immune system at the site of inflammation [43,63,64,67]. MSCs are able to regulate the Th2 to Th1 ratio and the synthesis of interleukins such as IL-4, IL-5, and IL-13, IgE, and mucus after introduction into the asthmatic niche. By increasing TGF-β and IFN-γ, MSCs can abort untamed allergic responses [67][68][69]. These events coincide with an increase of CD4+CD25+FoxP3+ Treg cells and IL-10 in bronchioalveolar discharge, a decrease of mast and goblet cells, suppression of nitrosative stress, and inhibition of phagocytic activity in alveolar macrophages [38,67,70] (Fig. 1). There is a close inverse relationship between the eosinophil population in bronchoalveolar lavage and Treg cells, which seems critical for the protective impact of MSCs [71]. Most experiments implied that the control of Th2 lymphocytes and the relevant allergic reactions could be an efficient strategy for the control of asthmatic injuries [67][68][69]. Notably, different types of MSC transplantation have different time-dependent therapeutic outcomes. Due to non-specific bio-distribution and the problem associated with the capacity to cross the blood-air sac barrier, intravenous administration possibly leads to inadequate MSC recruitment to the asthmatic niche. Despite these limitations, this approach is recommended in unstable conditions [72]. Some authorities have claimed that approximately 4- to 5-fold higher doses of MSCs are required to yield the same therapeutic outcome as that obtained by local administration [72]. Based on previously published data, direct intra-tracheal delivery of whole bone marrow mononuclear cells yielded more cells trapped inside the lung parenchyma in comparison with the systemic route. Both modalities resulted in a similar immunomodulatory capacity of MSCs [25]. It has been shown that intraperitoneal administration of bone marrow MSCs has the potential to modulate the allergic asthma reaction. Soon after injection into the peritoneal cavity, these cells easily migrate to the pulmonary niche and exert immunomodulatory capacity [73,74].
If we assume that the therapeutic capacity of transplanted MSCs is mainly exerted via paracrine activity, it is logical to transplant distinct cell types to the asthmatic niche locally rather than via a systemic route [28]. Of course, we must be aware that local administration is per se invasive and expensive due to the surgical procedure and postoperative care. Due to mechanical pressure and the inflammatory tissue conditions, a fraction of transplanted MSCs dies soon after injection into inflamed sites. Despite these limitations, fewer MSCs are needed in local delivery to accomplish therapeutic efficacy. Of note, paracrine activity is only efficient over a short distance, after close interaction of pulmonary resident cells with MSCs in proximity [75]. Considering these limitations, repeated doses of MSCs could, if not completely, circumvent the disadvantages related to systemic administration in the asthmatic niche [76]. Even so, repeated doses of MSCs increase the risk of ectopic non-specific overgrowth in tissues other than the lungs [77]. It should be borne in mind that most studies in animal models reported the efficiency of MSCs in the alleviation of asthmatic changes soon after initial sensitization. Some researchers used episodic allergen exposure to stabilize the asthma-like condition [24]. However, there are few long-term follow-up studies on the therapeutic effects of MSCs in asthmatic animals. Commensurate with these descriptions, there is no basis for the statement that MSCs could completely restore or retrieve asthmatic lung function. For example, Trzil and co-workers performed six intravenous administrations of MSCs bi-monthly in asthmatic cats and followed them up for 1 year [24]. They declared that MSCs failed to suppress inflammation of the airway conduits in terms of eosinophil number and bronchiolar hypersensitivity. In another study done by the same research group, five systemic administrations of MSCs showed an anti-inflammatory response at day 130, while the data indicated a lack of prominent inflammation suppression by month 9 [78]. The scientific rationale for these data could correlate with the fact that bona fide MSCs exert therapeutic effects only for short periods after transplantation, via differentiation, juxtacrine, and paracrine mechanisms, before their death.
Multiple mechanisms of action of MSCs in the asthmatic niche
In addition to the immunomodulatory capacity of MSCs, growing evidence has proposed the existence of multiple mechanisms in these cells, such as trans-differentiation capacity, cell fusion, mitochondrial transfer, and paracrine activity mediated by microvesicles and exosomes, that alleviate asthma-related pathology [79] (Fig. 1). As mentioned above, multiple experiments confirmed the existence of factors and cytokines in the MSC secretome, which are packed inside extracellular nano-sized vesicles, namely exosomes, and transported to the target cells. Additionally, the direct differentiation potency of MSCs has previously been neglected in reports, possibly due to enhanced MSC death at the site of transplantation [22,80]. A study conducted by Spees and co-workers showed that the simultaneous culture of human MSCs with heat-injured pulmonary epithelial cells promoted differentiation toward an epithelial-like lineage [81]. Based on the experimental evidence, it would not be an exaggeration to state that paracrine activity is the main suggested therapeutic bioactivity of MSCs in the asthmatic niche [82]. These cells have an inherent capacity to release 40-200 nm nanoscale exosomes, which harbor multiple anti-inflammatory factors that can regulate the function of immune cells [83]. Interestingly, these nanoparticles easily spread in bio-fluids, are stable, and survive in a harsh milieu even when the source cells could not [84]. On this basis, Cruz and co-workers claimed that systemic injection of conditioned media (CM) or extracellular vesicles harvested from mouse and human MSCs is equally effective in the regulation of Th2/Th17-associated asthma hypersensitivity and inflammation in a mouse model of mycosis [85]. Given that some regenerative effects of MSCs occur via the release of soluble effectors, MSC-free therapy such as CM and exosomes could be an alternative due to easy storage and handling. By using these approaches, it is less likely to see cellular emboli, tumorigenesis, and unwanted immune responses after transplantation [86]. According to data from a study conducted by Keyhanmanesh and colleagues, treatment also decreased the expression of adhesion molecules such as intercellular adhesion molecule-1 and vascular cell adhesion molecule-1, which in turn decreased immune cell recruitment to the pulmonary tissues [48]. In contrast, Ahmadi and colleagues reported that rat MSC CM was ineffective in modulating inflammation in OVA-induced asthma [23]. The reason for the contradictory results could relate to the time and route of administration and the total dose [66,87]. Additionally, the short activity and rapid distribution of factors after administration, compared to cell injection, could explain the transient therapeutic effects of CM and exosomes. Aside from the fact that CM and exosomes are integral to the paracrine activity of MSCs in injured sites, more investigations are needed to address the underlying mechanisms. Some data showed that MSCs promote tissue regeneration via mitochondrial transfer in response to external stimuli. The critical roles of gap junctional channels, tunneling nanotube formation, and Rho-GTPases such as Miro1, by which mitochondrial mass is transferred to damaged cells, were previously confirmed [88,89]. In this regard, it has been shown that connexin-43 GJC+ MSCs restored epithelial cell bioactivity after mitochondrial transfer in lipopolysaccharide-associated acute pulmonary inflammation.
Islam and co-workers found that suppression of connexin-43 interrupted mitochondrial transfer from MSCs to epithelial cells [90]. It seems that the phenomenon of mitochondrial transfer is also effective in the alleviation of other pulmonary injuries. For example, Li et al. confirmed the therapeutic effect of bone marrow MSCs against rat COPD induced by cigarette smoke [91]. In a recent study, it was shown that intra-tracheal administration of induced pluripotent stem cell-derived MSCs improved mitochondrial dysfunction in epithelial cells through connexin-43-mediated mitochondrial transfer [26]. Mitochondrial transfer can efficiently slow down apoptotic changes in epithelial cells [92]. In addition to mitochondrial donation, multiple factors released by MSCs could inhibit the apoptosis signaling pathway in either a non-mitochondrial or mitochondrial-dependent manner [93]. Besides, other cell-protective mechanisms, namely autophagy, could actively alter the development of asthmatic remodeling. For instance, it has been shown that intravenous administration of MSCs reduced inflammation in the pulmonary microvascular system by engaging autophagy-related effectors during ischemia/reperfusion in a mouse model through inhibition of miR-142a-5p in endothelial cells [94,95].
Unraveling the regenerative effects of MSCs and other stem cell types can be done at both the protein and gene levels. The discovery of the functions of miRNAs and other factors is a de novo strategy for the reduction of asthmatic changes [96]. As implied by previous experiments, molecular examination confirmed the alteration of miRNAs during asthmatic changes. Therefore, the elucidation of miRNAs and their relative changes could be a reliable tool for monitoring the progression of asthma [97,98]. Further investigations in animal asthma models showed the potency of MSCs and induced pluripotent stem cell-derived MSCs to modulate the expression of pro-inflammatory miRNAs, such as miRNA-155, miRNA-133, mmu-miR-21a-3p, and mmu-miR-449c-5p, coinciding with the induction of miRNA-21 and mmu-miR-496a-3p [97,99,100]. For example, previous studies showed that MSCs could alter the phenotype and bioactivity of different cells via horizontal transfer of genetic material such as miRNAs and mRNAs [101]. It has been shown that exosomes are the major players in the paracrine activity and the transfer of genetic material and factors from MSCs to immune cells [101]. The exposure of M1 pro-inflammatory macrophages to MSC-derived exosomes induced polarization toward M2 anti-inflammatory macrophages [102]. As previously mentioned, the overproduction of Th2-derived cytokines, such as IL-4, IL-5, IL-9, and IL-13, is associated with dysregulated immunity in asthma [103]. It has been shown that the application of MSCs in different asthmatic models could decrease the inflammatory response by altering the levels of Th2-derived cytokines and the miRNAs associated with the function of these cells [21]. The interaction of microbial pathogen-associated molecular patterns (PAMPs) with pulmonary epithelial cells and immune cells via toll-like receptors in the asthmatic niche leads to the production of cytokines and chemokines [104]. Toll-like receptors are also expressed on the surface of different stem cells such as MSCs and endothelial progenitor cells [105,106]. The simultaneous stimulation of MSCs and immune cells suggests a putative role of MSCs in controlling the activity of immune cells. The activation of toll-like receptors by PAMPs can stimulate pulmonary macrophages to release a large amount of chemokines such as CXCL8 and CXCL11, which in turn increase the migration of MSCs toward the asthmatic niche. Also, the presence of MSCs suppresses the activity of microbes by producing antibacterial proteins [107]. It has been shown that MSCs can suppress the function of the complement cascade by releasing complement inhibitors such as factor H, leading to the inhibition of C3 and C5 convertases [108]. According to different experiments, several miRNAs have critical roles in the inflammation of airway conduits, including miRNA-126, miRNA-let-7, and miRNA-155 [97]. In a study performed by Kuo and co-workers, MSCs were shown to reduce stretch-induced inflammation in pulmonary bronchiolar epithelial cells by downregulating miR-155 [109]. These data show that MSCs are promising cell sources to alter the expression of miRNAs in immune cells and pulmonary tissue to reduce the inflammatory condition.
Despite the numerous advantages of MSC application, there are very few reports regarding MSC side effects in pulmonary disease. For instance, it was shown that allogeneic transplantation of MSCs in patients with idiopathic pulmonary fibrosis did not cause serious clinical or laboratory abnormalities [110]. However, long-term follow-up of these patients revealed 2 deaths among 9 MSC-treated cases because of disease exacerbation [110]. Clinical trials already conducted by local investigators in different countries showed that transplantation of MSCs was appropriately tolerated and only a limited number of side effects were observed due to uncontrolled suppression of immune cells. Besides trans-differentiation of transplanted cells into undesired cell types, the progression of tumor-like cells and possible metastasis to remote sites are the main challenging issues [110]. Attention should be paid to interpreting the immunomodulatory properties of MSCs after transplantation under acute and chronic inflammation. It has been shown that the administration of allogeneic MSCs via the intravenous route increased alveolar macrophage activity a few hours after transplantation, as indicated by enhanced MCP-1, CXCL-1, and IL-6 production [111]. To increase the survival rate and modulatory effect of transplanted MSCs, the simultaneous administration of mycophenolate mofetil was performed from the time of cell administration onwards. This strategy inhibits the accumulation of reactive T cells and allogeneic rejection [112]. There is still a long way to go to confirm the therapeutic outcomes of MSCs in different pulmonary allergic diseases.
Xenogeneic transplantation of human MSCs into an animal model of asthma
Despite the existence of inherent species variation in MSC function, some experiments have used human MSCs in xenogeneic lung transplantation models in animals [113][114][115]. Similar to animal MSCs, human MSCs exhibited potent immunomodulatory properties in both acute and chronic asthma mouse models [114,115]. It has been elucidated that typical hallmarks of asthma mostly subsided after transplantation of human MSCs, isolated from bone marrow, adipose tissue, and umbilical cord, in the mouse model [30]. Systemic injection of human bone marrow MSCs via the tail vein induced pulmonary macrophage polarization toward the M2 type via promotion of the TGF-β/Smad signaling pathway [116]. Interestingly, xenogeneic retro-orbital administration of human MSCs in mice diminished hyaluronan mucus in the chronic asthma model [32]. These findings support the assumption that autologous, allogeneic, and xenogeneic transplantation of MSCs could promote anti-inflammatory outcomes via engaging different mechanisms. Despite the massive genetic heterogeneity in allogeneic and xenogeneic models, these cells are able, although not fully, to exert regenerative outcomes.
Clinical trials
According to the promising results of animal studies, some efforts have been made to investigate the paracrine and juxtacrine effects of MSCs on the human counterpart of asthma. By March 2020, the clinical trial registry (https://clinicaltrials.gov) listed about 9 clinical trials dealing with asthma in patients (Table 2). Of the nine clinical trials, two ongoing studies evaluated the efficacy of MSCs in asthmatic patients. In a clinical trial conducted by the University of Miami Miller School of Medicine, the therapeutic effects of intravenously administered allogeneic MSCs were examined and patients were followed for 12 weeks. In the second study, performed by Punta Pacifica Hospital of Panama City, intranasal administration of human umbilical cord MSC-derived trophic factors was evaluated in adult asthmatic patients.
Conclusions and future perspectives
Overall, MSC delivery can diminish inflammation of the lungs and airway conduits in asthmatic animal models. The therapeutic effects of MSCs are exerted through different molecular and cellular pathways related to immunomodulation, mitochondrial donation, and protection against pathways leading to cell death, such as apoptosis and oxidative stress. It is highly recommended to establish diverse basic experiments and clinical trials to address the precise underlying mechanisms of MSC therapy in asthmatic subjects. Long-term monitoring of asthmatic patients who received MSCs could carefully highlight possible ineffectiveness and side effects before a solid decision is made about cell-based therapies.
Defining the exact mechanism of MSC therapy in the asthmatic condition is mandatory before the advent of cell therapy as an alternative modality in the clinical setting. The long-term outcomes and survival of locally or systemically administered MSCs should be determined, as should the possible side effects of in vitro expansion of MSCs. We suggest that future investigations identify appropriate dosing and administration routes. Meanwhile, the exact bioactivity of MSCs under acute and chronic inflammation is still unclear.
Multicast Capacity Through Perfect Domination
The capacity of wireless networks is a classic and important topic of study. Informally, the capacity of a network is simply the total amount of information which it can transfer. In the context of models of wireless radio networks, this has usually meant the total number of point-to-point messages which can be sent or received in one time step. This definition has seen intensive study in recent years, particularly with respect to more accurate models of radio networks such as the SINR model. This paper is motivated by an obvious fact: radio antennae are (at least traditionally) omnidirectional, and hence point-to-point connections are not necessarily the best definition of capacity. To fix this, we introduce a new definition of capacity as the maximum number of messages which can be received in one round, and show that this is related to a new optimization problem we call the Maximum Perfect Dominated Set (MaxPDS) problem. Using this relationship we give tight upper and lower bounds for approximating the capacity. We also analyze this notion of capacity under game-theoretic constraints, giving tight bounds on both the Price of Anarchy and the Price of Stability.
Introduction
A fundamental quantity of a wireless network is its capacity, which informally is just the maximum amount of data which it can transfer. There is a large literature on analyzing and computing the capacity of wireless networks under various modeling assumptions, including models of how interference works and assumptions on how nodes are distributed in space. The last decade has witnessed a flurry of activity in this area, particularly for worst-case (rather than random) node distributions, motivated by the ability to apply ideas from multiple areas of theoretical computer science (approximation algorithms and algorithmic game theory in particular) to these problems.
We continue that line of work in this paper, but with a new, and arguably more natural, definition of capacity. Much of the research in the last decade (see, e.g., [7,8,1,11,10,9,12]) has used a point-to-point definition of capacity: given a collection of (s_i, t_i) pairs of nodes, and some model of interference, the capacity is the maximum number of pairs which can simultaneously successfully transmit a message. This is sometimes motivated by its utility in scheduling: if we are trying to support many unicast demands in a wireless network, a natural thing to do is make as much progress as possible in each time step, i.e., maximize the number of successful transmissions. For this reason, the problem of computing the maximum capacity is also sometimes called the One-Shot Scheduling problem [1,8].
But while well-motivated by scheduling, this is not obviously the right definition of capacity. For example, suppose we are in a classical radio network where we are given a communication graph and interference is destructive: u will receive a message from v if v sends a message, u does not send a message, and no other neighbor of u sends a message. Suppose that we are given a star topology with r as the center and leaves x 1 , . . . , x n , and that there is a demand from r to each leaf. What is the capacity of this network? Traditionally, the answer would be 1: only one of the unicast links can be successful, since r can only send one message at a time. On the other hand, if r really only has a single message which it is trying to send to all of its neighbors, then all of these demands can be satisfied in a single round, so the capacity is n.
In other words, recently popular notions of capacity do not take into account the ability to multicast or broadcast, since they assume that there is a different message for each unicast demand. But one of the defining features of traditional wireless networks is that antennas are omnidirectional; this is one of the main differences between wireless and wireline networks. So if our goal is to understand the capacity of a network, it might be reasonable to measure this as the total number of messages which can be successfully received in a single time step, without taking demands into account at all. After all, this would be the true limit on the single-step "usefulness" of the network.
In this paper we study this notion of capacity in radio networks. We first show that it is equivalent to a new optimization problem we call the Maximum Perfect Dominated Set (MaxPDS) problem, and then using this connection we give tight upper and lower bounds on its approximability. We also study it in a distributed context by following the lead of previous work on distributed network capacity [1,4,2] and looking at a natural game-theoretic formulation in which each transmitter acts as a self-interested agent, and proving bounds on the price of anarchy and the price of stability.
Modeling and MaxPDS
We consider the classical radio network model. In this model there is a communication graph G = (V, E), and each node in V can act as either a transmitter or a receiver. In a given unit of time (we make the standard assumption of synchronous rounds), each node can either broadcast a message to all of its neighbors, or choose to not broadcast and thus act as a receiver. Interference is modeled by requiring that a single message arrive at each receiver; otherwise the messages interfere and cannot be decoded. In other words, a vertex i can successfully decode a message from a neighbor j if and only if i is not broadcasting (and so is acting as a receiver), j is broadcasting, and no other neighbor of i is broadcasting. If multiple neighbors of i are broadcasting then their messages all interfere with each other at i, and so i would not receive any message.
In this model, the equivalent of the unicast notion of "capacity" used in recent work would be a maximum matching (or maybe a maximum matching subject to being a subset of some set of demands), which can clearly be computed in polynomial time. But this may be significantly smaller than the number of nodes which successfully hear a message, as the star example shows. So we will instead adopt a multicast notion of capacity:

Definition 1. The multicast capacity of a wireless network G = (V, E) is the maximum number of nodes which can simultaneously receive a message.
It is straightforward to relate this to reasonably well-studied notions in graph theory. In particular, since each node successfully receives a message if and only if it does not broadcast and exactly one of its neighbors does broadcast, the multicast capacity is quite similar to the definition of a perfect dominating set [15,16,13].
We say that a node is perfectly dominated by S if exactly one neighbor is in S. Note that a perfect dominating set always exists since we can set S = V, in which case trivially every node of V \ S = ∅ is perfectly dominated by S. This motivated [15,16] and others to study the Minimum Perfect Dominating Set problem, in which the goal is to find a perfect dominating set of minimum size (analogous to the Minimum Dominating Set problem, in which we only require domination rather than perfect domination). But note that if S is the set of nodes transmitting, the set of non-transmitters which are perfectly dominated is exactly the set of nodes who successfully receive a message. Thus the multicast capacity of a network is equal to the maximum number of nodes which can be perfectly dominated at the same time. Hence computing the multicast capacity is equivalent to solving the following problem: in the Maximum Perfect Dominated Set (MaxPDS) problem, given a graph G = (V, E), the goal is to find a set S ⊆ V which maximizes the number of vertices perfectly dominated by S. Note that the solution to this may not be a perfect dominating set. Rather, it may dominate some vertices multiple times and may not dominate some at all in order to perfectly dominate the maximum number of vertices. This problem has not been considered or defined before (to the best of our knowledge).
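To make the connection between transmitter sets and perfect domination concrete, the following is a minimal Python sketch (not from the paper); the adjacency-dictionary representation and the function names are illustrative assumptions. It counts how many non-broadcasting vertices are perfectly dominated by a chosen set S, and solves MaxPDS by brute force on very small graphs.

```python
from itertools import combinations

def perfectly_dominated(adj, S):
    """Vertices not in S with exactly one neighbor in S, i.e., the receivers
    that successfully decode a message when S broadcasts (the multicast-
    capacity reading of perfect domination)."""
    S = set(S)
    return {v for v in adj if v not in S
            and sum(1 for u in adj[v] if u in S) == 1}

def max_pds_bruteforce(adj):
    """Exhaustively try every transmitter set; only sensible for tiny graphs."""
    best_S, best_val = set(), 0
    nodes = list(adj)
    for r in range(len(nodes) + 1):
        for S in combinations(nodes, r):
            val = len(perfectly_dominated(adj, S))
            if val > best_val:
                best_S, best_val = set(S), val
    return best_S, best_val

# Star with center r and 4 leaves: broadcasting from the center alone
# perfectly dominates every leaf, so the multicast capacity is 4.
star = {"r": ["x1", "x2", "x3", "x4"],
        "x1": ["r"], "x2": ["r"], "x3": ["r"], "x4": ["r"]}
print(max_pds_bruteforce(star))  # expected: ({'r'}, 4)
```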
Our Results and Outline
Our first results are matching upper and lower bounds for MaxPDS, i.e., for the problem of maximizing multicast capacity. The precise lower bound we obtain depends on the hardness assumption that we use, but all are essentially polylogarithmic.

Theorem 1. MaxPDS cannot be approximated to better than a polylogarithmic factor. More precisely:

• Let ε > 0 be an arbitrarily small constant, and suppose that NP ⊄ BPTIME(2^(n^ε)). Then there is no polynomial time algorithm which approximates MaxPDS to within O(log^σ n) for some constant σ = σ(ε).
• Under the assumption that the Balanced Bipartite Independent Set Problem (BBIS) cannot be approximated better than O(n^ε) for some constant ε > 0 (Hypothesis 3.22 of [3]), there is no polynomial time algorithm which approximates MaxPDS to within o(log n).
We complement these lower bounds with an essentially matching upper bound.
Theorem 2. There is a polynomial time O(log n)-approximation algorithm for MaxPDS.
The lower bound is described in Section 2.2, and the upper bound is in Section 2.3. Both are obtained in a similar way: via a connection to another problem known as the Unique Coverage Problem (UCP). We discuss UCP in more detail in Section 2.1, but informally it is a variation of Maximum Coverage with a similar uniqueness requirement as in MaxPDS (an element only counts as covered if it is contained in exactly one chosen set). Upper and lower bounds for UCP are known [3], so we derive our lower bounds by reducing from UCP to MaxPDS (in particular, the different lower bounds and their hardness assumptions all follow directly from equivalent bounds and assumptions for UCP). For the upper bound, for technical reasons we do not give a black-box reduction to UCP, but instead give an algorithm which is directly inspired by the upper bound for UCP from [3].
The bulk of this paper is devoted to the next two results, which are about a natural gametheoretic version of MaxPDS / multicast capacity which we call the multicast capacity game. Informally, this is a game in which the nodes are players, and the utility of each node is 0 if it does not transmit, and otherwise is the number of neighbors which successfully heard the message minus the number who did not. In other words, each node gets a benefit from successfully transmitting its message to a neighbor, but pays a price for an unsuccessful transmission. We follow the lead of previous work on unicast capacity, in which a similar game is analyzed [1,4,2]. The motivation is twofold. First, the game is a reasonable (though obviously not perfect) model of what incentives might be like for transmitters (at least in certain situations). But perhaps more interestingly, proving bounds on the quality of the equilibria gives a bound on any distributed algorithm which converges to such an equilibrium.
Since Nash equilibria are the standard solution concepts in game theory and algorithmic game theory, we focus on them here. In particular, we will bound the Price of Anarchy (the optimal number of successful receptions divided by the expected number of successful receptions in the worst Nash) and the Price of Stability (the same but with respect to the best Nash). Note that, like in the unicast game of [1,4,2] but unlike in most games considered by the AGT community, the quality of a solution is not just the social welfare (sum of utilities) or some notion of fairness, but is instead a quantity (number of received messages) which is not directly optimized by any player. We provide nearly matching bounds on these quantities.
Theorem 3. The Price of Anarchy of the multicast capacity game is O(√n).

Theorem 4. There is an instance of the multicast capacity game in which the Price of Stability is Ω(√n / log n).
We prove Theorem 3 in Section 3.1 and Theorem 4 in Section 3.2. Note that the combination of these two bounds means that we have an extremely good understanding of the value of the Nash equilibria (in particular, stronger than if we just had nearly-matching upper and lower bounds on the price of anarchy). The first bound tells us that in every instance, every single Nash equilibrium is within O(√n) of the optimum, while the second tells us that there are instances in which all Nash equilibria are at least a factor of Ω(√n / log n) from optimum.
Related Work
As discussed earlier, this paper follows a fascinating line of work in the last decade on computing the capacity of wireless networks. There has been a particular focus on the SINR or physical model, in which we explicitly reason about the signal strength and interference at each receiver.
However, there has also been significant work directly on graph-based models (e.g., [4]) and on the relationship between graph models and the SINR model [12] (which shows in particular that graphs can do a surprisingly good job of representing the physical model, motivating continued study of graph models). Typically in these graph models each link is represented by a node (rather than an edge) and two nodes are adjacent if they interfere, in which case the unicast capacity is equal to the maximum independent set. Typically authors assume (e.g. [1,4,12]) that the graph has some geometric structure (such as being a unit-disc graph) which makes computing maximum independent sets (at least approximately) an easier task. From the perspective of computing the capacity, the most directly related work (and much of the inspiration for this paper) are [1] and [8], which to a large extent introduced the unicast capacity problem for worst-case inputs and gave the first approximation bounds. These bounds were improved in a series of papers, most notably including a constant-factor approximation [14], and have been generalized to even more general models and metrics, e.g. [9,11].
Much of this paper focuses on analyzing a natural game-theoretic version of multicast capacity. This is directly inspired by a line of work on a related game for unicast capacity, initiated by [1] and continued in [4,2]. These papers study various equilibria for the unicast capacity game (Nash equilibria in [1], coarse correlated equilibria in [4,2]) and prove what are essentially price of anarchy bounds (upper bounds on the gap between the optimal capacity and the capacity at equilibrium). This game was also considered in [5], which showed the existence of Braess's Paradox in the game (improvements in technology can result in worse performance) but bounded the damage it could cause. This paper is equivalent to [1] in that it is only the beginning of the study of the multicast capacity game; analyzing more complicated notions of equilibrium and studying Braess's paradox are interesting future work.
Notation
Throughout this paper, we use the following notation and conventions. Given any graph G = (V, E), unless otherwise stated, we refer to undirected graphs with |V| = n. Additionally, for any vertex v we let N(v) denote the set of neighbors of v and d(v) = |N(v)| its degree. All approximation ratios in this paper are written such that they are at least 1, so for maximization problems such as MaxPDS they are given as the ratio of the optimal solution to the solution constructed by the algorithm. This convention also extends to the price of anarchy or stability for a utility-maximization game.
Hardness and Approximations
In this section, we present a hardness of approximation result for the Maximum Perfect Dominated Set Problem as well as an approximation algorithm. We begin by defining the Unique Coverage Problem (UCP), which will be useful for both the upper and lower bounds.
The Unique Coverage Problem
The Unique Coverage Problem (UCP) was introduced by Demaine et al. in [3], where they gave both upper and lower bounds. In particular, it is defined as follows.
Definition 4. Given a universe U of elements and a collection S of subsets of U, the unique coverage problem (UCP) is to find a subcollection S′ ⊆ S of subsets which maximizes the number of elements that are uniquely covered, i.e., are in exactly one set of S′ [3].
The unique coverage problem is a variation on the maximum coverage problem with an added uniqueness requirement. In UCP, a solution attempts to maximize the number of elements covered by exactly one set, rather than the number of elements covered by at least one set. This is similar to the requirement we need for MaxPDS, which is that we are maximizing the number of perfectly dominated vertices, not the number of vertices dominated by at least one vertex.
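As a concrete illustration of the uniqueness requirement, here is a small hedged Python sketch (not part of [3]); the data layout and names are assumptions. It evaluates the unique-coverage value of a chosen subcollection.

```python
def uniquely_covered(universe, sets, chosen):
    """Count elements of `universe` that appear in exactly one chosen set.

    `sets` maps a set name to a collection of elements; `chosen` is an
    iterable of set names picked as the UCP solution."""
    count = 0
    for e in universe:
        hits = sum(1 for name in chosen if e in sets[name])
        if hits == 1:
            count += 1
    return count

universe = {1, 2, 3, 4, 5}
sets = {"S1": {1, 2, 3}, "S2": {3, 4}, "S3": {4, 5}}
# Picking S1 and S3 covers every element exactly once (S2 is not chosen,
# so 3 and 4 are each covered by a single set), giving value 5.
print(uniquely_covered(universe, sets, ["S1", "S3"]))  # 5
```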
Demaine et al. [3] proved the equivalent of Theorem 1 for UCP (all bounds and assumptions are exactly the same, just for UCP rather than MaxPDS) and an O(log n)-approximation for UCP. Because of the similarity between UCP and MaxPDS, we base both our upper and lower approximability bounds on UCP.
Hardness of Approximation
In this section, we provide a polynomial time approximation-preserving reduction from UCP to MaxPDS, thus implying that the same hardness assumptions used to show a lower bound for UCP also hold for showing the hardness of approximating MaxPDS.
Theorem 5. Assuming UCP cannot be approximated to within O(log^c n) for some constant c satisfying Theorem 1, then MaxPDS is hard to approximate to within O(log^c n).
Proof. Consider an instance of UCP with a universe U of elements and a collection S of subsets of U . For specified parameters α ′ , β ′ , given a subcollection S ′ ⊂ S, we define the following two cases.
1. S ′ is a Yes-instance of UCP if the number of elements uniquely covered is at least α ′ .
2. S ′ is a No-instance of UCP if the number of elements uniquely covered is less than β ′ .
Given an instance of this problem, construct an undirected bipartite graph G′ = (V′, E′) such that V′ consists of a vertex s_i for each set S_i ∈ S and a vertex x_i for each element e_i ∈ U. Let (s_i, x_j) ∈ E′ if e_j ∈ S_i. Let A denote the set of vertices s_i corresponding to sets in S, and let B denote the vertices corresponding to the elements in U.
Construct a new bipartite graph G = (V, E) such that V consists of A and k copies of B, denoted B 1 , B 2 , . . . , B k . Let V have an additional vertex v that is adjacent to all vertices in A. Let E consist of k copies of E ′ , one for each bipartite subgraph over (A, B i ) for all i ∈ [k].
Consider some solution S′ to the UCP instance. Define D = {s_i : S_i ∈ S′} ∪ {v}. If S′ is a Yes-instance of UCP, then the number of vertices perfectly dominated by D is α ≥ α′k, because in each of the B_i there are at least α′ perfectly dominated vertices. On the other hand, if S′ is a No-instance of UCP, then there are only β < |S| + kβ′ vertices perfectly dominated by D, because {s_i : S_i ∈ S′} perfectly dominates fewer than kβ′ of the vertices in the B_i and v perfectly dominates the |S| vertices in A. Now, set k = |S|. Then α ≥ α′|S| = α′k and β < |S| + |S|β′ = k + kβ′ = k(β′ + 1). Therefore, the approximation ratio for MaxPDS in this setting is α/β > α′k / (k(β′ + 1)) = α′/(β′ + 1). Since all we have done is create |S| repetitions of B, this can be done in polynomial time.
Therefore, this reduction begins with an instance of UCP with an approximation ratio of α′/β′ and transforms the problem into an instance of MaxPDS with an approximation ratio of α/β. Let n′ be the size of the input to this reduction, and let n be the size of the resulting instance of MaxPDS. By assumption, α′/β′ = Ω(log^c n′). Therefore, we want to show that α/β = Ω(log^c n). We start with n′ = |S| + |E′| and we end with n = |S| + k|E′|. Since k = |S| ≤ n′, we have n ≤ (n′)^2 and hence log n = O(log n′). Therefore, α/β > α′/(β′ + 1) = Ω(α′/β′) = Ω(log^c n′) = Ω(log^c n) as desired, thus showing that MaxPDS is hard to approximate to within O(log^c n).
This reduction from UCP to MaxPDS shows that MaxPDS is hard to approximate to within O(log c (n)) under any hardness assumption for which UCP is hard to approximate to within O(log c (n)). In particular, this holds for the three different hardness assumptions used to show the hardness of approximating UCP, thus proving Theorem 1.
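The construction used in the reduction above is mechanical enough to sketch in code. The following Python snippet is a hedged illustration (not from the paper) of building the MaxPDS instance from a UCP instance: a vertex set A for the sets, k copies of B for the elements, and an extra vertex v adjacent to all of A. The tuple-based vertex labels are an implementation assumption.

```python
def ucp_to_maxpds(universe, sets, k):
    """Build the MaxPDS instance from a UCP instance as an adjacency dict.

    Vertices: ("s", name) for each set, ("x", element, copy) for each of the
    k copies of each element, and "v" adjacent to every set vertex."""
    adj = {("s", name): set() for name in sets}
    adj["v"] = set()
    for name in sets:
        adj[("s", name)].add("v")
        adj["v"].add(("s", name))
    for copy in range(k):
        for e in universe:
            adj[("x", e, copy)] = set()
        for name, elems in sets.items():
            for e in elems:
                adj[("s", name)].add(("x", e, copy))
                adj[("x", e, copy)].add(("s", name))
    return adj

# Tiny instance; in the proof one takes k = |sets|.
universe = {1, 2, 3}
sets = {"S1": {1, 2}, "S2": {2, 3}}
G = ucp_to_maxpds(universe, sets, k=len(sets))
print(len(G))  # |A| + 1 + k*|U| = 2 + 1 + 2*3 = 9 vertices
```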
Approximation Algorithm
In this section we present an O(log(n))-approximation algorithm for MaxPDS, i.e., a proof of Theorem 2. In [3], Demaine et al. show an O(log(n))-approximation algorithm for UCP. Because of the similarities between UCP and MaxPDS, the obvious approach for MaxPDS would be to give a black-box reduction to UCP. For example, the following reduction would be very natural: given a MaxPDS instance G = (V, E), create a UCP instance such that for every vertex i ∈ V, there exists a set S_i containing elements corresponding to N(i) ∪ {i}. However, in the UCP instance, a set S_i can uniquely cover element i, while in the MaxPDS instance, if a vertex i is in the set, it cannot be perfectly dominated by definition. In other words, an algorithm for UCP might get "credit" for uniquely covering i with set S_i, even though the equivalent solution to MaxPDS would not get credit for perfectly dominating i. A similar issue arises if we leave i out of S_i, since in that case a different set could get credit for uniquely covering i even if S_i is included, which would correspond to "perfectly dominating" i even if i is in the MaxPDS solution.
To get around this difficulty, we simply directly give an O(log n)-approximation algorithm for MaxPDS. Both the algorithm and the analysis are straightforward adaptations of the algorithm for UCP from [3], so we defer them to Appendix A.
Game Theory
Given our understanding of maximizing the multicast capacity in a graph, we now analyze the problem in a distributed setting as a game with self-interested players. We define a natural game for this setting, where a player i has incentive to broadcast if most of N (i) would receive i's transmission, but does not have an incentive if i would mostly be broadcasting to neighbors that aren't listening.
Formally, the multicast capacity game is defined as follows. Let S = {0, 1}^n be the strategy space, where for each player i ∈ [n], for each s ∈ S, s_i = 1 when i chooses to broadcast and is 0 otherwise. Let c_i(s) denote the number of neighbors of i which are broadcasting under s, i.e., c_i(s) = |{j : j ∈ N(i) ∧ s_j = 1}|. Then, given s ∈ S, define A_i(s) = {j ∈ N(i) : c_j(s) = 1 and s_j = 0} to be the neighbors of i receiving exactly one message under s, and B_i(s) = {j ∈ N(i) : c_j(s) ≥ 2 or s_j = 1} to be the neighbors of i either receiving at least two messages under s or broadcasting, meaning that i cannot succeed on these neighbors. Note that |A_i(s)| + |B_i(s)| = |N(i)| for all i ∈ V. The utility for player i is u_i : S → Z, defined as u_i(s) = |A_i(s)| − |B_i(s)| if s_i = 1, and u_i(s) = 0 if s_i = 0. This game intuitively models the fact that each node would like to send its message to its neighbors, and gets a benefit proportional to the number of successes but with a penalty for failures (possibly due to either the cost of wasting the transmission power, or more altruistically, a payment for the interference caused).
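Under this utility form (a broadcaster gains one per neighbor that hears it alone and loses one per neighbor it cannot succeed on, while a listener gets zero), the payoffs of a pure strategy profile can be computed directly. The sketch below is a hedged Python illustration with an assumed adjacency-dictionary graph encoding.

```python
def utilities(adj, s):
    """Utilities in the multicast capacity game for a pure strategy profile
    `s` (dict: vertex -> 1 if broadcasting, else 0)."""
    c = {i: sum(s[j] for j in adj[i]) for i in adj}   # broadcasting neighbors
    u = {}
    for i in adj:
        if s[i] == 0:
            u[i] = 0                                   # listeners earn nothing
            continue
        A = sum(1 for j in adj[i] if s[j] == 0 and c[j] == 1)   # clean receptions
        B = sum(1 for j in adj[i] if s[j] == 1 or c[j] >= 2)    # wasted transmissions
        u[i] = A - B
    return u

# Path a - b - c: if only b broadcasts, it reaches both neighbors cleanly.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(utilities(path, {"a": 0, "b": 1, "c": 0}))  # {'a': 0, 'b': 2, 'c': 0}
```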
This definition of the multicast capacity game gives us a way to analyze the relationship between the quality of equilibria in a distributed setting and the optimal solution to the graph theory problem in question. Similarly, in [4], the independent set game is discussed, which is just the graph-theoretic unicast capacity game. The utilities are similar to those in our game: player i receives utility 1 for broadcasting while none of N (i) is broadcasting, -1 for broadcasting if a neighbor is broadcasting, and 0 for being quiet (the motivation is that each node is a link, and an edge between two links means that they cannot simultaneously succeed). While similarly motivated, it turns out that the independent set game and our multicast capacity game are quite different in terms of their equilibria.
A pure Nash equilibrium (PNE) is a strategy vector s ∈ S in which no player has any incentive to deviate. Slightly more formally, s is a pure Nash equilibrium if u_i(s_{−i}, s′_i) ≤ u_i(s) for all players i and all strategies s′_i, where (s_{−i}, s′_i) is the vector formed by replacing the i'th coordinate of s with s′_i. We can generalize this by allowing probabilities, and in particular allowing each player to have a probability distribution over its possible strategies (i.e., a distribution over {0, 1}). Such a collection of distributions is a mixed Nash equilibrium (MNE), or just a Nash equilibrium, if for every player the expected utility (when s is drawn from the product distribution of the player distributions) cannot be increased by changing its own distribution.
For each strategy profile s ∈ S, we define V (s) as the number of successful receptions throughout the network (note that V (s) is not simply the social welfare, i.e., the sum of the utilities, and hence differs from much of modern algorithmic game theory). Hence the optimal solution, in the sense of MaxPDS and the previous section, is simply OPT = max s∈S V (s).
In this section we analyze this game. We first note in Appendix B that it is not clear whether this game always admits a pure Nash equilibrium, since the natural guess that maximal perfect dominating sets are equilibria is incorrect (in the independent set game any maximal independent set is a pure Nash). Fortunately, as we all know, mixed Nash equilibria do always exist, and so in Sections 3.1 and 3.2 we analyze general mixed Nash equilibria and their distance from optimality.
Price of Anarchy
The Price of Anarchy of a game is a measurement which compares the equilibrium with the lowest value to OPT.
Definition 5 (Price of Anarchy (PoA)). Let N be the set of product distributions σ over S corresponding to Nash Equilibria (pure or mixed). Then the Price of Anarchy is defined as max_{s∈S} V(s) / inf_{σ∈N} E_{s∼σ}[V(s)], which is the ratio of OPT to the Nash Equilibrium with the lowest value.
We now prove Theorem 3, an upper bound of O( √ n) on the price of anarchy. We give an outline of the proof, but all omitted details can be found in Appendix B.
Let G = (V, E) be an instance of the Maximum Perfect Dominated Set game. Assume that G is connected, because any vertex i with no neighbors cannot contribute to the value of a Nash Equilibrium or the value of OPT, and can therefore be deleted. Let σ be a product distribution over the set of strategies for each player and suppose that σ is a MNE.
We define the following probabilities with respect to σ. Given any vertex i, let p_i be the probability with which i broadcasts. Then, for each j ∈ N(i), define α_ij as the probability that, if i chooses to broadcast, j would successfully hear the transmission from i. Formally, since σ is a product distribution, α_ij = (1 − p_j) Π_{k ∈ N(j)\{i}} (1 − p_k). Additionally, let S_i be the expected number of vertices that receive only a transmission from i given that i chooses to broadcast, which is S_i = Σ_{j ∈ N(i)} α_ij. Then, we can define the following quantities. Let B be the expected number of broadcasters; that is, B = Σ_{i∈[n]} p_i. Let S be the expected number of successes, meaning the vertices that are receiving exactly one transmission and are not broadcasting. This is the value which we are trying to bound. To express S in terms of α_ij, consider the following. Given any vertex i with neighbors j_1, ..., j_m ∈ N(i), let X_{j_k} denote the event that i successfully receives a message from j_k for any k ∈ [m]. Because i can only be a success for one of the j_k, the events X_{j_1}, X_{j_2}, ..., X_{j_m} are disjoint. Each of these events occurs with probability p_{j_k} α_{j_k i}. Hence, Pr[i is a success] = Σ_{j∈N(i)} p_j α_ji. Therefore, we can write S as S = Σ_{i∈[n]} Σ_{j∈N(i)} p_j α_ji.
Let F be the expected number of failures due to collisions, that is, vertices that are not broadcasting but are also receiving at least two transmissions. Formally, F = Σ_{i∈[n]} Pr[s_i = 0 and c_i(s) ≥ 2]. Finally, let A be the expected number of vertices that do not broadcast and do not receive any transmissions, which is A = Σ_{i∈[n]} Pr[s_i = 0 and c_i(s) = 0]. We now give some lemmas which will let us relate these quantities.

Lemma 6. B + S + F + A = n.
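As a numerical sanity check of these definitions, the hedged Python sketch below estimates B, S, F, and A by sampling broadcast decisions from a product distribution; the graph encoding and the number of samples are illustrative assumptions. Since every vertex falls into exactly one of the four categories in each sample, the estimates always sum to n, in line with Lemma 6.

```python
import random

def estimate_quantities(adj, p, trials=20000, seed=0):
    """Monte Carlo estimates of (B, S, F, A) when vertex i broadcasts
    independently with probability p[i]."""
    rng = random.Random(seed)
    nodes = list(adj)
    B = S = F = A = 0.0
    for _ in range(trials):
        s = {i: 1 if rng.random() < p[i] else 0 for i in nodes}
        for i in nodes:
            if s[i] == 1:
                B += 1                              # broadcaster
                continue
            c = sum(s[j] for j in adj[i])           # broadcasting neighbors of i
            if c == 1:
                S += 1                              # success
            elif c >= 2:
                F += 1                              # collision failure
            else:
                A += 1                              # silent and hears nothing
    return tuple(x / trials for x in (B, S, F, A))

path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
p = {"a": 0.3, "b": 0.5, "c": 0.3}
B, S, F, A = estimate_quantities(path, p)
print(round(B + S + F + A, 6))  # always equals n = 3, matching Lemma 6
```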
Lemma 7. S i ≥ F i for any vertex i with p i > 0.
Lemma 8. S_i ≥ 1/2 for any vertex i with p_i > 0.

The previous two lemmas will let us relate B and F to S.
Relating A to S is significantly more difficult, and requires reasoning separately about vertices which contribute significantly by themselves to A and vertices which do not contribute much individually. Dividing up vertices in this way lets us reason more combinatorially about degrees between various sets, allowing us to eventually prove the following.
We are now ready to complete the proof of Theorem 3.
Price of Stability
The Price of Stability of a game is a measurement which compares the equilibrium with the highest value to OPT. Intuitively, it gives a bound on how "good" an equilibrium can be, when compared to OPT.
Definition 6 (Price of Stability (PoS)). Let N be the set of product distributions σ over S corresponding to Nash Equilibria (pure or mixed). Then the Price of Stability is defined as max_{s∈S} V(s) / sup_{σ∈N} E_{s∼σ}[V(s)], which is the ratio of OPT to the Nash Equilibrium with the highest value.
We now prove Theorem 4, a lower bound of Ω( √ n/ log n) on the Price of Stability.
Proof of Theorem 4. Let G = (V, E) be a graph composed of n = q + 3√q + 2 vertices for some parameter q, such that V = A ∪ B and B = ∪_{i∈[√q+2]} B_i. Let A be a clique on √q + 2 vertices, and for each v_i ∈ A, let B_i be an independent set of size √q such that v_i is adjacent to each vertex in B_i. We will show that in this graph any MNE has value of at most O(√q log(q)), whereas OPT in this graph has a value of at least q + 2√q. Let σ be a MNE, and let p_i be the probability with which i broadcasts.
For any i ∈ A, suppose that Σ_{j∈A, j≠i} p_j > log(2√q + 2). Then, the probability that i succeeds at broadcasting to A is equivalent to the probability that no other node j ∈ A broadcasts. Since σ is a product distribution, this probability is Π_{j∈A, j≠i} (1 − p_j) ≤ e^{−Σ_{j∈A, j≠i} p_j} < 1/(2√q + 2). Then, the expected utility of broadcasting with probability p_i for vertex i is maximized when p_i = 0, so vertex i has no incentive to broadcast. Therefore, in any MNE, the expected number of broadcasters in A can be at most log(2√q + 2), and each broadcaster in A contributes at most √q successes in its set B_i. If any vertices in B broadcast, they can add at most √q + 2 successes, all of which are in A. Therefore, the expected value of any MNE is at most √q · log(2√q + 2) + √q + 2 = O(√q log(q)). Now, consider OPT. When all vertices in A broadcast with probability 1, the number of successes is q + 2√q, so OPT ≥ q = Ω(n). Therefore, the Price of Stability in this graph is at least Ω(√n / log(n)).
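The lower-bound instance from this proof is easy to generate and check numerically. The following hedged Python sketch (not from the paper) builds the clique-plus-pendant-sets graph and verifies that broadcasting from every clique vertex yields q + 2√q simultaneous successes; the vertex labels and helper names are assumptions.

```python
import math

def pos_lower_bound_graph(q):
    """Price-of-Stability lower-bound instance: a clique A on sqrt(q)+2
    vertices, each clique vertex a_i also adjacent to its own independent
    set B_i of sqrt(q) leaves."""
    r = int(math.isqrt(q))
    A = [("a", i) for i in range(r + 2)]
    adj = {a: set(A) - {a} for a in A}      # clique on A
    for i, a in enumerate(A):
        for j in range(r):                   # pendant independent set B_i
            leaf = ("b", i, j)
            adj[leaf] = {a}
            adj[a].add(leaf)
    return adj, A

def successes(adj, S):
    """Number of non-broadcasters that hear exactly one broadcaster."""
    S = set(S)
    return sum(1 for v in adj if v not in S
               and sum(1 for u in adj[v] if u in S) == 1)

adj, A = pos_lower_bound_graph(q=100)        # n = q + 3*sqrt(q) + 2 = 132
print(successes(adj, A))                     # q + 2*sqrt(q) = 120 leaves succeed
```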
Conclusion
In this paper, we analyzed the multicast capacity of a network as both an optimization problem on a graph and a game in a distributed network. We introduced the Maximum Perfect Dominated Set problem as the equivalent of maximizing multicast capacity in a graph, and showed both upper and lower bounds for the approximability of the problem. We also defined the multicast capacity game and gave complementary bounds on the Price of Anarchy and the Price of Stability. We hope that this is only the beginning of analyzing the multicast capacity of wireless networks. Many interesting open questions remain, paralleling the work on unicast capacity. For example, what if we consider restricted classes of graphs, such as unit-disc graphs, which are typically used to model wireless networks? Does MaxPDS become easier, and are equilibria in the multicast capacity game closer to optimum? Or what if we consider version of equilibria such as coarse correlated equilibria which well-known learning algorithms (namely, no-regret algorithms) are known to converge to? In [4,2] these equilibria were used to analyze simple distributed algorithms for unicast capacity maximization -can something similar be done here? And for all of these questions, what happens if we work in the SINR model rather than the graph model?
A Proofs from Section 2

A.1 Proof of Theorem 2
Let G = (V, E) be an instance of MaxPDS with |V | = n. For any set S of vertices, let f (S) denote the number of perfectly dominated vertices by S. Let ALG be an initially empty set and let OPT denote the optimal set of dominating vertices in the above instance.
Partition the vertices into log(n) groups G_i such that v ∈ G_i if 2^i ≤ d(v) < 2^{i+1}. Then, there must exist a group i* such that |G_{i*}| ≥ (1/log(n)) · n. Additionally, since f(OPT) ≤ n, this gives |G_{i*}| ≥ f(OPT)/log(n). Our solution ALG is now constructed by randomly adding each vertex i to ALG independently with probability 1/2^{i*} when i* > 0, and with probability 1/2 when i* = 0. Let S ⊂ V be the vertices that are perfectly dominated by ALG.
Then, the probability that a vertex v ∈ G_{i*} is perfectly dominated by ALG is the probability that exactly one vertex of N(v) is in ALG and the remaining vertices in N(v) are not in ALG. Since each vertex is chosen to be in ALG independently, when i* > 0 this probability is d(v) · (1/2^{i*}) · (1 − 1/2^{i*})^{d(v)−1}, which is Ω(1) since 2^{i*} ≤ d(v) < 2^{i*+1}. When i* = 0, then d(v) = 1 and the probability is 1/2. Therefore, E[f(ALG)] ≥ Ω(|G_{i*}|) ≥ Ω(f(OPT)/log(n)), and hence f(OPT)/E[f(ALG)] = O(log(n)) as desired. Note that while the above algorithm is randomized, it is straightforward to derandomize in polynomial time using the standard method of conditional expectation.
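For illustration, here is a hedged Python sketch of the randomized bucketing idea described above. Rather than selecting the single degree class guaranteed by the pigeonhole argument, it simply tries every class and keeps the best sampled transmitter set, repeating each sampling a fixed number of times; those simplifications, along with the names used, are assumptions rather than the paper's exact procedure.

```python
import math
import random

def approx_maxpds(adj, repeats=100, seed=1):
    """Randomized degree-bucketing sketch: for each degree class i*, sample
    each vertex with probability 1/2**i* (1/2 when i* = 0) and keep the best
    transmitter set found."""
    rng = random.Random(seed)
    nodes = list(adj)
    max_deg = max(1, max(len(adj[v]) for v in nodes))
    best_S, best_val = set(), 0
    for i_star in range(int(math.log2(max_deg)) + 1):
        prob = 0.5 if i_star == 0 else 1.0 / (2 ** i_star)
        for _ in range(repeats):
            S = {v for v in nodes if rng.random() < prob}
            val = sum(1 for v in nodes if v not in S
                      and sum(1 for u in adj[v] if u in S) == 1)
            if val > best_val:
                best_S, best_val = S, val
    return best_S, best_val

star = {"r": ["x1", "x2", "x3"], "x1": ["r"], "x2": ["r"], "x3": ["r"]}
print(approx_maxpds(star)[1])  # typically finds the optimum, 3, on this toy graph
```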
B Proofs from Section 3

B.1 Nash Equilibria and Maximal Perfect Dominating Sets
When analyzing a game such as the multicast capacity game, we would like to understand when pure Nash equilibria exist. Because the optimal solution to an instance of this game is a maximum perfect dominated set, a natural idea to find a potential equilibrium is to consider a maximal perfect dominated set, which is a set of vertices such that the addition or deletion of any vertex to or from that set does not increase the number of perfectly dominated vertices. For example, in the independent set game discussed above, it is easy to see that the utilities are defined such that every maximal independent set constitutes a pure Nash equilibrium. While that may provide hope of showing the existence of a PNE in our game by analyzing the maximal perfect dominating sets, in this section we show that there is not necessarily overlap between the set of PNEs and the set of maximal perfect dominating sets.
Our main result for this section is the following.
Theorem 12. There exists an instance of the multicast capacity game for which there exists at least one PNE and there is no intersection between the set of PNEs and the set of maximal perfect dominating sets.
Proof. Consider the following example. Let G = (V, E) be the graph shown in Figure 1. Let C_1 and C_2 each be independent sets of size 10, where a_i is adjacent to all vertices in C_i for i ∈ {1, 2}. We now enumerate the set of PNEs in this graph. Suppose that there exists a PNE where at least one of the a_i does not broadcast. Since the majority of N(a_i) is in C_i, it must be the case that most of C_i is transmitting. However, this would not be a PNE, because if more than one node in C_i is broadcasting, they all have incentive to stop broadcasting. Therefore, there does not exist a PNE where either a_1 or a_2 is not broadcasting. Now, consider {a_1, a_2} as the set of broadcasters. This is already a PNE, because no other nodes have incentive to broadcast. Furthermore, one can see that this is the only PNE. Note that this is not a maximal perfect dominating set, because if b_1 were to broadcast, u would be added to the number of successes, and in fact the resulting set {a_1, a_2, b_1} is a maximal perfect dominating set. Therefore, there is no intersection between the set of PNEs and the set of maximal perfect dominating sets in this graph.
B.2 Proofs from Section 3.1
Proof of Lemma 6. Consider the sum B + S + F + A. Each of B, S, F, A is a sum over all i ∈ [n]. Therefore, taking the i-th term from each, the contribution of this term to B + S + F + A is p_i + Pr[i is a success] + Pr[i is a collision failure] + Pr[i neither broadcasts nor receives any transmission] = p_i + (1 − p_i) = 1. Therefore, B + S + F + A = Σ_{i∈[n]} 1 = n.
Proof of Lemma 7. Let i ∈ [n] with p_i > 0. By definition, E_{s∼σ}[u_i(s)] = p_i(S_i − F_i). Because σ is a MNE and player i can always guarantee utility 0 by not broadcasting, E_{s∼σ}[u_i(s)] ≥ 0, so p_i(S_i − F_i) ≥ 0. Therefore, since p_i > 0, it must be the case that S_i − F_i ≥ 0, so S_i ≥ F_i as desired.
Proof of Lemma 8. Suppose that p_i > 0. Because we are not considering any isolated vertices, S_i + F_i = |N(i)| ≥ 1. By Lemma 7, S_i ≥ F_i. Therefore, since S_i accounts for at least half of the above sum, S_i ≥ 1/2.
Proof of Lemma 9. Let i be a vertex that is broadcasting with p_i > 0. By Lemma 8, we have that S_i = Σ_{j∈N(i)} α_ij ≥ 1/2. Since S = Σ_{j∈[n]} p_j S_j, it follows that S ≥ Σ_{j∈[n]} p_j / 2 = B/2.

Proof of Lemma 10. Let i be any vertex and let j_1, j_2, ..., j_m ∈ N(i). Let X^i_1, X^i_2, ..., X^i_m denote the events that i is a failure for j_k for each k, meaning the event that j_k attempts to broadcast to i but i, while not broadcasting, receives at least one other transmission. The probability of such an event X^i_k occurring is p_{j_k}(1 − α_{j_k i}). Let X^i denote the event that there exists a j ∈ N(i) such that j attempts to broadcast to i and fails. Then, by a union bound we have that F ≤ Σ_{i∈[n]} Pr[X^i] ≤ Σ_{i∈[n]} Σ_{j∈N(i)} p_j(1 − α_ji) = Σ_{j∈[n]} p_j F_j ≤ Σ_{j∈[n]} p_j S_j (by Lemma 7) = S, so F ≤ S as desired.
Proof of Lemma 11. Let β i be the probability that i does not broadcast and does not receive any messages. Let X = {i ∈ [n] : β i > .9} be the set of vertices who do not broadcast or receive messages with probability greater than .9. Let Y = V \ X. For any vertex i, let d Y i = |N (i) ∩ Y | and let d X i = |N (i) ∩ X|. For any set of vertices U and vertex i, let S U i be the expected number of successes i receives on U ∩ N (i) given that i chooses to broadcast and similarly let F U i be the expected number of failures i receives on U ∩ N (i) should i choose to broadcast.
For any j ∈ X and i ∈ N (j), since β j > .9, we have that Therefore, for any i, S X i =
Environmentally Realistic Exposure to the Herbicide Atrazine Alters Some Sexually Selected Traits in Male Guppies
Male mating signals, including ornaments and courtship displays, and other sexually selected traits, like male-male aggression, are largely controlled by sex hormones. Environmental pollutants, notably endocrine disrupting compounds, can interfere with the proper functioning of hormones, thereby impacting the expression of hormonally regulated traits. Atrazine, one of the most widely used herbicides, can alter sex hormone levels in exposed animals. I tested the effects of environmentally relevant atrazine exposures on mating signals and behaviors in male guppies, a sexually dimorphic freshwater fish. Prolonged atrazine exposure reduced the expression of two honest signals: the area of orange spots (ornaments) and the number of courtship displays performed. Atrazine exposure also reduced aggression towards competing males in the context of mate competition. In the wild, exposure levels vary among individuals because of differential distribution of the pollutants across habitats; hence, differently impacted males often compete for the same mates. Disrupted mating signals can reduce reproductive success as females avoid mating with perceptibly suboptimal males. Less aggressive males are at a competitive disadvantage and lose access to females. This study highlights the effects of atrazine on ecologically relevant mating signals and behaviors in exposed wildlife. Altered reproductive traits have important implications for population dynamics, evolutionary patterns, and conservation of wildlife species.
Introduction
The role of sex hormones in the expression of sexually selected traits has been established in many vertebrate species, especially in males [1,2,3,4,5]. Disruption of the expression or perception of such traits can influence mate choice and evolutionary patterns [6,7,8,9]. The increase in various forms of pollution is becoming an important factor in such disruptions [6,10] and is hence instrumental in shaping evolutionary trajectories. A common form of pollution is caused by endocrine disrupting compounds (EDCs), which interfere with proper hormonal functioning. These compounds can be natural or synthetic in origin, including organochlorines, organophosphates, polychlorinated biphenyls (PCBs), phthalates, synthetic hormones and hormone-blockers, and phytoestrogens. Many of them have anthropogenic sources such as pesticides, industrial effluents, pulp mill effluents, plastics and sewage. Significant routes of exposure include direct exposures from living in contaminated soil or water, as well as indirect exposures through eating contaminated prey [11,12,13,14]. EDCs can alter reproductive success by affecting all aspects of the reproductive system, including gonadal formation, production of hormones and gametes, sex determination [15], formation of egg shells [16], and production [15,17] and maintenance of mating signals and behaviors [18,19].
The effects of EDCs on wildlife have been receiving increasing attention in the literature in recent years. While earlier toxicological studies focused on mortality effects from acute exposures, ecotoxicologists are now focusing on sub-lethal effects of more realistic exposures. Sub-lethal effects can be subtle yet farreaching by influencing population and community dynamics through cascading effects. Population level effects may include altered demographics [20,21,22,23] and mating systems [24,25,26,27]. This can affect community dynamics by impacting species closely associated with the focal species. Multi-generational effects due to persistence of pollutants in the environment across generations, or via maternal transfer, can affect evolutionary trajectories of these species as a result of altered sex ratios and mating systems.
The current study focused on the effects of atrazine, a widely used triazine herbicide. Atrazine is the second most commonly used pesticide in the US [28]. It is resistant to degradation, and its half-life in surface waters can be over 700 days [29,30]. Many animal species that spend all or part of their life cycle in water can be exposed to significant levels of the chemical for a considerable part of their life. Concentrations of atrazine in water bodies around agricultural fields are expected to be in the range of ppb (90 day average) depending on the type of crop and application rate [31]. Non-target species inhabiting water bodies around agricultural fields are particularly at risk for exposure to atrazine. Atrazine induces aromatization of testosterone to estradiol [32,33], thereby causing an estrogenic effect in exposed individuals; however, this mechanism has been debated [34]. Several studies have demonstrated the feminizing effects of atrazine in amphibians [35,36,37,38], yet the number of studies with ambiguous and conflicting results [39,40] contributes to preventing policy changes regarding the use of this pesticide.
Here, I tested whether prolonged exposure to atrazine can alter male mating signal expression, including ornamentation and mating behaviors. I used guppies (Poecilia reticulata) as a model organism to test these questions, as guppies have distinct sexual dimorphism, their mating signals and behaviors have been well characterized [41], and the role of sex hormones in the expression of these traits has been explored [5,42]. Further, guppies have been used for testing similar questions in other ecotoxicological studies [43,44,45]. Guppies are small tropical fish native to Trinidad and parts of South America. They are especially useful for testing hypotheses related to sexual selection. Males have different colored spots on their body and fins [41]; they perform characteristic courtship displays (called ''sigmoid'' displays) and attempt forced copulations. Mating is predominantly through female mate choice; females respond to courtship displays and to males with larger and brighter orange spots [41,46], but avoid forced copulatory attempts [41,47].
Although the pattern and intensity of orange spots are mostly governed by genetics [48,49], there is some indication that androgens are required for their expression [43,45,48,50,51,52], as well as for performing courtship displays [43,50,53]. Shenoy and Crowley [9] discuss in detail how hormones may be involved in the expression of sexual signals. An aromatase inducer like atrazine can alter hormonal balances by (1) increasing the estradiol concentrations, which would increase the estradiol: testosterone ratio, and directly reduce the production of testosterone [54,55], and by (2) reducing the concentration of testosterone available for conversion to 11-keto testosterone [56,57], an important teleost androgen required for the expression of secondary sexual characteristics.
I hypothesized that prolonged exposure to environmentally relevant doses of atrazine would (1) reduce the area and intensity of orange color spots, which are the primary male mating signals in guppies; (2) reduce the frequency of mating behaviors such as courtship displays and forced copulatory attempts (these were considered behaviors related to mating effort); and (3) in the presence of competing males, reduce the frequency of behaviors related to mating effort and those related to male-male aggression. The third hypothesis was tested because male-male competition is high in many animal species, including guppies, and examining behaviors in the context of mate competition is ecologically relevant. Further, contaminants are often differentially distributed in the landscape, and different individuals in a population may be exposed unequally; often, species that are migratory or that converge at breeding sites would have differentially exposed individuals within a population. Since individuals impacted to varying degrees would be competing together within a population, I tested the third hypothesis by pairing treated males with those that were not exposed to the contaminants. This also standardized the condition of each experimental male's opponent.
Ethics statement
The experimental protocol for this study was approved by the University of Kentucky Institutional Animal Care and Use Committee (protocol number 2007-0137).

Treatments

85 guppies were randomly assigned to one of five treatments at 17 fish per treatment. The treatments included a control (no treatment), dimethylsulfoxide (DMSO, 6 ppb) as the solvent control, atrazine low-dose (1 ppb), atrazine high-dose (15 ppb), and ethynyl estradiol (2 ppb) as the estrogenic positive control. A solvent control was used because atrazine and ethynyl estradiol were dissolved in DMSO; all treatments received the same concentration of DMSO. Atrazine concentrations used were based on US EPA estimated environmental concentrations [31]. Pilot experiments helped determine sub-lethal ethynyl estradiol concentrations. Concentration of atrazine in the water column in three randomly selected jars per treatment was ascertained by liquid phase extraction with methylene chloride following an adaptation of US EPA Method 619 [58] (which produced 95% recovery of the target compound) and analyzed by gas chromatography/mass spectrometry. The average concentration at the end of one week was determined to be 0.26 ppb and 12.98 ppb for the low- and high-dose respectively, with negligible loss over the 7 days. No atrazine was detected in the control samples. Atrazine (98% purity) was purchased from Chem Service, Inc., through Fisher Scientific, and 17α-ethynylestradiol (98% purity) was purchased from Sigma-Aldrich. Treatments continued for 16 weeks to simulate a long-term exposure.
Animals
Adult male guppies used for this study were descendants of wild-caught guppies from Trinidad. Three populations (Aripo Upper River, Aripo Lower River, Guanapo Upper River) were equally represented in all treatments to account for geographic and genetic variation. All males included showed clear color patterns and gonopodium development [41], indicating sexual maturity. During the period of the study, all fish were housed separately in individual glass jars with 1.6 L of aged, pre-aerated, carbon filtered, conditioned water. Tropical fish flake food was fed once each day in ad libitum quantities. Room temperature was maintained at an average of 25°C; the light:dark cycle was set to 12:12 hours. Water was changed once weekly with static renewal of chemical treatments. Mortality was recorded every day.
Color measurements
All fish were photographed once before the start of treatments and once after treatments stopped with a Nikon D50 digital SLR camera with a 55 mm telephoto lens and Nikon SB-400 AF Speedlight flash. The shutter speed was set to 1/60 s, aperture to 22 F and film speed to 200 ISO. The flash speed was set to 1/16 s and power to 20.7, and was covered with a single sheet of tissue paper to diffuse the light. All fish were photographed on the left side in the same position relative to the lens and flash. ImageJ 1.43u [59] was used to measure the area of orange spots and body area of each fish in mm². An average value of the red (R), green (G) and blue (B) channels of each orange spot was also measured. Each fish was photographed along with an orange color standard, which was placed in the same position in every picture. Colors were standardized across all pictures by applying a correction factor to each of the average R, G, B values, such that the corrected R value of the fish in the picture to be measured was R_i' = R_i * R_Sr / R_Si, where R_i is the average R value of the fish in the picture to be measured, R_Sr is the average R value of the color standard on one picture chosen to be the reference picture, and R_Si is the average R value of the color standard on the picture to be measured. Similarly, G_i' and B_i' were calculated for each picture. A dark orange spot would have a high R' measure, and lower G' and B' measures; on the other hand, a pale orange spot would have high R', G' and B' measures. The repeatabilities of the corrected R', G' and B' values were r = 0.98, r = 0.95 and r = 0.96, respectively. Further, a single composite variable comprising all three color channels was created by inputting the corrected R', G', B' values in a Principal Components Analysis and extracting one variable. The repeatability of this composite variable was found to be r = 0.98.
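The per-channel correction can be expressed compactly in code. The following Python sketch is a hedged illustration of the formula above, assuming the mean channel values have already been extracted (e.g., from ImageJ) into simple dictionaries; the variable names are illustrative.

```python
def correct_channels(fish_rgb, standard_rgb, reference_standard_rgb):
    """Standardize mean R, G, B values across photos using the orange color
    standard included in every picture:
    corrected = measured * (standard in reference photo) / (standard in this photo)."""
    return {ch: fish_rgb[ch] * reference_standard_rgb[ch] / standard_rgb[ch]
            for ch in ("R", "G", "B")}

# Example: the standard reads slightly darker in this photo than in the
# reference photo, so the fish's channel values are scaled up proportionally.
fish = {"R": 180.0, "G": 120.0, "B": 60.0}
standard_here = {"R": 200.0, "G": 150.0, "B": 100.0}
standard_ref = {"R": 210.0, "G": 160.0, "B": 110.0}
print(correct_channels(fish, standard_here, standard_ref))
# {'R': 189.0, 'G': 128.0, 'B': 66.0}
```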
Behavior trials
At the end of the 16 week treatment period, the fish were subjected to two sets of behavior trials: the first set assessed behavior of the males towards a female in the absence of competition from another male, and the second set of trials assessed mating behaviors in the presence of a competing male. All trials were conducted within the first four hours after lights turned on and during the last four hours before lights turned off. All trials were conducted blind: the observer did not know the treatment that any of the fish had received and identified males by their color patterns only. Data were recorded in real time. The observer sat in darkness, 1 m away from the tank, to avoid startling the fish; the fish did not appear to notice or be disturbed by the presence of the observer. Trial tanks were illuminated with full spectrum light to ensure that all colors were perceived naturally by the other fish in the trial [60,61].
Trials without competing males. Each male was placed in a trial tank of dimensions 30 × 20 × 15 cm (height × length × width) and 7.5 L of water, with one virgin female from the same population. Water used was aged, pre-aerated, carbon filtered, and conditioned, and water temperature was maintained between 23-25°C. After a 5 minute acclimation period, the fish were observed for 10 minutes. The total number of sigmoid courtship displays, gonopodium swings and mating attempts were recorded throughout the trial period. Males frequently swing their gonopodium forward, and this appears to increase in frequency during mating or aggressive interactions; any gonopodium swing greater than 90° was counted.
Trials with competing males. These trials were conducted to test whether treatments altered male behaviors compared to an untreated male in the context of competition. Males were paired in the following fashion: each pair consisted of one male from the control group (opponent) and one male (focal male) from one of the other four treatment groups: DMSO, atrazine low-dose, atrazine high-dose, or ethynyl estradiol. Control group males were used in multiple pairs as there were not enough males to be used only once. Control group males were paired with each of the different treatment group males in random order. Males of a pair belonged to the same population. Pairs could not be size matched after matching for population; treatment group males were on average 14% of body area (8.32 mm²) larger or smaller than paired control group males. Body size was measured as area rather than length because this was a more realistic measure of what competing males would perceive. Each pair was placed in a trial tank of dimensions 30 × 20 × 15 cm (height × length × width) and 7.5 L of water, with a virgin female from the same population. After a 5 minute acclimation period, behaviors were recorded for 10 minutes. At each 10 s point, I recorded which male was closer to the female. A male had to be more than one body length ahead of the other male to be "closer", and received 1 point in such cases. If both males were within one body length of each other, and within at least two body lengths of the female's vent, they were both recorded as being equally close; in such cases both males received 0.5 points. If both males were further than two body lengths from the female's vent, they were both recorded as being far from the female and received 0 points for that event. At the end of the 10 minute trial period, each male's "closeness" points were summed and its ratio to the total number of events gave a measure of proximity. Throughout the whole trial period, I counted for each male the total number of sigmoid courtship displays, mating attempts, aggressive displays to the rival male, and attacks on the other male. The number of gonopodium swings was not recorded, as these happened in quick succession, and the observer could not keep a reliable count for both males.
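The proximity score can be reproduced from the raw 10 s observations with a few lines of code. The sketch below is a hedged Python illustration, assuming each observation is encoded as which male (if either) was closer; treating "far" events as still counting toward the denominator follows the description above, while the encoding itself is an illustrative assumption.

```python
def proximity_scores(events):
    """Compute each male's proximity score from a list of 10 s observations.
    Each event is 'M1' or 'M2' (that male was more than one body length
    closer to the female), 'both' (both equally close: each gets 0.5), or
    'far' (both far from the female: 0 points)."""
    points = {"M1": 0.0, "M2": 0.0}
    for e in events:
        if e in points:
            points[e] += 1.0
        elif e == "both":
            points["M1"] += 0.5
            points["M2"] += 0.5
        # 'far' events award 0 points to both males
    n = len(events)
    return {m: points[m] / n for m in points}

# Ten minutes at one observation every 10 s would give 60 events; a short
# illustrative sequence is used here.
events = ["M1", "M1", "both", "far", "M2", "both"]
print(proximity_scores(events))  # M1 scores 0.5, M2 about 0.33
```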
Data Analyses
All data were analyzed in SAS 9.2 [62]. All statistical procedures refer to SAS procedures.
Mortality. Univariate survival analyses (LIFETEST procedure) were first used to test which variables (among treatment and population of origin) were to be included in the final model to test for effects on mortality. Based on the log-rank test of equality over strata, population of origin was not included in the model (χ² = 3.096, p = 0.38). Difference in mortality between treatments was then analyzed using regression analysis of survival data based on the Cox proportional hazards model (PHREG procedure).
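The survival analyses above were run in SAS; a rough R analogue using the survival package is sketched below, with simulated data standing in for the real records (all column names and values are assumptions, not the original variables).

```r
library(survival)

# Toy data standing in for the real records: weeks survived (censored at 16),
# an event flag (1 = died, 0 = alive at week 16), treatment and population.
set.seed(42)
n <- 85
surv_dat <- data.frame(
  weeks      = pmin(rexp(n, rate = 0.03), 16),
  treatment  = factor(sample(c("control", "DMSO", "atz_low", "atz_high", "EE2"), n, TRUE)),
  population = factor(sample(paste0("pop", 1:4), n, TRUE))
)
surv_dat$died <- as.integer(surv_dat$weeks < 16)

# Log-rank test of equality over population strata (analogue of PROC LIFETEST)
survdiff(Surv(weeks, died) ~ population, data = surv_dat)

# Cox proportional hazards model for treatment effects (analogue of PROC PHREG)
summary(coxph(Surv(weeks, died) ~ treatment, data = surv_dat))
```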
Area of Orange Spots, Intensity of Orange Spots, and Mating Behaviors in the Absence of Competition. The dependent variables were appropriately transformed to meet the assumptions of parametric tests wherever required. Pearson's product-moment correlations between the measures of color and mating behaviors were analyzed using the CORR procedure. A mixed model ANOVA (MIXED procedure) was used to analyze the treatment effects on (1) Area of orange spots: the change in proportion of orange between initial and final readings, (2) Intensity of orange spots: the change between initial and final readings of corrected R′, G′, and B′ values, and the composite variable, and (3) Mating behaviors in the absence of competition: the number of courtship displays and the number of mating attempts. The correlation coefficients revealed that the number of gonopodium swings was correlated strongly with the number of courtship displays (r = 0.62, P < 0.0001) and weakly with the number of mating attempts (r = 0.26, P = 0.04), and so this variable was eliminated from further analyses. A mixed model ANOVA using the MIXED procedure allows the use of fixed and random factors in the model; the effect of random factors, wherever included in the model, is removed and results are based on least square means that are adjusted for this effect.
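The MIXED-procedure models map naturally onto mixed models in R; the sketch below is only an analogue of the SAS analysis under assumed variable names, shown here for the change in orange area with population of origin as a random effect, and with simulated values.

```r
library(lmerTest)   # lmer() with Satterthwaite F-tests for fixed effects

# Toy data with the assumed structure: change in proportion of orange between
# initial and final readings, treatment group, and population of origin.
set.seed(1)
color_dat <- data.frame(
  treatment    = factor(rep(c("control", "atz_low", "atz_high", "EE2"), each = 15)),
  population   = factor(sample(paste0("pop", 1:4), 60, TRUE)),
  delta_orange = rnorm(60, 0, 0.02)
)

# Mixed-model ANOVA analogous to the MIXED procedure described above:
# treatment as the fixed effect, population of origin as a random intercept.
m_color <- lmer(delta_orange ~ treatment + (1 | population), data = color_dat)
anova(m_color)
```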
Preliminary analyses determined that the control group and solvent control group did not significantly differ from each other for all variables and so the two groups were pooled as a common control group (area of orange spots, P = 0.9; R′, P = 0.22; G′, P = 0.22; B′, P = 0.81; composite variable, P = 0.27; number of courtship displays, P = 0.9; number of mating attempts, P = 0.08). Population of origin was input as the random effect wherever it improved the fit of the model as determined by significantly lower Akaike Information Criteria values (henceforth AIC statistics). For behavioral responses in the absence of competition, the identity of the female used for the trial (because females were used in multiple trials) was also included as a random factor, and time of day that the trial was conducted was included as a covariate, wherever these improved the fit of the model as determined by AIC statistics. Planned orthogonal contrasts were used to test whether (1) the atrazine low-dose and high-dose had similar effects on the response variables, (2) the two atrazine groups had significantly different effects on the response variables compared to the pooled control group, and (3) ethynyl estradiol had the strongest effect on the response variables compared to the other groups. One-tailed p-values were reported for these tests because of the clear directionality of the hypotheses. Further, Tukey's post-hoc tests were used to see which groups differed significantly from each other. Effect sizes with 95% confidence intervals of the differences between each of the treatment groups and the pooled control group were calculated as per Nakagawa and Cuthill [63].
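The standardized effect sizes with 95% confidence intervals can be sketched as follows; this is a generic Cohen's d calculation with a large-sample interval in the spirit of Nakagawa and Cuthill, not necessarily the exact formula used in the original analysis, and the example values are simulated.

```r
# Hedged sketch: Cohen's d between a treatment group and the pooled control,
# with a large-sample 95% CI; exact variants differ slightly between sources.
cohens_d_ci <- function(x_trt, x_ctl) {
  n1 <- length(x_trt); n2 <- length(x_ctl)
  s_pooled <- sqrt(((n1 - 1) * var(x_trt) + (n2 - 1) * var(x_ctl)) / (n1 + n2 - 2))
  d  <- (mean(x_trt) - mean(x_ctl)) / s_pooled
  se <- sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2 - 2)))
  c(d = d, lower = d - 1.96 * se, upper = d + 1.96 * se)
}

# Example with simulated changes in orange area (illustrative numbers only)
set.seed(2)
cohens_d_ci(rnorm(9, -0.03, 0.02), rnorm(26, 0, 0.02))
```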
Mating behaviors in the presence of competition. The dependent variables were appropriately transformed to meet the assumptions of parametric tests wherever required. Pearson's product-moment correlations between all variables were analyzed using the CORR procedure. Due to the moderate correlations between some of the response variables (see Table 1 for correlations), and because all behaviors recorded on a pair of fish occurred during the same trial period, a MANOVA was conducted with the GLM procedure using the focal male's responses from each pair. Covariates and random effects were not included as the GLM procedure is not equipped to handle these additional effects. Each response variable was then analyzed separately. Since males were paired, and their behaviors were dependent on each other, an ANCOVA was performed with the MIXED procedure to analyze the effect of the treatments on the focal male's behavior in response to his paired opponent's behavior, which was included as the covariate. Covariates were mean-centered within treatments so that mean estimates for each treatment corresponded with the mean value of the covariate. I specifically tested for differences between treatment intercepts (seen by a significant effect of the treatment) and slopes (seen by a significant interaction of treatment by covariate). A negative effect of the treatments on competitiveness would be indicated by a reduced slope and intercept of the relationship described above, compared to the DMSO (solvent control) group. The difference between the competing males in body size (measured by area of body in mm²) and proportion of body area covered by orange were input as additional covariates if they improved the fit of the model as determined by AIC statistics. Similarly, population of origin and control male's identity (because males from the control group were used in multiple pairs) were input as random effects wherever they improved the fit of the model. Further, each treatment-control paired data set was analyzed separately for each treatment (DMSO, atrazine low-dose, atrazine high-dose or ethynyl estradiol) with a paired design to test whether the treatment male consistently behaved differently from his paired control opponent, depending on what the treatment was. A mixed model ANOVA (MIXED procedure) was used to test this, with the pair identity input as a random effect with compound symmetry as the covariance structure. Population of origin was also input as a random effect wherever it improved the fit of the model as determined by AIC statistics. Time of day that the trial was conducted, the control male's trial number, the differences in body size and proportion of body area covered by orange between the competing males were input as covariates if they significantly improved the fit of the model.
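The within-treatment mean-centering and the ANCOVA testing treatment intercepts and slopes were also run in SAS; a hedged R analogue with simulated paired data, using attacks on the rival as the example response, might look like this.

```r
library(lmerTest)

# Toy paired data: each row is a focal (treated) male, with his paired control
# opponent's attack count as the covariate, mean-centered within treatments.
set.seed(3)
pair_dat <- data.frame(
  treatment        = factor(rep(c("DMSO", "atz_low", "atz_high", "EE2"), each = 12)),
  population       = factor(sample(paste0("pop", 1:4), 48, TRUE)),
  opponent_attacks = rpois(48, 4),
  focal_attacks    = rpois(48, 3)
)
pair_dat$opp_centered <- with(pair_dat,
  ave(opponent_attacks, treatment, FUN = function(z) z - mean(z)))

# ANCOVA: the treatment term tests differences in intercepts, and the
# treatment-by-covariate interaction tests differences in slopes.
m_att <- lmer(focal_attacks ~ treatment * opp_centered + (1 | population),
              data = pair_dat)
anova(m_att)
```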
Mortality
There were no significant effects of the treatments on mortality rate (likelihood ratio test: χ² = 6.87, df = 4, P = 0.14). The ethynyl estradiol group had the highest mortality over the 16 week period (47.06%) but the hazard ratio was not significantly higher than the control group (Hazard ratio = 3.38, P = 0.07). The mortality in the other groups was as follows: control 17.65%, DMSO 29.41%, atrazine low-dose 23.53%, and atrazine high-dose 11.76%. At the end of the exposure period, the number of surviving fish in each of the groups was: control = 14, DMSO = 12, atrazine low dose = 13, atrazine high dose = 15, ethynyl estradiol = 9.
Color
The treatments had a significant effect on the change in body area covered by orange (F3,58 = 14.19, P < 0.0001; figure 1). This effect was mainly driven by the ethynyl estradiol group, which had a significantly lower proportional area of orange than the pooled controls (P < 0.0001, effect size ± 95% confidence interval [d ± 95% CI] = −2.27 ± 0.89), and all the other groups combined (planned orthogonal contrasts, P < 0.0001). The atrazine high-dose appeared to reduce the area of orange (d ± 95% CI = −0.76 ± 0.68, figure 1), but this was not statistically significant (P = 0.098). The atrazine low-dose did not reduce the area of orange (d ± 95% CI = −0.12 ± 0.65), and the two atrazine groups differed from each other (planned orthogonal contrasts, P = 0.055, figure 1). Because of the difference between the two atrazine groups, they did not collectively reduce the area of orange compared to the pooled control group (planned orthogonal contrasts, P = 0.15). The Tukey's post-hoc tests brought out significant differences only between the ethynyl estradiol group and each of the other groups. The loss of power resulting from all pair-wise comparisons led to a lack of statistical evidence for a difference between the atrazine high-dose and pooled control groups (unadjusted P = 0.04, Tukey's adjusted P = 0.14). The treatments did not affect the change in corrected R′ (F3,57 = 0.29, P = 0.83), G′ (F3,57 = 0.53, P = 0.67), and B′ (F3,57 = 0.19, P = 0.90) values, or the composite variable (F3,57 = 0.31, P = 0.82). The planned orthogonal contrasts did not reveal any significant patterns. Population of origin failed to improve the fit of the model for explaining the variation in body area covered by orange or corrected R′, G′, B′ and the composite variable, suggesting that this factor was not important in explaining the change in color over the study period.
Mating behaviors in the absence of competing males
The number of mating attempts was weakly but negatively related to the proportion of body area covered by orange (r = −0.25, P = 0.046); there were no other significant correlations between any of the other measures of color and behavioral variables. The number of courtship displays differed significantly between treatments (F3,61 = 9.79, P < 0.0001; figure 2). The planned orthogonal contrasts determined that the ethynyl estradiol group displayed significantly less than the other groups (P < 0.0001). The two atrazine groups displayed similarly to each other (P = 0.40), and together they displayed significantly less than the pooled controls (P = 0.01). The effect sizes showed that the ethynyl estradiol group displayed less than the pooled control group (d ± 95% CI = −2.15 ± 0.65), as did the atrazine high-dose group (d ± 95% CI = −0.64 ± 0.64), but the atrazine low-dose group did not display less than the pooled control group (d ± 95% CI = −0.56 ± 0.65). The Tukey's post-hoc tests revealed similar trends, though the lack of power weakened some of these results. The number of mating attempts did not differ between groups (F3,58.1 = 2.01, P = 0.12), and none of the planned orthogonal contrasts showed significant differences. Population of origin improved the fit of the model to explain variation in courtship display rates, but not the number of mating attempts.
Table 1. Pearson's correlation coefficients between variables of mating behavior in the presence of competing males.
Mating behaviors in the presence of competing males
The measures of mating effort were all moderately correlated with each other (proximity to the female and number of courtship displays: r = 0.51, P = 0.0002, number of courtship displays and number of mating attempts: r = 0.40, P = 0.0056, proximity to the female and number of mating attempts: r = 0.40, P = 0.0059; Table 1), and the measures of aggression were also moderately associated with each other (the number of aggressive displays and the number of attacks on paired male: r = 0.41, P = 0.004; Table 1). Further, the proximity to a female was negatively associated with the number of aggressive displays (r = −0.37, P = 0.01; Table 1), and this is because males do not focus on the female during aggressive interactions and can often be far from her. The treatments had a significant effect on the focal males' responses as a whole (Wilks' λ = 0.42, F15,119.11 = 2.95, P = 0.0005).
The treatments did not have significant effects on the proximity or number of mating attempts, or their interaction with their opponent's behaviors; the treatments significantly influenced the number of displays, but this was driven by the effect of ethynyl estradiol rather than either of the atrazine groups (figure 3a, b, c; table 2). Population of origin improved the fit of the model explaining variation in proximity and number of courtship displays, but not the number of mating attempts. The number of mating attempts was influenced by the difference in body size between the competing males (P = 0.04); the larger the focal male was compared to his paired control opponent, the more mating attempts he made. There was a significant effect of the treatment (F3,28.9 = 8.25, P = 0.0004) and the interaction of treatment and covariate (F3,28.4 = 10.37, P < 0.0001) on the number of attacks on the competing male (figure 3d). The atrazine high-dose and ethynyl estradiol treatments significantly reduced the slopes and intercepts of the regression lines between the focal male's behavior and the paired control male's behavior (table 2) compared to the DMSO group. Treatments also affected the number of aggressive displays made to the rival male (F3,30 = 4.1, P = 0.015; figure 3e) but had no effect on the interaction of treatment and covariate, as there was no significant effect of the covariate itself. Both these variables were also influenced by the identity of the paired control male.
The analyses of the effects of treatments within pairs showed that the solvent control males did not differ from control males with regard to any of the variables tested (proximity,
Differential susceptibility to atrazine
Population of origin did not affect mortality rates, suggesting that guppies from the different populations were not differentially impacted. Atrazine treatments did not influence mortality rates. However, estradiol can be toxic [64,65,66], and ethynyl estradiol may have moderately increased mortality in this study, though the trend was not statistically significant.
Although the different populations would vary naturally in the intensity and area of orange [41], it is not surprising that they did not respond differently to the treatments, because the response variable analyzed was the change in these variables over the exposure period. On the other hand, the number of courtship displays was influenced by population of origin; it is well known that guppies from different populations display at different rates [41,67,68]. Similarly, display rates and proximity of the focal male in relation to that of the paired control male were influenced by population of origin. This appears to be an artifact of the inherent difference in courtship intensity between high predation and low predation sites [41,67,68]. Possibly, in the low predation sites, individuals are more conspicuous in their competitiveness and respond to high displaying competitors by also displaying more. But in high predation sites, individuals may be more cautious in responding similarly. Interestingly, the number of mating attempts in the presence or absence of competitors was not influenced by population of origin. Perhaps because sneak copulations are less conspicuous than courtship displays [67], males in any predation regime would perform these at comparable rates; however, this may not always be the case [68]. But it must be noted that the fish in this study had been raised in the absence of predators for a few generations, and some plasticity may account for the lack of antipredatory behaviors.
Impaired mating signals and implications for sexual selection
As seen in other studies examining the effects of EDCs on sexual traits [26,43,45,50,53,69,70,71,72,73], prolonged atrazine exposure reduced courtship display rates, and there was a trend for reduced expression of ornament size. The high dose of atrazine reduced the area of orange by 1%; this can alter female responses to male displays [74] such that his reproductive success is significantly reduced by two matings [75]. Area of orange is a highly heritable trait in guppies [41], and any reduction in the area must be due to reduced allocation of carotenoids to the orange spots. Though the preference for orange color varies across populations [76], female guppies generally show a preference for brighter males performing more courtship displays [41,46,77], and these appear to be honest signals of mate quality [78,79,80,81]. In this study, the number of courtship displays was not related to the proportion of body area covered by orange; but color was associated with mating behaviors in other ways (results not shown): a composite variable including the number of courtship displays and gonopodium swings was moderately correlated with the corrected blue channel, B′, a measure of intensity of the orange spots, indicating that color intensity was associated with displays. The number of mating attempts was negatively, albeit weakly, related to the proportion of body area covered by orange, suggesting that less colorful males tended to use sneaker strategies more frequently than more colorful males.
It is particularly interesting that the behavior most affected by atrazine exposure was one believed to be an honest mating signal. Several studies (such as [82,83,84] among others) indicate that sex hormones play an important role in maintaining the honesty of such signals via immuno-suppressing mechanisms: increased testosterone required for the maintenance of sexual signals can damage the immune system, and individuals with an already compromised immunocompetence would be unable to signal effectively [85]. Other mating strategies like forced and sneaky copulations may be governed more by factors such as population sex ratios [86,87], predation risk [88] and dominance hierarchies [89]; it is unclear whether sex hormones play a role in the expression of these behaviors in any species, with the exception of one example [90]. In this study, the number of mating attempts was not affected by atrazine exposure. Forced copulatory attempts in guppies are not always successful [47] and are under selection pressure via male-male competition [86] and predation [88]. These patterns then raise the question whether environmentally altered hormone levels could affect the honesty of mating signals, and whether alternate mating strategies might become more dominant in populations impacted by EDCs [9]. Experiments testing such ideas would be valuable contributions to the fields of ecotoxicology and evolutionary biology. A few studies have analyzed the effects of EDCs on male competitive behaviors [91,92,93,94]. Male-male competition is high in many species, and an individual's aggression levels can influence his access to mates [95,96]. Pollutants are often unequally distributed across landscapes and within habitats. It is thus reasonable to expect individuals who have been impacted differently to compete against each other, especially in species that are migratory or that converge at breeding sites. The results of this study show that atrazine-impaired males in such cases may be at a mating disadvantage compared to those exposed less or not at all. Interestingly, in the presence of a rival male, the measures of mating effort (proximity to the female, number of courtship displays and number of mating attempts) were altered relatively little by atrazine exposure, but aggression was strongly reduced. I observed that when competing, the two males focused more on aggression and less on mating effort; as a result, treatment effects were stronger for the variables of aggression than for the variables of mating effort. It is pertinent to note that the difference between competing males in body area covered by orange did not influence any competitive behaviors, while differences in body size influenced only the number of mating attempts.
Table 2. Intercept and slope estimates of the treated males' behaviors in relation to those of the paired control males for each treatment group, as generated by the ANCOVA.
Aggressive displays are employed by animals to discourage the rival from attacking or competing for the resource, thereby circumventing active combat [97]. During behavioral trials, I observed that aggressive displays by one individual did not necessarily provoke aggressive displays by the other; however, attacks by one individual provoked a responding attack from the other, resulting in active fighting. Thus, I did not find a relationship between the number of aggressive displays by the focal and opponent males, but I did detect this relationship in the case of attacks, and atrazine exposure reduced the strength of the relationship. The paired control male's identity influenced the focal male's responses, suggesting that some individuals elicited stronger aggression than others. Despite this effect, the treatments had a significant effect on aggression levels. Further experiments testing whether the reduced aggression translates to reduced reproductive success would be informative. Also, it is important to know whether the EDC-altered aggression levels affect stress of exposed individuals [98], thereby influencing survival and self maintenance.
Altered mating signals and behaviors can influence population dynamics in many ways. An increased number of unattractive males in the population would alter the effective sex ratio, as females of many species, including guppies, exercise strong mate preference for sexual traits. A reduction in attractive males can also influence extra-pair mating rates [99], which can in turn alter offspring quality, disease transmission rates and predation risk [100]. EDC-altered sexual traits may not correlate with mate quality, thus blurring the relationship between mate quality and signal; this can lead to females making "incorrect" mate choices that reduce their offspring quality and number [9]. These and other impacts of altered mate choice on population dynamics have been reviewed by Quader [100].
Understanding the population level effects of EDC-altered mating signals is important to conservation biology. Many contaminants are persistent and remain in the environment at substantial concentrations for several years [101], spanning multiple generations of short-lived species. Multi-generational disruption of sexual traits can alter evolutionary trajectories [9]. Future studies that aim to assess the evolutionary effects of altered sexual traits as a result of pollution must evaluate the long-term ecological consequences of chronic and persistent contamination.
Atrazine
Several studies of sub-lethal effects of atrazine have demonstrated estrogenic effects [33,102] and negative impacts on measures of reproduction, including fecundity, gonadal morphology, sperm counts, and hormone production [35,36,38,102,103,104]; Rohr and McCoy [39] have reviewed several such studies. A few studies have also examined the effects of atrazine exposure on secondary sexual traits: Hayes and colleagues found that larval exposure to low doses of atrazine reduced larynx size [36] and structure [35] in African clawed frogs. The larynx is important for vocalization, the primary mating signal in many anuran species; males with smaller larynxes produce suboptimal calls. However, there is still a dearth of literature on the effects of atrazine on sexual traits. The current study advances this issue and should encourage further focus on these key effects.
The low dose of atrazine affected only courtship display rates, and not any of the other variables measured, indicating that at this concentration (a minimum of 0.26 ppb), not all mating signals are impaired in guppies. Whether this concentration may affect mating signals in other species remains to be tested; as mentioned earlier, African clawed frog larvae exposed to atrazine concentrations ranging from 1-200 ppb showed reduced larynges at metamorphosis [36]. Where there was an effect of atrazine, especially the high dose, the direction of the effect was similar to that of ethynyl estradiol, suggesting that at higher doses clear estrogenic patterns may have arisen. It must be kept in mind that non-sexual behaviors were not measured in this study and so it is possible that the effects of atrazine on sexual behaviors may be due to poor health in general. Regardless, the impacts on sexual traits seen here are significant enough to be of concern. Dose-response studies with a larger range of atrazine concentrations would help determine the concentrations and exposures influencing different end-points in wildlife species. Understanding the effects on sexual traits is especially important because of their subtle yet crucial implications for reproduction and population dynamics. More studies along these lines will highlight the negative impacts of atrazine on wildlife reproduction. There may be similar effects on human health as well, because the mechanism of action of atrazine is similar across most vertebrate taxa, including humans [102].
Divergent mechanisms of reduced growth performance in Betula ermanii saplings from high-altitude and low-latitude range edges
The reduced growth performance of individuals from range edges is a common phenomenon in various taxa, and is considered to be an evolutionary factor that limits the species' range. However, most studies did not distinguish between two mechanisms that can lead to this reduction: genetic load and adaptive selection to harsh conditions. To address this lack of understanding, we investigated the climatic and genetic factors underlying the growth performance of Betula ermanii saplings transplanted from 11 populations including a high-altitude edge and a low-latitude edge population. We estimated the climatic position of the populations within the overall B. ermanii distribution, and the genetic composition and diversity using restriction-site associated DNA sequencing, and measured survival, growth rates and individual size of the saplings. The high-altitude edge population (APW) was located below the 95% significance interval for the mean annual temperature range, but did not show any distinctive genetic characteristics. In contrast, the low-latitude edge population (SHK) exhibited a high level of linkage disequilibrium, low genetic diversity, a distinct genetic composition from the other populations, and a high relatedness coefficient. Both APW and SHK saplings displayed lower survival rates, heights and diameters, while SHK saplings also exhibited lower growth rates than the other populations' saplings. The low heights and diameters of APW saplings were likely the result of adaptive selection to harsh conditions, while the low survival and growth rates of SHK saplings were likely the result of genetic load. Our findings shed light on the mechanisms underlying the reduced growth performance of range-edge populations.
INTRODUCTION
Even in the absence of geographic barriers, all species have finite geographic ranges, with several ecological and evolutionary factors contributing to a species' range limit, often with complex interactions (Hoffmann and Blows, 1994; Willi and Buskirk 2019). From an ecological perspective, niche limitation is the primary factor that influences species' range limits. From an evolutionary perspective, the accumulation of genetic load in range-edge populations as a result of enhanced genetic drift and inbreeding plays a crucial role in limiting the ability of populations to adapt and expand beyond their range edge (Henry et al. 2015; Willi 2019; Perrier et al. 2022). Towards the range edge, population size tends to be smaller than in core habitats, and populations are often isolated (Kawecki 2008; Pironon et al., 2017). Small populations, especially those isolated from others, contain low genetic variation and are more vulnerable to genetic drift (Eckert et al. 2008; Willi and Buskirk 2019). Genomic signatures of accumulated genetic load causing reduced fitness in range-edge populations have been observed in several species (Zhang et al. 2016; Willi et al. 2018).
Another evolutionary factor that can limit species' range is selection (Hoffmann and Blows 1994). The opportunity for selection at the range edge might be greater than at the range center because the environment is less suitable for a species (Caruso et al., 2017; Angert et al. 2020). However, unique adaptations to niche-limited conditions can result in the fixation of favored alleles, leading to poor growth performance of range-edge populations outside local conditions (Hoffmann and Blows 1994). In small and isolated populations, purifying selection is less effective in eliminating genetic load, which means that deleterious alleles are more likely to accumulate in the genome. On the other hand, maladaptive gene flow from geographically central populations to range-edge populations can prevent local adaptation at the range edge and restrict further range expansion (the genetic swamping hypothesis; Haldane 1956). These intrinsic evolutionary factors provide some explanation of why range-edge populations fail to adapt and expand beyond the edge.
Common garden experiments can be used to compare performance between populations, and the low growth performance of individuals from range-edge populations is commonly reported (examples of tree species' studies include Oleksyn et al. 1998; Andersen et al. 2008; Kreyling et al. 2014; Lu et al. 2014). These studies provide strong evidence that range-edge populations perform poorly when transplanted elsewhere, and it is assumed that certain evolutionary factors caused this reduced growth performance. However, in many cases, common garden experiments which demonstrated reduced growth performance of range-edge populations have explained the underlying factors without directly characterizing both climatic conditions and genetic properties. In addition, the factors reducing the performance of range-edge populations may differ depending on the species, because climatic adaptive strategies can differ among species (Frank et al. 2017). Because the patterns and processes of decline in growth performance of range-edge populations may vary with study species, climatic conditions and genetic characteristics, more case studies investigating such differences are needed.
Some studies claim that geographic range edges do not correspond to climatically marginal conditions for many species (Tsumura and Ohba 1993; Lira-Noriega and Manthey 2014; Pironon et al. 2015; Oldfather et al. 2020). In an analysis of 135 transplanted species, local populations showed strong adaptations to thermal marginal conditions, but their performance across various sites was no worse than other populations (Bontrager et al. 2021). This implies that climatic selection does not reduce the growth performance of individuals. For example, range-edge populations of Arabidopsis lyrata which showed moderate genetic diversity displayed good growth performance even under climatically distinct conditions from their local habitats (Sánchez-Castro et al. 2022). In addition, recent studies have suggested there is little evidence for the genetic swamping hypothesis; rather, gene flow from central populations has a positive effect on population fitness at the range edge rather than reducing fitness (Kottler et al. 2021). Moreover, Dauphin et al. (2020) indicated that the genetic diversity of tree species varies with geographic position rather than climatic position (but see Mosca et al. 2012). Therefore, even in climatically marginal conditions, a difference in growth performance would be expected between populations that have enough gene flow to maintain moderate genetic diversity, compared with populations that are completely isolated with low genetic diversity.
In this study, the aim was to compare the relative position of each population within the species' overall climatic envelope, and the genetic composition and genetic diversity of Betula ermanii, which is dominant in subalpine and alpine forests in Japan, and to test any associations between survival, individual size and growth at the various planting sites. This study focused on 11 populations of B. ermanii in Japan including a tree line (high-altitude edge) and a southernmost (low-latitude edge) population. In the high-altitude edge population, individual trees of B. ermanii are characterized by low tree height, bending and multiple-stemmed trunks, which are the result of adaptations to strong winds and heavy snow fall (Okitsu 1991). The low-latitude edge population can be regarded as a rear-edge population of the whole B. ermanii habitat and might have been affected by genetic drift because of its small population size and isolation from other B. ermanii populations. Even at the range edge, high-altitude populations are more likely to maintain genetic diversity than latitudinal edge populations, as proximity to other populations facilitates gene flow (Davis and Shaw 2001; Jump et al. 2009; Halbritter et al. 2015) without reducing the growth performance of saplings. This study comprised a common garden experiment at multiple sites to observe the survival rate, size and growth of saplings originating from 11 different populations, and to test the differences in growth performance and genetic factors between two types of range-edge populations: high altitude and low latitude.
Study species
Betula ermanii is a wind-pollinated deciduous tree species distributed in cool and snowy environments across eastern Russia, north China, Korea and Japan. The central range of the species is the Kamchatka Peninsula (Krestov 2003). Populations in southern Japan can be regarded as low-latitude edge populations. The species occurs from 600-800 m a.s.l. in the Kamchatka Peninsula, 700-1800 m a.s.l. in the Russian Far East, 400-1100 m a.s.l. in the Kuril Islands (Krestov 2003), 1700-2200 m a.s.l. on Changbai Mountain, the highest mountain in Northeast China (Wu et al. 2013; Yu et al. 2014) and 1500-1900 m a.s.l. on Mt. Hallasan, the highest mountain in Korea (TA's personal observation). In Japan, this species is widely found in the subalpine forest (about 1500-2000 m a.s.l.) and at the tree line of high mountains (about 2000-3000 m a.s.l.). Therefore, populations around the tree line of high mountains in Japan can be regarded as high-altitude edge populations.
Betula ermanii is a monoecious species, and flowering occurs at the same time as bud break in early spring. Blooming of the male flowers occurs earlier than that of the female flowers, which helps prevent self-pollination, though the germination rate of self-pollinated seeds is very low (Mori 1998). Molecular analysis suggested that B. ermanii is an allo-tetraploid species (Wang et al. 2021). A practical problem specific to genetic analyses of polyploids is the inherent difficulty in obtaining the dosage of alleles, so that different partial heterozygous genotypes (e.g., AABC, ABBC, ABCC) cannot be distinguished (Meirmans et al. 2018). Thus, in this study, we estimated parameters that were independent of the ploidy level or regarded as having no polyploid-specific biases associated with the missing dosage information.
Common garden experimental design
To compare the difference in growth performance between populations from across the wide range of B. ermanii, we established common gardens using their saplings. For the common garden establishment, we set up sites in various environmental conditions to test growth performance over a wide range of growing site conditions. Therefore, we established eight common gardens of B. ermanii saplings scattered throughout Japan.
In autumn 2019 or spring 2020, the containerized saplings were planted at eight established common garden sites across Japan: Nayoro (NYR), Sado high altitude (SDH), Tsukuba (TKB), Yatsugatake (YGT), Hiruzen (HRZ), Chiba (CBA), Shitara (STR) and Tano (TAN) (Fig. 1). The climatic conditions at each site are described in Paing et al. (2022), and these sites were outside the natural habitat range of B. ermanii. Briefly, the reason for this experimental design was to test the genetic difference of growth performance under conditions that the species may be exposed to under global warming and to try to detect related genes for local adaptation. In Japan, our study species typically inhabits steep areas on mountain ranges which are in strict conservation areas, and establishing a sufficient number of planting sites within the natural range of this species is not feasible. At each site, the saplings were planted at 1.6-m intervals with a random planting design, to minimize potential systematic effects on the results of within-site microtopographical and environmental gradients. The total number of saplings planted at each common garden site was 183 (10 saplings from GYS, nine from AKS, four from CKS, and 20 from each of the other eight origin populations, due to the limited number of saplings available).
Climatic position of the origin populations
We estimated the climatic position of the 11 origin populations within the range of potential habitat of B. ermanii across Japan. Because of a data collection bias, the location data available online or in academic articles was not complete for the entire range of B. ermanii. Therefore, we predicted the overall B. ermanii habitat in Japan from climatic variables, then estimated the climatic positions of the populations within it.
Habitat suitability for B. ermanii in Japan was predicted using the maximum entropy algorithm in MaxEnt (Phillips and Dudík 2008). For presence data, we used a total of 1332 location data points, extracted from the national vegetation survey database (Ministry of the Environment 2021), in addition to the data for the 11 origin populations (Supplementary Fig. 1). For absence data, 10,000 randomly selected background points were used. For climatic factors explaining B. ermanii's distribution, we used four bioclimatic variables that are important for the distribution of cool temperate and subalpine tree species in Japan, including Betula species (Tsuyama et al. 2014; Tsuda et al. 2015; Shitara et al. 2021): Bio 6, the mean daily minimum temperature of the coldest month; Bio 10, the mean temperature of the warmest quarter; Bio 18, the precipitation during the warmest quarter; and Bio 19, the precipitation during the coldest quarter. These bioclimatic variables were downloaded from CHELSA (Karger et al. 2017) at a 1 × 1 km spatial resolution. The model performance was assessed using the area under the curve (AUC) of a receiver operating characteristic (ROC) analysis. AUC values range between 0.5 (the model has no discrimination ability) and 1.0 (perfect discrimination) (Zweig and Campbell 1993). The threshold of habitat suitability characterizing B. ermanii habitat was set at the value maximizing the sum of the true positive rate and the true negative rate. We assumed that regions beyond the threshold of habitat suitability represented potential habitat for B. ermanii in Japan. Analyses were undertaken using the dismo package in R version 4.0.4 (R Development Core Team 2021).
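The evaluation step can be sketched in R with the dismo package; the predicted suitabilities below are simulated, and fitting MaxEnt itself (dismo::maxent) additionally requires the Java maxent.jar and the climatic rasters, which are omitted here.

```r
library(dismo)

# Hedged sketch of the evaluation only: given predicted suitabilities at
# presence and background points (toy values here), compute the AUC and the
# threshold that maximizes sensitivity + specificity, as used to delimit
# potential habitat.
set.seed(4)
pres_pred <- rbeta(1332, 5, 2)     # suitability at presence points (simulated)
back_pred <- rbeta(10000, 2, 5)    # suitability at background points (simulated)

ev <- evaluate(p = pres_pred, a = back_pred)
ev@auc                              # area under the ROC curve
threshold(ev, stat = "spec_sens")   # max (sensitivity + specificity) threshold
```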
We obtained values for the mean annual temperature (MAT) and annual precipitation (AP) of the 11 origin populations and the potential habitat range for B. ermanii in Japan from the CHELSA bioclimatic variables. Then, we evaluated the climatic positions of the 11 origin populations within the overall habitat range in Japan.
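A minimal sketch of this extraction step, using a small synthetic raster in place of the 1-km CHELSA layers and placeholder coordinates for the populations, is shown below.

```r
library(raster)

# Hedged sketch: values of a climate layer are read at population coordinates
# with raster::extract(). The raster and coordinates below are placeholders,
# not the actual CHELSA data or population locations.
r_mat <- raster(nrows = 50, ncols = 50, xmn = 128, xmx = 146, ymn = 30, ymx = 46)
values(r_mat) <- runif(ncell(r_mat), -6, 16)       # toy mean annual temperature

pops <- data.frame(lon = c(138.2, 140.1, 142.9),   # placeholder coordinates
                   lat = c(35.4, 39.0, 43.5))
pops$MAT <- extract(r_mat, pops[, c("lon", "lat")])
pops
```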
DNA extraction and RAD sequencing
This study performed double-digest restriction site-associated DNA sequencing (ddRAD-seq) analyses on the saplings in the planting sites, not on the mother trees in each population. We chose this analysis procedure because we intended to analyze the relationships between the growth performance and its underlying molecular basis. Although such an investigation is now being undertaken as part of a separate study, here we conducted a preliminary investigation which provides necessary information for the growth performance analyses. In these cases, family structure may affect the results to some degree; however, we collected seeds from 7 to 15 mother trees in each population, and B. ermanii is a wind-pollinated species that produces tens of thousands of seeds per mature tree per year. Thus, we believe the family structure effect in each population is very small in our study.
First, at each common garden planting site, leaf samples from surviving saplings were collected in summer 2021. Total genomic DNA was extracted from the leaves following a modified 2×CTAB (cetyltrimethylammonium bromide) protocol (Murray and Thompson 1980). The extracted DNA was assessed with a Varioskan LUX (Thermo Fisher Scientific, Waltham, MA, USA), and, as far as possible, a concentration of 20 ng/μl of DNA was extracted as a minimum from each sample. After purifying the extracted DNA using an Agencourt AMPure XP (Beckman Coulter Life Sciences, Pasadena, CA, USA), its concentration was quantified using the Varioskan LUX. The ddRAD-seq libraries were prepared based on Peterson et al. (2012). Briefly, genomic DNAs were double digested using PstI and Sau3AI restriction enzymes (Invitrogen, Waltham, MA, USA), ligated with Y-shaped adaptors, and amplified using a polymerase chain reaction (PCR) with KAPA HiFi polymerase (KAPA Biosystems, Boston, MA, USA). After PCR amplification with adapter-specific primer pairs (Access Array Barcode Library for Illumina, Fluidigm, South San Francisco, CA, USA), an equal amount of DNA from each sample was mixed and size-selected with BluePippin 2% agarose gel (Sage Science, Beverly, MA, USA). Library fragments between 450 bp and 600 bp were retrieved. The quality of the library was checked using KAPA library quantification kits on a LightCycler 480 Instrument (Roche, Basel, Switzerland). Finally, nucleotide sequence libraries were sequenced using a high-throughput Illumina Hi-Seq X Ten platform (Macrogen, Inc., Seoul, South Korea) to generate paired-end reads that were 150 bp long.
SNP calling and filtering
The following SNP filtering procedure was performed separately for each planting site, to obtain a sufficient number of SNPs with as few missing data as possible. First, the raw data were trimmed using fastp in paired-end mode (Chen et al. 2018). Reads with a quality of below 20 within a sliding window of 5 bp, and reads shorter than 40 bp after trimming, were discarded. The dDocent pipeline (Puritz et al. 2014) was used for quality trimming (Trimmomatic v.0.33) (Bolger et al. 2014), read mapping (BWA mem v.0.7.12) (Li and Durbin 2009) and single nucleotide polymorphism (SNP) calling (FreeBayes v.0.9.20). In the read mapping step, filtered reads for each sample were aligned to the whole genome sequence of B. pendula (Salojärvi et al. 2017). We followed the default settings of dDocent for mapping and SNP calling. We selected sites that were polymorphic within the samples, and achieved a higher sequencing quality and fewer missing genotypes by using VCFtools (Danecek et al. 2011) and vcflib (Garrison et al. 2022), following the procedure in the dDocent tutorial (Puritz 2022), except for the filtering steps based on allele balance and the Hardy-Weinberg equilibrium, which were thought to be unsuitable for the polyploid genome. Specifically, for the first filter, sites with >50% missing data across all individuals, and sites with a minor allele count <3 and quality value <30, were excluded. As a second filter, we removed sites with fewer than 4 reads, and removed individuals with more than a 30% missing rate. Then, we removed poor coverage sites with a <95% genotype call rate among the remaining individuals. We also removed sites that had reads from both strands. In addition, we removed sites with a <0.25 quality score, and sites that did not have quality scores that were 2 times the depth.
Population structure and genetic diversity
As well as SNP filtering, estimation of population structure and genetic diversity was performed individually for each planting site. First, we calculated the coefficients of linkage disequilibrium (LD) using squared allele-frequency correlations (r²). Pairwise r² was calculated across all SNPs using the --geno-r2 function in VCFtools. In addition, the proportion of significant LD pairs (p-value of Chi-square statistics <0.01) was calculated using the --geno-chisq function in VCFtools.
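As a rough illustration of the LD statistic, the squared correlation between genotype dosages at two SNPs can be computed directly in R; VCFtools' --geno-r2 output is derived from the genotype data in a comparable way, though the exact estimator may differ, and the genotypes below are simulated.

```r
# Hedged sketch of the LD statistic: squared correlation (r2) between the
# genotype dosages (0/1/2) of two biallelic SNPs, as a simple stand-in for the
# pairwise values reported by VCFtools.
set.seed(5)
snp_a <- sample(0:2, 100, replace = TRUE)
snp_b <- ifelse(runif(100) < 0.7, snp_a, sample(0:2, 100, replace = TRUE))
r2 <- cor(snp_a, snp_b)^2   # high values indicate the two SNPs are correlated
r2
```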
LD-based SNP pruning was then implemented in PLINK v1.9 (Chang et al. 2015), to select only those SNPs that were generally uncorrelated with each other, by applying the following criteria: a window size in SNPs of 50, five SNPs to shift the window at each step, and a variance inflation factor (VIF) threshold of 1. To avoid error and bias, we also removed sites and individuals that had a missing rate >0.1.
Because B. ermanii is a tetraploid species, different partial heterozygous genotypes cannot be distinguished if the dosage information for the alleles is missing (Meirmans et al. 2018). We estimated the parameters that were independent of the ploidy level or regarded as having no polyploid-specific biases associated with the missing dosage information. We estimated the gene diversity (He) of each population using the program GenAlEx v.6.5 (Peakall and Smouse 2012). He is equal to the probability that two randomly picked alleles are not identical; Nei (1987) refers to this statistic as the "gene diversity" to illustrate its independence of the ploidy level (Meirmans et al. 2018). We also estimated nucleotide diversity (π) as a parameter of genetic diversity for each population, using the Hierfstat package (Goudet 2005) in R 4.2.2 (R Development Core Team 2022). π quantifies the mean ratio of nucleotide differences among all pairwise comparisons for a set of sequences. For quantifying genetic differentiation between origin populations, we estimated the ρ statistic (Ronfort et al. 1998), an analogue of the commonly used FST, for the 11 origin populations using the program GenoDive v.3.0 (Meirmans 2020). ρ is a statistic for population structure and is designed to be comparable between ploidy levels (Meirmans et al. 2018). Based on the ρ values for the 11 populations, we performed a principal coordinate analysis (PCoA) using the cmdscale() function in R 4.0.4. We also analyzed the population structure of the 11 origin populations for each planting site, using the ADMIXTURE v1.3.0 program (Alexander et al. 2009). We ran ADMIXTURE for K = 1-12, terminating the process when the log-likelihood change between iterations fell below 0.0001, and cross-validation error estimation was used to assess the most suitable value of K. Replicate runs were aligned and visualized with the pophelper package (Francis 2017) in R 4.0.4. To estimate genetic relatedness between individuals in each population, we used the relatedness coefficient (RI) of Ritland (1996), as implemented in the program GenAlEx v.6.5. Of the four relatedness coefficients, RI is least affected by missing dosage information (Meirmans et al. 2018), so estimates of RI were used. We also estimated the effective population size (Ne) using the molecular co-ancestry method of Nomura (2008), as implemented in NeEstimator V2.1 (Do et al. 2014).
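The PCoA step can be sketched in R: given a pairwise differentiation matrix between populations (the toy symmetric matrix below stands in for the ρ values from GenoDive), classical multidimensional scaling with cmdscale() yields the ordination axes.

```r
# Hedged sketch of the PCoA: a simulated symmetric differentiation matrix
# replaces the actual pairwise rho values between the 11 origin populations.
set.seed(6)
pops <- paste0("pop", 1:11)
d <- matrix(runif(121, 0, 0.3), 11, 11, dimnames = list(pops, pops))
d <- (d + t(d)) / 2; diag(d) <- 0    # symmetric, zero-diagonal distance-like matrix

pcoa <- cmdscale(as.dist(d), k = 2, eig = TRUE)
round(100 * pcoa$eig[1:2] / sum(pcoa$eig[pcoa$eig > 0]), 1)  # % variance, axes 1-2
plot(pcoa$points, xlab = "PCoA axis 1", ylab = "PCoA axis 2", type = "n")
text(pcoa$points, labels = pops)     # population labels in ordination space
```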
Fitness assessment
At the eight planting sites, the survival of each sapling was recorded as 0 (dead) or 1 (alive) in spring 2020 and spring 2022. For each population, the survival rate was calculated by dividing the number of saplings surviving from spring 2020 to spring 2022 by the number of saplings alive in spring 2020. The height and stem diameter (at 10 cm above ground) of the surviving saplings were measured in autumn 2020 and 2021. The rates of height and diameter growth were then calculated as height and diameter increments relative to the initial measurements. These trait values were compared among the origin populations using Tukey multiple comparisons calculated with the emmeans package (Lenth 2023) in R 4.0.4.
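A minimal sketch of the growth-rate calculation and the Tukey-adjusted comparisons with emmeans, using simulated heights and illustrative variable names, is shown below.

```r
library(emmeans)

# Hedged sketch: relative height growth as the increment divided by the initial
# measurement, then Tukey-adjusted pairwise contrasts among origin populations.
set.seed(7)
growth <- data.frame(
  population = factor(sample(paste0("pop", 1:5), 200, TRUE)),
  h2020      = runif(200, 20, 60)                 # initial heights (cm), simulated
)
growth$h2021 <- growth$h2020 * runif(200, 1.2, 2.2)
growth$rel_height_growth <- (growth$h2021 - growth$h2020) / growth$h2020

m <- lm(rel_height_growth ~ population, data = growth)
pairs(emmeans(m, ~ population), adjust = "tukey")   # Tukey multiple comparisons
```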
Statistical analysis
The relationships between climatic position, genetic characteristics and fitness of the transplanted saplings from each origin population were investigated using a principal component analysis (PCA). As variables for the PCA, we used mean annual temperature (MAT), annual precipitation (AP), gene diversity (He), nucleotide diversity (π), mean ρ, the relatedness coefficient (RI) of Ritland (1996), survival rate, relative growth in height, relative growth in diameter, height in 2020, height in 2021, diameter in 2020 and diameter in 2021, for each origin population. Each variable was scaled, and the analysis was run with the prcomp() function in R 4.0.4. We excluded the LD and Ne statistics as parameters because we could not obtain those values at some of the planting sites (Supplementary Table 2).
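A minimal sketch of this population-level PCA is shown below; the variable names follow the text, but the values are simulated.

```r
# Hedged sketch: one row per origin population, variables scaled inside prcomp().
set.seed(8)
pop_stats <- data.frame(
  MAT = rnorm(11), AP = rnorm(11), He = rnorm(11), pi = rnorm(11),
  rho_mean = rnorm(11), RI = rnorm(11), survival = rnorm(11),
  rel_h_growth = rnorm(11), rel_d_growth = rnorm(11),
  h2020 = rnorm(11), h2021 = rnorm(11), d2020 = rnorm(11), d2021 = rnorm(11)
)
pca <- prcomp(pop_stats, scale. = TRUE)
summary(pca)   # proportion of variance explained by each axis
biplot(pca)    # populations and variable loadings on the first two axes
```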
Climatic position of the origin populations
We obtained reasonably accurate predictions for the habitat suitability of B. ermanii in Japan using MaxEnt, with an AUC value of 0.93 (Fig. 1; Supplementary Fig. 1). Based on the highest sum of sensitivity (true positive rate) and specificity (true negative rate), the threshold of suitability for potential B. ermanii habitat was 0.25. Within the potential habitat, some of the origin populations were located above or below the 95% significance interval for climatic position (Fig. 2, Supplementary Table 1). The APW population was located below the 95% significance interval for the range of MAT (Fig. 2a), while the SHK population was located above the 95% significance interval for the range of annual precipitation (Fig. 2b).
Population structure and genetic diversity
Total genomic DNA was extracted from the leaves of 1024 saplings planted at the eight sites. After SNP filtering, we obtained an average of 28,059 SNPs per planting site for 886 saplings (Supplementary Table 3). The SHK population showed a higher proportion of significant linkage disequilibrium (LD) pairs and higher r² values (Table 1) than the other populations. Although the CKS population also showed high r² values, the LD values were not significant because of the low sample size (Table 1).
After LD-based SNP pruning, we obtained an average of 2364 SNPs per planting site, from 886 saplings (Supplementary Table 3). Both gene diversity (He) and nucleotide diversity (π) values were extremely low in the SHK population (Table 1). The highest He value, 0.277, was found in the MKT and HKD populations. The highest π value, 0.335, occurred in the CKS population. The SHK population had a higher mean ρ value (0.560) than the other populations (Table 1), and had high pairwise ρ values with every other population (Supplementary Table 4). In the PCoA based on the ρ statistic, axis 1 explained 91.7% of the variance and the SHK population was distinct from the other populations (Fig. 3). Axis 2 explained 8.42% of the variance, and populations originating from higher latitudinal locations, such as URU and AKS, fell on the higher side of axis 2, while populations originating from lower latitudinal locations, such as NGH, fell on the lower side of axis 2 (Fig. 3). The ADMIXTURE results indicated that the K = 2 or 3 model showed suitable clustering based on the cross-validation procedure of each run for the eight planting sites (Supplementary Fig. 2). The optimum value for K was 2 for sites CBA, HRZ and TAN and 3 for sites NYR, SDH, YGT and STR. When K = 2, the SHK population formed a different cluster from the other populations except at the TKB site, which includes only three SHK saplings (Supplementary Fig. 3; Supplementary Table 2). In addition, the HKD, GYS, CKS, BDS and MKT populations were admixed between the northern populations, URU and AKS, and the western populations, APW, NGH and APS (Supplementary Fig. 3). The northern populations are separated at K = 3 at every planting site except TAN (Supplementary Fig. 3). The SHK population also showed the highest mean relatedness coefficient (RI = 0.342) (Table 1). Because the relatedness coefficient between half siblings (offspring with one common parent) is 0.25, the saplings from the SHK population therefore shared at least one parent. Other populations except CKS had mean values lower than 0.0625 (=1/16, i.e., a level of relatedness corresponding to second cousins) (Table 1), thus the saplings were regarded as unrelated in these populations. On the other hand, certain individual pairs in all populations had RI values higher than 0.0625, and certain individual pairs in the APW population showed RI values higher than 0.25 (Supplementary Fig. 4). In addition, the SHK population displayed the lowest effective population size (Ne) of the 11 origin populations (Table 1). The large Ne values evident in the AKS, GYS and CKS populations were likely the result of low sample size (Table 1; Supplementary Table 2).
We estimated the above genetic characteristics individually for each planting site, and consistently observed that the SHK population showed high r² values, low gene diversity (He) and nucleotide diversity (π), high inter-population genetic differentiation based on the ρ statistic and the ADMIXTURE analysis, and a high relatedness coefficient (RI) throughout the 8 planting sites (Supplementary Table 2).
Growth performance in common garden experiments
In total, we measured the survival rate for 1464 saplings, height in 2020 for 1166 saplings, height in 2021 for 931 saplings, diameter in 2020 for 1143 saplings and diameter in 2021 for 933 saplings.We obtained a relative height growth rate for 926 saplings and a diameter growth rate for 911 saplings.
The mean survival rate for saplings across the eight planting sites was 60.1%.APW, NGH, APS and SHK populations had lower survival rates (Fig. 4a; Supplementary Table 5), with APW having the lowest (Fig. 4a).
The mean height growth rate of the saplings between spring 2020 and 2022 across all eight planting sites was 0.66. The AKS, HKD, CKS, BDS, APW, NGH and SHK populations had lower relative height growth rates (Fig. 4b, Supplementary Table 5). The mean diameter growth rate of the saplings between spring 2020 and 2022 across all eight planting sites was 0.82. All populations other than BDS, MKT and APS had a lower relative diameter growth rate than the mean (Fig. 4c, Supplementary Table 5). The SHK population had the lowest relative height and diameter growth rates (Fig. 4b, c).
The mean height of the saplings in autumn 2020 across all eight planting sites was 40.8 cm, and in autumn 2021 it was 70.6 cm (Supplementary Table 5). The CKS, BDS, APW, NGH and SHK populations had lower heights than the mean in both 2020 and 2021 (Supplementary Table 5). The mean diameter of the saplings in autumn 2020 across all eight planting sites was 5.31 mm, and in autumn 2021 it was 10.3 mm (Supplementary Table 5). The APW and SHK populations had lower heights than the other populations in both 2020 and 2021 (Fig. 4d, e). Similarly, the APW and SHK populations had lower diameters in 2020 and 2021 than the others (Fig. 4f, g).
Relationships between climatic position, genetic characteristics and fitness
The PCA showed that the APW and SHK populations were distinct based on climatic position, genetic characteristics and sapling growth performance at the planting sites (Fig. 5). Axis 1 explained 61.2% of the variance and was associated with variables other than MAT. Axis 2 explained 20.9% of the variance and was associated with MAT. The APW population was characterized by low MAT, while the SHK population was characterized by high ρ, RI and annual precipitation, and low He, π, survival rate, height, diameter and growth (Fig. 5). We also performed the PCA individually for each planting site, and consistently observed that the SHK and APW populations were distinct from the other populations (Supplementary Fig. 5).
DISCUSSION
This study attempted to understand why species range-edge populations display reduced growth performance when transplanted elsewhere. The main question of our study was whether the patterns of reduced growth performance differ between a climatically marginal population that receives enough gene flow to maintain moderate genetic diversity (the high-altitude edge population, APW) and a population without enough gene flow (the low-latitude edge population, SHK).
Previous common garden studies of tree species have also revealed poor performance of individuals transplanted from their range edge. For example, a southern population of Scots pine (Pinus sylvestris) showed a large decline in growth compared to the other populations (Oleksyn et al. 1998). Similarly, in sessile oak (Quercus petraea), populations originating from the warmer thermal range edge had lower tree height compared to those from the cooler thermal range edge (Sáenz-Romero et al. 2017). In European beech (Fagus sylvatica), saplings transplanted from three geographically marginal populations showed lower survival rates than saplings from four central populations (Kreyling et al. 2014). Picea glauca and Abies guatemalensis have also exhibited reduced survival rate, germination and growth of seedlings when transplanted from range-edge populations at higher latitudinal and altitudinal sites (Andersen et al. 2008; Lu et al. 2014).
This study focused on both the low-latitude population and the high-altitude population of B. ermanii in Japan, and investigated the performance of range-edge populations by characterizing their climatic position and genetic characteristics. Although we could include only one population each for the low-latitude and the high-altitude edge, different patterns of reduction in survival, growth and individual size of saplings were identified between the two populations across the eight transplant sites. Transplanted saplings from the high-altitude population located at marginal climatic conditions displayed lower height, diameter and survival rates but no reduction in growth rate, while transplanted saplings from the isolated low-latitude population, at the southern limit for the species with extremely low genetic diversity and high genetic distinctness, showed low growth, height, diameter and survival rates. Our results demonstrated that accumulated genetic load, rather than adaptive selection to niche-limited conditions, could contribute to the reduced growth performance of range-edge populations by reducing the growth rate. This was consistent with one of the suggestions made by Bontrager et al. (2021). Our findings shed light on the mechanisms underlying the growth performance decline of range-edge populations, which is critical for understanding the evolutionary factors contributing to a species' range limits.
Reduced survival and size of saplings from the high-altitude edge population
The APW population experienced marginal thermal conditions within the potential habitat range of B. ermanii, as evidenced by the MAT (Fig. 2a) and other climatic variables such as Bio 6 (mean daily minimum temperature of the coldest month) and Bio 10 (mean temperature of the warmest quarter) (Supplementary Table 1). This population was located above the timberline, defined as the upper limit of the closed forest (Körner 2012), and with an overall low tree species density (TA's personal observation). This population experienced low temperatures, strong winds and heavy snowfall in winter. Analyses of pollen records indicate that, from 8500-10,000 years ago, B. ermanii began to expand its habitat to the present subalpine and alpine regions in Japan (Morita 2000). Therefore, the APW population could be regarded as the leading edge of a range expansion. In population demographic theory, range expansion often leads to a reduction in genetic diversity and differentiation between populations because of the founder effect. Estimates of genetic diversity based on neutral markers in a variety of organisms often indicate a decrease from the range center to the range edge (Eckert et al. 2008; Pironon et al. 2017). However, despite residing at the tree line, at the leading edge of a range expansion, the APW population exhibited moderate genetic diversity (He, π) and relatedness between individuals (RI) (Table 1), and did not exhibit a distinct genetic composition within the 11 B. ermanii origin populations studied (ρ, ADMIXTURE) (Fig. 3; Supplementary Fig. 3). In addition, a certain proportion of individual pairs of the APW population had the lowest RI values among the origin populations (Supplementary Fig. 4). These findings suggest that this population receives sufficient gene flow from the surrounding B. ermanii populations to maintain genetic diversity, reducing the effect of genetic drift and preventing inbreeding between individuals. Thus, the accumulation of genetic load is unlikely to be a significant factor driving the reduced individual size and survival rate of its transplanted saplings. Therefore, the lower heights and diameters of transplanted saplings from the APW population (Fig. 4d-g) appear to be the result of adaptive selection for harsh alpine conditions such as cool temperatures. Strong positive selection increases the frequency of an advantageous allele, with the result that linked loci remain in unusually strong LD with that allele (Slatkin 2008). To compare the level of LD between B. ermanii origin populations, we calculated the coefficients of LD and the proportion of significant LD pairs (Table 1). However, we did not observe a high level of r² or a high proportion of significant LD pairs in the APW SNPs (Table 1). In many cases, the height of a tree species is a highly polygenic trait, which, in turn, has a genetic architecture determined by the cumulative small effects of numerous loci (Savolainen et al. 2007; de Miguel et al. 2022). Because selection on height shifted allele frequencies subtly at many loci, we might not observe a clearly elevated level of LD in the SNPs of the APW population.
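To make the LD summaries of Table 1 concrete, here is a minimal sketch in Python, assuming biallelic SNPs coded as 0/1/2 genotype dosages; it computes pairwise squared allele-frequency correlations (r²) and the proportion of pairs with an approximately significant chi-square test. This is one common way to obtain such summaries and is not necessarily the exact pipeline used in the study; the toy genotype matrix is invented.

```python
import numpy as np
from scipy.stats import chi2
from itertools import combinations

def ld_summary(genotypes, alpha=0.01):
    """genotypes: (n_individuals, n_snps) array of 0/1/2 dosages."""
    n, m = genotypes.shape
    r2_values, significant = [], 0
    for i, j in combinations(range(m), 2):
        gi, gj = genotypes[:, i], genotypes[:, j]
        if gi.std() == 0 or gj.std() == 0:
            continue  # skip SNPs monomorphic in this sample
        r2 = np.corrcoef(gi, gj)[0, 1] ** 2
        r2_values.append(r2)
        # approximate test: n * r^2 ~ chi-square with 1 df
        if chi2.sf(n * r2, df=1) < alpha:
            significant += 1
    return np.mean(r2_values), significant / len(r2_values)

rng = np.random.default_rng(0)
toy_genotypes = rng.integers(0, 3, size=(30, 50))   # toy genotype matrix
mean_r2, ld_ratio = ld_summary(toy_genotypes)
print(f"mean r2 = {mean_r2:.3f}, proportion significant (p<0.01) = {ld_ratio:.3f}")
```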
The climate experienced by the APW population was also characterized by a shorter growing season compared with the other populations, estimated at 198 days compared to 219-269 days, and transplanted saplings from this population showed a later bud break (Aihara et al. submitted). Growth under cooler temperatures and a shorter growing season, with a lower photosynthetic rate, might have favored and selected for lower height and diameter. Moreover, smaller trees may benefit from the relative facilitation of the microclimate, which includes a warmer air layer closer to the soil surface, particularly for trees embedded in low-alpine vegetation (Körner 2003; Yu et al. 2014).
In the tree line of the Japanese high mountains, factors such as strong winds and heavy snowfall in winter may have contributed to the selection of low heights and diameters. The pressure of accumulated snow causes physical damage to tree stems and branches (Homma 1997; Kajimoto et al. 2002). For example, Cryptomeria japonica in snowy regions is genetically differentiated from other populations (Tsumura et al. 2014), and the variety found in high snowfall areas has slender branchlets with soft leaves that may escape physical damage caused by snowfall (Yamazaki 1995). It is well known that various tree species in Japan, as well as their various subspecies or related species, display lower tree heights in regions with heavy snowfall (Hara 2022). Of the 11 B. ermanii populations, the CKS population also experienced heavy snowfall in winter, and fell on the higher side of the 95% significance interval for maximum snow depth (Supplementary Table 1); transplanted saplings from this population also displayed heights lower than the mean value (Fig. 4d-g). However, when snow depth was taken into account in the PCA, unlike the APW origin population, the CKS origin population could not be distinguished (Supplementary Fig. 6). Although exposure to strong winds is one of the factors leading to reduced tree height, wind-exposed trees often have larger diameters (Brüchert and Gardiner 2006; Gardiner et al. 2016). Unfortunately, our study comprised only one high-altitude population, so we cannot conclude whether adaptation to strong winds and heavy snowfall in winter is a causal factor for the low heights and diameters of transplanted saplings from the APW population.
Although saplings from the APW population did not have a particularly low growth rate, APW had the lowest survival rate of the 11 B. ermanii populations (Fig. 4a). While a large height variance between saplings at each planting site could cause shading of smaller saplings by taller saplings, a low survival rate for APW saplings was commonly observed at all eight planting sites (Supplementary Fig. 7). We suspected that some traits resulting from the adaptation of the APW origin population to harsh alpine conditions might have caused the low survival rate at the planting sites. At higher elevations, tree species generally experience low levels of herbivory (Rasmann et al. 2014; Galmán et al. 2018), and plant species have few chemical and physical defense traits, such as trichomes, terpenes and phenolic compounds in the leaves, in the absence of herbivory (Pellissier et al. 2014; Callis-Duehl et al. 2017; Descombes et al. 2017). If the APW origin population had fewer defensive traits and was more vulnerable to herbivory than the other populations, this could be one reason for the low survival rates of this population's saplings. Another possible reason is that the root traits of the APW saplings were disadvantageous at planting sites outside the usual B. ermanii niche. In Betula species, root traits have been reported to vary with elevation (Spitzer et al. 2023), and the root traits of the APW origin population may have been strongly adapted to the alpine environment. Further exploration of the low survival rate of saplings from harsh alpine conditions would be an interesting follow-up to this study.
Reduced survival, growth and size of saplings from the low-latitude edge population
Of the 11 origin populations, the SHK population had the lowest He and π values, 0.059 and 0.067 respectively (Table 1), and was highly genetically distinct from the other populations based on ρ statistics (Fig. 3). The SHK origin population represented the southernmost population of B. ermanii sampled (Fig. 1; Table 1). In addition, the SHK population inhabits marginal conditions within the species' precipitation range (Fig. 2). Based on climate-driven range dynamics, when temperature rises, populations of B. ermanii are likely to move their habitats northwards like other tree species of cool-temperate and alpine forests in Japan (Horikawa et al. 2009; Matsui et al. 2009). For this reason, we considered that the SHK population was at the rear edge of the climate-driven range dynamics that have occurred over the past few thousand years. Rear-edge populations are typically restricted to particular habitat islands within a matrix of unsuitable conditions, and are often small and isolated (Hampe and Petit 2005). Their small population size and prolonged isolation have resulted in reduced within-population genetic diversity and the preservation of high levels of inter-population genetic differentiation, as reported in a variety of taxa (Castric and Bernatchez 2003; Petit et al. 2003; Chang et al. 2004) and in some conifer tree species in Japan (Picea jezoensis: Aizawa et al. 2009, Abies sachalinensis: Kitamura et al. 2020, Thuja standishii: Worth et al. 2021). The SHK population is distantly separated from the other B. ermanii populations (Fig. 1; Supplementary Fig. 1), and its genetic characteristics were consistent with those studies. Because of its low levels of genetic diversity, the SHK population is thought to have persisted in long-term isolation from other B. ermanii populations. The SHK population is located on Mt Shakaga-Take at altitudes above about 1700 m a.s.l., the summit being 1799 m a.s.l. (TA's personal observation). Therefore, the SHK population has probably experienced an altitudinal range shift during the Quaternary climatic oscillations and might be experiencing an ongoing decline in population size.
Small, isolated populations can ultimately face extirpation or even extinction because of the combined effects of genetic drift, inbreeding depression and inefficient purifying selection (Leroy et al. 2018; Mathur et al. 2023). The SHK origin population showed a high relatedness between individuals (0.342; Table 1), indicating that saplings from this population shared at least one parent. In addition, because the SHK population had a low effective population size (averaging 3.1 across the eight planting sites; Table 1), only a limited number of trees in this population could breed. Because of its low genetic diversity, high levels of genetic distinctness from other populations and high relatedness between individuals, the low survival and growth rates of the transplanted SHK saplings are presumed to be the result of an accumulation of deleterious mutations and/or biparental inbreeding depression. High levels of linkage disequilibrium were observed in the SNPs of the SHK population (Table 1), providing genetic evidence to support these claims.
The NGH and APS origin populations also experienced marginal conditions for Bio 18 (precipitation of the warmest quarter), consistent with the SHK population (Supplementary Table 1), but these two populations did not show any distinctive reduction in survival and growth rates when transplanted (Fig. 4, Supplementary Fig. 7). Therefore, we regard the low heights and diameters of the transplanted SHK saplings to be the result of a reduced growth rate driven by genetic load rather than climate selection.
CONCLUSION
This study provides important insights into the effects of marginal climatic conditions and isolated small populations on the survival, growth and size of B. ermanii saplings. Saplings from the high-altitude edge population (APW) exhibited adaptive selection for surviving harsh alpine conditions, resulting in lower heights and diameters compared with other populations. The low survival rates observed in the APW saplings suggest that traits associated with adaptation to harsh alpine conditions may also have a negative impact on survival. However, APW saplings exhibited moderate genetic diversity and did not show a significant reduction in growth rate. This demonstrates the importance of genetic diversity in promoting resilience to marginal climate conditions.
Transplanted saplings from the southernmost origin population of B. ermanii, SHK, exhibited low genetic diversity, high levels of genetic distinctness, high relatedness between individuals, and a significant reduction in survival rate, growth rate, height and diameter. This suggests that deleterious mutations and biparental inbreeding depression may have a large effect on the reduced fitness of saplings from small, isolated populations. These results indicate that saplings of the SHK population are vulnerable to environmental fluctuations. To provide conservation guidelines, further studies of the SHK population, such as the in-natura fitness of surviving trees, purging effects of genetic load, and comparison of genetic characteristics among other southernmost populations of B. ermanii that this study could not consider, are deemed necessary.
Fig. 2
Fig. 2 Climatic positions of 11 Betula ermanii populations within the range of its potential habitat. Density plots of the a mean annual temperature (°C) and b annual precipitation (mm) across the range of potential habitat for Betula ermanii in Japan. Areas outside the 95% significance interval are indicated by black shading. White circles indicate the position of each origin population. Abbreviations indicate the position of populations above and below the 95% significance interval for the potential habitat.
Fig. 3 A principal coordinate analysis (PCoA) plot based on ρ statistics. White circles indicate the position of each Betula ermanii origin population.
Fig. 4
Fig. 4 Growth performance of saplings from 11 Betula ermanii populations in common garden experiments. For each population: a survival rate (%); b relative growth in height; c relative growth in diameter; height (cm) in d autumn 2020 and e autumn 2021; diameter (mm) in f autumn 2020 and g autumn 2021. Plots indicate the mean values; vertical bars indicate standard deviations; horizontal bars indicate the mean value of each trait. Different letters indicate statistically significant differences between populations based on a Tukey multiple comparison at the 95% confidence level.
Fig. 5 A
Fig. 5 A principal component analysis (PCA) of origin populations based on climatic position, genetic characteristics and fitness of the planted saplings. White circles indicate the position of the Betula ermanii origin populations. MAT, mean annual temperature (°C); AP, annual precipitation (mm); He, gene diversity; π, nucleotide diversity; ρ, mean of ρ statistics; RI, relatedness estimator, taken from Ritland (1996); survival, survival rate (%); Hgrow, rate of height growth; Dgrow, rate of diameter growth; H2020, height (cm) in autumn 2020; H2021, height (cm) in autumn 2021; D2020, diameter (mm) in autumn 2020; D2021, diameter (mm) in autumn 2021. Red letters and arrows indicate the principal component loadings for each variable (axes 1 and 2), calculated from the rotation and standard deviation of each variable. Each origin population plot is based on the principal component score.
Table 1.
Genetic diversity and population structure of 11 Betula ermanii origin populations. N, number of samples; mean N, mean number of saplings providing SNP sites after LD-based SNP pruning for each of the 8 planting sites; r², coefficients of linkage disequilibrium using squared allele-frequency correlations; LD ratio (p < 0.01), the proportion of significant LD pairs (p-value of chi-square statistic <0.01); He, gene diversity; π, nucleotide diversity; ρ, mean of ρ statistics; RI, mean of relatedness estimator from Ritland (1996); Ne, effective population size. Each statistic is the average across the 8 planting sites.
Biomarker Research and Development for Coronavirus Disease 2019 (COVID-19): European Medical Research Infrastructures Call for Global Coordination
Abstract An effective response to the coronavirus disease 2019 (COVID-19) pandemic requires a better understanding of the biology of the infection and the identification of validated biomarker profiles that would increase the availability, accuracy, and speed of COVID-19 testing. Here, we describe the strategic objectives and action lines of the European Alliance of Medical Research Infrastructures (AMRI), established to improve the research process and tackle challenges related to diagnostic tests and biomarker development. Recommendations include: the creation of a European taskforce for validation of novel diagnostic products, the definition and promotion of criteria for COVID-19 samples biobanking, the identification and validation of biomarkers as clinical endpoints for clinical trials, and the definition of immune biomarker signatures at different stages of the disease. An effective management of the COVID-19 pandemic is possible only if there is a high level of knowledge and coordination between the public and private sectors within a robust quality framework.
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has generated a fast response from the global scientific community, governmental organizations, the life sciences industry, and healthcare providers. With unprecedented speed, a number of laboratory tests have been developed with the aim to facilitate easy and efficient detection of virus infection [1][2][3][4], and tests are emerging for the measurements of antibodies for identifying past SARS-CoV-2 infections.
As the pandemic evolves, it is becoming clear that there is a gap between the ambition and the usefulness of these tests. Evidence continues to accumulate on the limitations of the currently available diagnostic and prognostic approaches [2][3][4][5][6].
In particular, serological tests are becoming more relevant as they are able to detect past COVID-19 infections [8]. However, many open questions remain around each test's specificity and sensitivity, which determine its validity and usefulness in a clinical setting. Establishing the value of these tests, as with many other biomarker tests in healthcare and patient management, is one of today's major challenges.
In addition to SARS-CoV-2 detection and testing of immune response, there is an urgent need to predict which patients will develop specific disease characteristics. Indeed, some individuals develop mild symptoms and others very severe ones for unknown reasons, and patients can differ dramatically in the degree and speed of their response following hospitalization [7]. Recent studies showed how COVID-19 patients with comorbidities, such as hypertension or diabetes mellitus, are more likely to develop a more severe course and progression of the disease [9]. Differences in the immune response [10] or prior coronaviruses infections could also affect the COVID-19 clinical course [11].
This heterogeneity of manifestations of SARS-CoV-2 infection constitutes one of the greatest challenges in managing the clinical consequences of the pandemic. Biomarker profiles are of vital importance to clinicians when evaluating treatment options, for defining the clinical course, and for close monitoring and support of patients in their disease management and remission trajectory.
Tools should enable population screening and the identification of high-risk patients. Given the large interindividual heterogeneity, this can be achieved using biomarker signatures, composed of multiple analytes. Given their relevance in this context, robust and well-validated biomarkers are crucial to enable effective decision-making.
SARS-COV-2 TESTING: CURRENT SITUATION
SARS-CoV-2 and COVID-19 testing kits are designed to be used in routine laboratories and also at the point-of-care setting, with the ambition of shortening the diagnostic time window and thereby facilitating rapid identification of COVID-19 positive patients and contacts. In order to be effective, these kits must be based on validated biomarkers and biomarker assay formats that yield high sensitivity and specificity results, for instance, to distinguish an infected person from a noninfected one.
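To make the notions of sensitivity and specificity concrete, here is a minimal Python sketch computing the usual validity measures from a 2×2 confusion table; the counts are invented for illustration and do not refer to any particular assay discussed here.

```python
def test_validity(tp, fp, fn, tn):
    """Compute standard diagnostic validity measures from a 2x2 table."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# toy counts: 90 infected correctly detected, 10 missed,
# 5 false alarms, 895 non-infected correctly negative
sens, spec, ppv, npv = test_validity(tp=90, fp=5, fn=10, tn=895)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, PPV={ppv:.2f}, NPV={npv:.2f}")
```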
Methods based on viral genome detection, with their large range of applications, high sensitivity, and high sequence specificity, have become a routine and reliable technique for detecting SARS-CoV-2 [16].
To complement the viral genome tests, viral antigen tests have been developed. These tests allow the virus detection early in infection but display limitations on sensitivity and potential cross-reaction with other coronaviruses [17,18].
Despite the fact that COVID-19 is a severe pandemic, many governments are leaning toward "mitigation" and "containment" as strategies. The overarching goal is for all countries to control the pandemic by slowing down transmission and reducing mortality associated with COVID-19. Indeed, in the absence of a vaccine, reaching herd immunity is not a straightforward path and raises major ethical considerations, as the societal consequences of achieving it are devastating [19]. Mobility and travel restrictions, social distancing, and the use of personal protective equipment have been introduced in order to reduce human-to-human transmission. The use of face masks in particular is enforced widely within the general population, together with hand hygiene.
Stopping the spread of COVID-19 requires finding and testing all suspected cases so that confirmed cases are promptly and effectively isolated and receive appropriate care. It is important that the close contacts of all confirmed cases are rapidly identified, quarantined, and medically monitored for the virus incubation period of up to 14 days.
Next to the need for well-validated and reliable diagnostic tests, this scenario demands high quality and reliable serological tests, measuring the immune responses induced by past and new viral infection, in combination with tests addressing T-cell activity. These assays are important for understanding the prevalence of COVID-19 and whether the development of a humoral immune response to SARS-CoV-2 protects against the disease.
As the World Health Organization (WHO) clearly underlined, "Laboratory tests that detect antibodies to SARS-CoV-2 in people, including rapid immunodiagnostic tests, need further validation to determine their accuracy and reliability" (https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19). Addressing these issues is crucial, as serological assays are critical for the patient care pathway and for the management and surveillance of the virus.
Limitations to the use and development of the tests described above include poor test sensitivity due to sample collection [14], poorly described reference material, low specificity, and lack of technical validation, and therefore a threat of false disease diagnosis.
Uncertainty in test sensitivity that leads to false-negative cases of COVID-19 likely constitutes a serious threat to the control of the pandemic. Indeed, false-negative results are more consequential, because unrecognized infected persons may not be isolated and can infect others [4]. Because of this, some governments require an RT-PCR test and quarantine for people who are considered close contacts of positive cases, with additional testing and isolation in case of negative results. Moreover, in the presence of a strong epidemiological link to COVID-19 infection, paired serological tests (in the acute and convalescent phase) could support diagnosis [20].
Testing limitations are likely a result of combining several unknowns such as the lack of understanding of the biology of the disease, in particular its natural history and associated immune response, a relatively low number of samples, and the use of novel laboratory test kits whose quality and accuracy has not been rigorously tested. Furthermore, the lack of rigorous study design and methodology to robustly validate the tests before deployment affects the tests' reliability and ultimately the correctness of the clinical assessment.
URGENT NEED FOR VALIDATED BIOMARKERS
A collaborative global response for diagnostics, therapeutics, and vaccine development, as well as the future management of the pandemic, called the new Access to COVID-19 Tools (ACT) Accelerator, was launched in April 2020. The ACT Accelerator will require additional molecular tools to identify relevant COVID-19 related biomarkers that will have a critical role in: (1) assessing the efficiency of future vaccines and/or therapeutics; (2) preventing and identifying clinical complications, in particular those related to the deadly immunological storm reaction, vascular activation and hemostasis control; and (3) stratifying patients to define therapy targets and identify individuals at risk of infection, suitable for preventive interventions. Due to the complexity of the immune response, in-depth phenotypic analysis is necessary in order to identify specific biomarker signatures, integrating omic and clinical data.
According to the GlobalData's Biomarkers database, a large number of different biomarkers have been utilized for COVID-19 trials for different purposes such as monitoring treatment response, predicting and monitoring treatment safety. However, only a few of them are validated for clinical application, with the risk that the results produced are not reliable and are not of much use for medical decision making.
Hematology laboratory and routine coagulation tests have made a significant contribution in the identification of useful prognostic markers as well as in predicting outcomes and recovery [21,22]. Moreover, in the era of personalized medicine, biomarkers can enable the selection of appropriate treatment for COVID-19 infected patients. Biomarkers of inflammation such as interleukin (IL)-6 and IL-10 [23], of cardiac injury [24], of liver and kidney function [25,26], as well as of coagulation measures [27], are significantly elevated in patients with both severe and fatal forms of COVID-19. Moreover, it has been assumed from studies in severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) patients that memory T cells, induced by the contact with previous pathogens from the coronavirus family, may have the potential to recognize SARS-CoV-2. Hence, features and distribution of preexisting T cells could be used as markers for explaining some of the differences in infection rates or pathology observed during this pandemic [11].
The mentioned markers could support patient stratification and represent objective and standardized criteria to guide therapy and allocate resources.
Although the post-recovery course of COVID-19 is not yet clear, limited observations have demonstrated that recovered patients are at risk of psychological and physical complications of the disease itself, as well as treatment-related lung damage and other organ injuries [28][29][30]. Biomarker signatures can play a key role in the management of post-COVID-19 patients, predicting medium- and long-term clinical outcomes.
More insights on the biological processes and replication studies are crucial before the adoption of any molecule and parameter in the clinical setting.
The coming months will be critical for accelerating a COVID-19 biomarker pipeline that will enable diagnostic and prognostic profiles, provide reliable end points for clinical trials, assess treatment response and allow vaccine candidate selection, together with supporting healthcare systems with tailored strategies and patient-centred interventions.
HOW TO OPTIMIZE THE RESEARCH PROCESS: THE POINT OF VIEW OF EUROPEAN MEDICAL RESEARCH INFRASTRUCTURES
In order to structurally address the issues above, a long-term vision is necessary. The optimization and acceleration of the research process require a high level of knowledge and coordination, as well as the application of standards and quality to reduce uncertainty along the biomarker pipeline [31]. This is possible only if there is an effective interaction between private-public networks. In Europe this collaboration is facilitated by research infrastructures. In particular, the Alliance of Medical Research Infrastructures (AMRI, https://bit.ly/2FitLu9), including EATRIS-ERIC (focused on translational medicine, https://eatris.eu/), ECRIN-ERIC (focused on clinical research, https://www.ecrin.org/) and BBMRI-ERIC (focused on biobanking, https://www.bbmri-eric.eu/), provides resources and services for the medical research communities to conduct research and foster innovation. During the present health crisis, substantial public funding has become available for research in diagnostics, treatments, and vaccines for the new coronavirus disease, and AMRI has worked to accelerate and manage international research collaborations on COVID-19, acknowledging that the challenge is a global one. The medical research infrastructures have made a significant effort to share knowledge and to ensure robustness of the COVID-19 related project outcomes [32]. In addition, thanks to its expertise as well as its culture of quality standards and reproducibility, the Alliance identified several action lines (Table 1) to optimize the research process and to address issues related to diagnostic/prognostic tests and biomarker development. The main strategic objectives are: (1) to establish a European taskforce for validation of novel diagnostic products; (2) to define and promote criteria for COVID-19 sample handling, data collection, and biobank management including ethical considerations; (3) to validate novel diagnostic approaches; (4) to identify and validate biomarkers as clinical endpoints for clinical trials; and (5) to define the biomarker profile determining the innate and acquired immune response to the infection and establish immune signatures at different stages of the disease. These actions are highly relevant for an effective response from the research community to the COVID-19 pandemic. Only with close collaboration in these key areas will we be able to efficiently support the biomarker R&D process, helping to understand antigen response mechanisms, inform vaccine development, and enable antiviral drug design.
CONCLUSIONS
COVID-19 research is still in its early stages, and we need further research worldwide to better face this pandemic. We still need to learn about the biology of the disease and the variable response that patients display in their disease manifestation and recovery. We expect that the process of biomarker discovery and validation will largely guide an accelerated translational strategy to address this global health crisis. A standardized pathway approach toward the biomarker validation process is thus becoming increasingly important. Quality and reproducibility are essential for translating basic findings into concrete clinical interventions, and only by following this approach can an effective response to the pandemic be guaranteed. Significant efforts and resources have been invested in the development of biomarkers for COVID-19, and AMRI urges that research must be of good quality, providing robust, ethical evidence that stands up to scrutiny and can be used to inform policy making.
For COVID-19 management, structural use of the relevant research infrastructures is strongly advised, as they play an important role in centralized management of biomarkers R&D pipelines, biobanking, and clinical trials. The collective efforts of AMRI and collaborative actions of the scientific community will create high-quality knowledge that is openly available and will bring a better understanding of SARS-CoV-2, with benefits for all.
Effect of Repairing Tendon and Ligament Injury of Wushu Athletes by Medical Image
Medical imaging can be used as an aid to diagnosis and treatment, and color Doppler ultrasound can also be used in life science research as a scientific research method. Wushu is a traditional sport in China with a long history of development. Martial arts are a very good fitness activity, but unlike ordinary practitioners, professional martial arts athletes often suffer a variety of sports injuries, and tendon and ligament injury is one of the most common. At present, there are many treatment plans for tendon and ligament injury, but there is little research on the repair effect of such injuries; this paper takes that as its main research purpose. In view of the problem that ligament injury is not easy to observe, this paper uses a GE LightSpeed 64-row spiral CT as the main observation tool and uses medical image observation to compare and analyze the repair effect of tendon and ligament injury in Wushu athletes. In this experiment, 88 professional Wushu athletes were gathered as experimental samples; after preliminary screening, 110 cases of ligament injury were counted. After analyzing abnormal changes in tissue or structure together with Lysholm and IKDC treatment-effect scores, this paper concludes that, for type I patients, conservative treatment alone can achieve good results. For the more serious and complex type II patients, local fixation is used after onset, and very serious cases can achieve good results through surgical treatment. Postoperative care is also important and helps reduce complications. This experiment achieved ideal results and helps fill a gap in research, at home and abroad, on the repair effect of tendon and ligament injury in Wushu athletes.
Introduction
It was officially named "Chinese Wushu" in 1926. After the founding of New China, it was recognized, as "martial arts," as a national sports event. In modern times, the continuous development of Chinese Wushu has carried it beyond the country's borders. In 1952, after the establishment of the National Sports Commission, Wushu was listed as a promoted sport, and in 1957 it was listed as a national competition event for the first time. Medical imaging refers to the technology and process of obtaining internal tissue images of the human body, or a certain part of it, in a noninvasive manner for medical treatment or medical research. Chinese martial arts have a long history, are extensive and profound, and are deeply loved by people all over the world. People with high martial arts skill are often respected. In the twentieth century, competitive sports developed rapidly around the world. In 1992, Wushu was listed as an official event of the Beijing Asian Games for the first time. In 1995, the International Wushu Federation was officially accepted as a member of the IFS. The World Wushu Championships, the Wushu Sanda World Cup, and other large-scale competitions have secured a stable position for Wushu in international competitive sport. In June 2012, the International Olympic Committee (IOC) announced that martial arts would be a candidate for official events of the 2020 Summer Olympic Games.
Injury occurs when a ligament is stretched beyond its capacity by violence or nonphysiological activity. There are many causes of ligament injury, mainly seen in strenuous sports such as sports competitions, dancing, martial arts and acrobatics, as well as accidental injuries such as car accidents or falls from height in daily life. Wushu is a method the Chinese nation has used to keep fit and defend itself for thousands of years. It has a long history and a strong social foundation. As one of the most representative traditional sports of the Chinese nation, it is widely recognized for its unique educational, competitive, performance, and health care value. It is a sport that combines six fighting skills, such as kicking, striking, throwing, seizing, hitting, and thrusting, with hand, eye, body, and step work and other technical actions, pursuing both difficulty and aesthetic appeal. In fierce formal competition, there are many difficult jumping and aerial movements. These difficult movements bring great aesthetic pleasure to the audience, but at the same time they bring potential injury risks to Wushu athletes. In the teaching and training of Wushu routines, jumps are required to reach a high height and to land stably. In addition, landing is often performed with a bent knee, a squat, a straight knee, crossed legs, or even a split, which easily causes tendon and ligament damage. Moreover, judging from the current development trend of Wushu, as Wushu is gradually accepted by the world, the difficulty of Wushu routine competition is also increasing. Every high-difficulty and super-difficulty movement challenges the limits of the physical ability, technique, and tactics of Wushu routine athletes. Therefore, Wushu routine athletes need to have "comprehensive" ability in competition, which requires that all joints and muscle groups have high capability and cannot show any weakness; otherwise, the completion of technical actions will be affected. Of course, as the difficulty increases, so does the chance of injury. However, due to the lack of attention to potential injuries and effective preventive measures, sports injuries seriously interfere with the development of Wushu teaching and training. The knee joint has relatively strong toughness and tensile strength, yet it is easily injured during exercise. A knee ligament injury seriously affects the functional recovery of the affected limb. After a knee ligament injury occurs, functional recovery exercises can be undertaken; during the recovery period, moderate exercise of the lower limbs can enhance their endurance. When exercising, one should maintain a stable mood and avoid large emotional fluctuations.
Tendon and ligament injuries are among the most common sports injuries in Wushu athletes. The injury of a tendon or ligament, and its repair after rupture, are directly related to the function of the injured site. For ordinary people, treatment is satisfactory as long as the continuity and general function of the ligaments and tendons can be restored. For athletes, however, it is necessary to restore good tendon and ligament function, and the biomechanical properties of the tendons and ligaments must return to their pre-injury level. Only then can the damaged parts perform complex movements and bear extreme loads. After partial rupture of a tendon or ligament, most clinicians still advocate various forms of external fixation and local immobilization for 3 weeks. For complete rupture, the local immobilization time after surgical suture is longer. In this way, the integrity and continuity of damaged tendons and ligaments can be restored. However, in the process of tissue repair, the tendons and ligaments inevitably adhere to the surrounding tissue, which affects their function. Nevertheless, there have been few studies on the repair of tendon and ligament injuries.
In order to supplement the lack of research in this field, this paper makes an in-depth study of the repair effect of tendon and ligament injury in Wushu athletes based on the observation of medical images. First, this paper analyzes the main causes of sports injuries in martial arts athletes. In general, the ligament injuries of professional martial arts athletes occur in many locations, such as the gastrocnemius muscle and the lateral collateral ligament of the knee. In order to further observe the repair process and effect of tendon and ligament injury, this paper uses a GE LightSpeed 64-row spiral CT as the main observation tool. The workstation preprocesses the DICOM images before image processing, so that ligament injuries can be observed under conditions that are as clear and complete as possible. 88 professional Wushu athletes were selected as volunteers. In order to ensure the quality of this experiment, this paper developed a corresponding operation scheme and criteria for judging treatment effect. After analysis of the abnormal changes of tissue or structure, and based on the data, this paper concludes that although ligament injury is very common in sport, if not treated in time it may lead to persistent joint pain, traumatic arthritis, osteonecrosis, sinus tarsi syndrome, and other serious complications. Generally speaking, however, for minor injuries, as long as treatment is given in time, a good repair effect can be achieved, with little impact on ligament function after healing [1-3].
Basic Concepts of Martial Arts and Medical Imaging
Origin and Development.
At the end of primitive society, tribal wars occurred frequently. In tribal wars, weapons such as throwing implements were used at a distance and weapons such as sticks at close range, which greatly promoted the development of martial arts [4]. Chinese Wushu is a pearl in the rich cultural heritage of the Chinese nation; it is extensive and profound and has a long history. The origin of Wushu can be traced back to ancient human production. In primitive society, with its very low productivity, people gradually acquired the skills of chopping, hacking, stabbing, and unarmed fighting through hunting, which created certain conditions for the formation of Wushu. Before the Ming Dynasty, martial arts were mainly oriented toward military technique, with training based mainly on weapons and practical weapon skills, and boxing played a smaller role. By the Ming Dynasty, a technical system with 18 kinds of weapons as training tools had developed, forming a more systematic martial arts theory. After entering the Qing Dynasty, military martial arts gradually declined. In the 27th year of Guangxu, the military talent selection system was abolished, and folk Wushu continued to develop. In the period of the Republic of China, people with foresight advocated Wushu and Wushu education in order to strengthen the country. Wushu was introduced into the physical education curriculum of schools, various Wushu organizations were established all over the country, and Wushu competitions were held at national and local sports meetings. In 1925, the first Chinese martial arts conference was held in Shanghai. Later, martial arts were also included in the National Games. Wushu thus entered modern sports competition [5,6].
Concept and Classification.
A martial arts routine is a set of exercises formed according to the contradictory principles of attack and defense, advance and retreat, movement and stillness, hardness and softness, and emptiness and solidity. According to the form of practice, routines can be divided into three types: single practice, paired practice, and group practice. Single practice includes bare-handed boxing and weapon routines; paired practice is divided into bare-handed sparring sets, weapon sparring sets, and bare-hand-versus-weapon sets; group practice can be performed bare-handed or with weapons.
Main Causes of Damage.
Martial arts routines are mainly divided into two categories: bare-handed and with weapons. Solid basic skills are the basis of good daily practice. This kind of skill must be practiced from childhood, the so-called "boyhood skills." However, the body is not fully developed at this age: for example, the bones contain a large proportion of cartilage and organic matter, the muscles, joint capsules and ligaments are underdeveloped, and joint stability is poor; if the training arrangement is unreasonable, the overall load too large, or the local burden too heavy, sports injuries are easily caused. Whether an athlete's movements are up to standard is an important index of their technical level. In order to meet the requirements of standard movement, the corresponding technical essentials must be mastered and a certain special quality must be possessed. For example, the requirements of the "front stretch kick" are "three straights and one hook" and "toes touching the forehead"; without special leg-flexibility training, it is easy to develop inflammation of the ischial tuberosity and strains of the posterior thigh muscle group. The postures of many movements in Wushu routines are special and complex; if the training level is not high, it is easy to sustain injuries in movements such as the whirlwind kick with a cross-legged landing. During takeoff, the knee joint is in a state of flexion and pronation, which easily damages the meniscus of the knee joint. A cross-legged landing requires crossed legs, flexed hips, and a forward lean.
This position causes the deep hip muscles to stretch, leading to piriformis syndrome. A martial arts routine generally includes dozens of movements of different structures, types, and difficulties, which must be completed in about 1 minute and 30 seconds. To complete the whole set of movements, athletes must have not only general physical quality but also the special qualities of martial arts. This kind of comprehensive physical quality can only be obtained through long-term, gradual training [7,8]. The meniscus is a structure of the human knee joint, lying between the femur and the tibia; there are two menisci in each knee joint. Its main function is to increase the stability of the knee joint and to provide a cushioning effect [9].
Development of Medical Imaging.
Medical imaging technology uses modern high-performance imaging hardware and special imaging techniques to scan the internal structure of the human body in order to obtain physiological, structural, and pathological information about a part of the body. X-ray imaging, CT, MRI, ultrasound imaging, and other medical imaging technologies have been widely used in clinical practice, providing solid technical support for the development of modern medical technology, the improvement of medical imaging itself, and better disease diagnosis. At the same time, with the advent of the modern digital image era, the growth of image big data has shifted diagnosis from a single technique to the combined use of multiple diagnostic technologies, with doctors and medical equipment jointly diagnosing patients' diseases. How to overcome adverse factors in the diagnostic process, reduce the physical and mental suffering and physical injury of patients as much as possible, and quickly and efficiently obtain medical images that truly reflect the patient's condition has become a hot topic of modern medical technology research.
Advantages of MRI examination are as follows: no radiation damage, multiparameter imaging with high contrast, improved molecular biology and histological diagnosis, and no bone artifacts [10].
At present, digital imaging technology is developing rapidly. Most of the economically developed areas in eastern China and many large public hospitals have completed the transition to the digital era, but owing to the age structure and education level of the practitioners as a whole, they are still undergoing a transformation in their way of thinking. Traditional imaging technology and its concepts, formed in the analog era, are not suited to the development of modern medicine. Therefore, improving a single function cannot promote the overall development of traditional imaging technology, and related new medical technologies cannot be applied. The emergence of digital imaging brings new technological innovation to X-ray photography [11,12].
Imaging Characteristics of Medical Images.
Medical imaging refers to the technology and processing involved in obtaining images of the internal tissue of the human body, or a part of it, in a noninvasive way for medical or medical research purposes. Medical imaging techniques include X-ray, magnetic resonance, ultrasound, and many other imaging techniques. However, owing to the limitations of the various imaging technologies and the influence of imaging principle, environment, equipment, and other factors, medical images have the characteristics of low contrast, blurred boundaries, and features that are difficult to recognize with the naked eye. These characteristics make the early processing of medical image segmentation very difficult. Good segmentation results lay a good foundation for subsequent image registration and fusion. In recent years, medical image segmentation has received wide attention.
2.6. Image Classification Method. Image classification is an image processing method that distinguishes objects of different categories according to the different characteristics reflected in the image information. It uses computers to analyze images quantitatively and assigns each pixel or region of an image to one of several categories, replacing human visual interpretation. The earliest image retrieval and classification were achieved by adding keyword labels to images. This method requires adding a label to the image to describe its content and then using the label to retrieve and classify the image. This method is simple to use, but its shortcomings for medical image data are increasingly obvious: first, rich image content is difficult to express fully with simple labels; second, labels and text descriptions must be determined manually; third, manually added labels carry strong subjective factors. Content-based image retrieval (CBIR) performs classification and retrieval mainly by extracting color, texture, shape, and other image features. To some extent, this method overcomes the shortcomings of traditional image classification and retrieval, such as strong subjectivity and long processing times. However, because of the high requirements on image processing technology, this method needs features designed for different types of images, which brings some difficulty to its wide application. Traditionally, the ROI is selected manually. If the ROI is processed in the frequency domain, a Fourier transform is applied; if geometric, shape, and edge features of the medical image are extracted by processing in the spatial domain, they are combined with mathematical statistics, and these statistical features are then used for classification. These methods have achieved certain results and provide a theoretical basis for future computer-aided diagnosis and classification of medical images.
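As a minimal illustration of the content-based approach described above, the following Python sketch extracts a simple gray-level histogram and a coarse texture statistic from each image and classifies a query image by nearest neighbour; the feature choice, function names, and toy data are purely illustrative assumptions, not the feature set of any cited CBIR system.

```python
import numpy as np

def features(img, bins=16):
    """Simple content features: normalized gray histogram + mean gradient magnitude."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
    gy, gx = np.gradient(img.astype(float))
    texture = np.mean(np.hypot(gx, gy))          # coarse texture measure
    return np.append(hist, texture)

def classify_nearest(query, library):
    """library: list of (label, image); return label of the closest feature vector."""
    q = features(query)
    dists = [(np.linalg.norm(q - features(img)), label) for label, img in library]
    return min(dists)[1]

rng = np.random.default_rng(1)
library = [("smooth", rng.random((64, 64)) * 0.2),
           ("textured", rng.random((64, 64)))]
print(classify_nearest(rng.random((64, 64)) * 0.25, library))   # expected: "smooth"
```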
For image analysis and recognition, the most important problem is how to separate the structure of interest, or ROI, from the image. Different attributes (such as gray level and texture) are separated on the basis of the original medical image; after separation, the ROI is extracted so that the result is as close as possible to the biological anatomy. The initial classification process was done by hand, and semiautomatic and machine classification methods soon appeared.
Image recognition refers to the process in which a graphic stimulus acts on the sensory organs and a person recognizes it as a figure they have experienced before. In image recognition, there is not only the information entering the senses at that moment, but also the information stored in memory.
Objects and Methods.
The subjects were 88 martial arts athletes in one province, including 53 male athletes and 35 female athletes, with an average age of 23 years. According to the statistics, injuries of the leg muscles and ligaments accounted for 58.3% of all injuries, including 3 cases of rectus abdominis injury, 5 cases of rotator cuff injury, 4 cases of ligament injury, 11 cases of medial collateral ligament of the knee, 15 cases of lateral collateral ligament of the knee, 5 cases of gastrocnemius muscle and fascia, 5 cases of the tibia, and 19 cases of the lateral ankle ligament. Injuries of the medial ankle ligament occurred in 12 cases, of the anterior ligament in 13 cases, and inflammation around the lower leg in 7 cases. In total, 110 cases of injury were recorded.
Diagnosis and Treatment
Methods. A topical herbal preparation is applied. Traditionally the medicine was made into a paste with egg white; now it is made into a paste with alum. According to the extent of the damage, the ointment is applied over the injured area to a thickness of about 3 mm. It is then covered with two layers of black hemp paper, then thin gauze, and finally a bandage. The dressing is changed twice a day; by the third day the swelling and pain subside. On the fourth day the athlete can attend training, and on the seventh day can take part in intensive training and competitions. If patients are treated with an ice compress immediately after injury, the effect of the drug treatment is better.
Effect Standard.
Cure: symptoms disappear and physical signs return to normal within one week of treatment, and the athlete can take part in training and competition. Markedly effective: symptoms and signs disappear within 12 days of treatment, and the athlete can take part in high-intensity training. Effective: symptoms and signs return to normal within 16 days of treatment, and the athlete can take part in general training. No effect: after 4 weeks, symptoms and signs show no obvious change.
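The grading rubric above maps directly onto a simple decision rule; the sketch below codifies it in Python for illustration, with the day thresholds taken from the stated criteria and the assumption that the number of days until symptoms and signs normalize is the only input needed.

```python
def treatment_outcome(days_to_recovery, followup_weeks=4):
    """Grade the repair effect from the number of days until symptoms/signs normalize.
    Returns one of: 'cure', 'markedly effective', 'effective', 'no effect'."""
    if days_to_recovery is None:
        # no recovery observed within the follow-up window
        return "no effect" if followup_weeks >= 4 else "undetermined"
    if days_to_recovery <= 7:
        return "cure"
    if days_to_recovery <= 12:
        return "markedly effective"
    if days_to_recovery <= 16:
        return "effective"
    return "no effect"

print(treatment_outcome(5))     # cure
print(treatment_outcome(14))    # effective
print(treatment_outcome(None))  # no effect
```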
Computer Configuration.
The image acquisition of tendons and ligaments in this project was carried out with a GE 64-slice spiral CT, the most advanced imaging equipment in the world. The advantages of this CT machine are as follows: wide coverage, with a detector width of up to 40 mm; short scanning time, with a tube rotation time of only 0.45 seconds. Each tube rotation can complete up to 64 slices of scanning and data collection, thus reducing the X-ray radiation dose to patients. Submillimeter slice thickness (0.720 mm) volume scanning can obtain isotropic voxel data, which can be reconstructed with any slice thickness and different algorithms, further improving image quality, reducing artifacts caused by motion and respiration, and improving the clarity of the displayed anatomical structures. The image processing for this project was carried out on an Advantage Workstation, developed by GE for image acquisition in medical imaging systems. In this study, the AW4.3-5 image postprocessing software was used for analysis. AW4.3-5 is a standard image processing platform that receives standard images (DICOM format) from imaging equipment. Viewers can display and analyze scanned images, zoom, scroll, adjust window width/level, add text and graphic annotations, and compare images from different angles at the same time. Using reformat technology, the original slices can be reconstructed at any thickness and interval to generate two-dimensional images in the coronal, sagittal, oblique, and curved planes, with the same resolution as the original cross-sectional images. In addition, 2D measurement and annotation can be carried out in any plane and the images saved in DICOM format. The 3D module can be used for 3D reconstruction of continuous tomographic images in different ways, and the 3D reconstruction model can be arbitrarily rotated, translated, cut, and measured in 3D (area, volume, angle, distance, CT value, etc.) [13,14].
Image Preprocessing.
Because of the complexity of medical imaging equipment, DICOM images inevitably pick up noise during image output, transmission, and conversion, which reduces the image information and may even obscure or distort information about the disease. Therefore, before 3D visualization and analysis, it is necessary to preprocess the medical images to improve their quality. The purpose of medical image preprocessing is to suppress noise in the image by filtering or smoothing, so as to enhance the image. The image can be filtered in the spatial domain or the frequency domain. According to the characteristics of DICOM images, this paper selectively applies Gaussian filtering, median filtering, and mean filtering to the DICOM images [15,16]. The Gaussian filter is a linear smoothing filter suitable for removing Gaussian noise and is widely used for noise reduction in image processing.
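A minimal sketch of this preprocessing step is given below, applying the three filters named above to one slice with scipy.ndimage; the file name and the filter parameters (sigma, kernel size) are illustrative assumptions, not values taken from the study.

import pydicom
from scipy import ndimage

# Load one DICOM slice and convert to float for filtering.
ds = pydicom.dcmread("ct_series/slice_0001.dcm")   # hypothetical file name
img = ds.pixel_array.astype("float32")

# Gaussian filter: linear smoothing, suited to Gaussian noise.
gaussian_out = ndimage.gaussian_filter(img, sigma=1.0)
# Median filter: non-linear, suited to impulse (salt-and-pepper) noise while preserving edges.
median_out = ndimage.median_filter(img, size=3)
# Mean filter: simple neighborhood averaging.
mean_out = ndimage.uniform_filter(img, size=3)

In practice the filter, or a combination of them, would be chosen per image according to the dominant noise type before the 3D reconstruction described below.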
3.6. Image Acquisition. Basic functions of the image acquisition system: the image acquisition system is a combination of software and hardware based on a wireless network that provides a series of functions such as image acquisition, transmission, storage, management, and display [17]. Tendons and ligaments have special and complex structures, and the shape of each ligament is irregular and overlapping, so it is difficult for traditional X-ray and conventional CT to show their complete shape. With continuous spiral CT scanning and synchronous collection of volumetric data, 3D information can be obtained and 3D reconstruction can be realized. However, high-quality 3D images can only be generated from excellent tomographic images. In order to obtain clear three-dimensional images of the tendons and ligaments in this study, the following points were considered during scanning and reconstruction: (1) Scanning range: to avoid losing previous work because the region of interest is not fully covered by the scan, an appropriate scanning range is determined first. (2) Thickness and pitch: the volume of the tendons and ligaments and the spacing between them are very small, so a smaller slice thickness and pitch should be chosen. If the pitch is increased, the quality of the transverse images decreases, the resolution along the longitudinal axis decreases, and the quality of the 3D reconstructed image decreases.
(3) Reconstruction interval: after spiral CT data acquisition, the reconstruction interval can be selected retrospectively without increasing the X-ray dose. The reconstruction interval should be less than the slice thickness, and 1/2 of the slice thickness is recommended. The more the images overlap, the smoother the 3D image will be.
In this study, a GE 64-slice spiral CT was used. The scan range was from the fingertips of both hands to the end of the forearm. Scanning field: 45 × 45 cm; tube voltage: 125 kV; tube current: 280 mA; detector: 64 × 0.635 mm; pitch: 0.975:1; table speed: 9.75; scanning slice thickness: 0.5 mm; scanning interval: 0.2 mm.
3D Reconstruction Method.
The three-dimensional reconstruction and measurement in this study were completed on the image postprocessing workstation in the CT room of the imaging department of a general hospital using the GE Analyzer 4.3 software. The general 3D reconstruction process is as follows: (1) Scan the injured parts of the two groups to obtain thin-slice reconstructed axial images. (2) According to the nature of the CT images and the difference in CT value between tendon, ligament, and soft tissue, set a threshold range for segmentation. An appropriate threshold must be selected: if the threshold is too high, the thinner or lower-density parts of the ligament are excluded, forming false holes or irregular defects; if it is too low, other structures at the ligament edge are also included in the imaging range, the edge becomes blurred, and some structural layers cannot be distinguished. In this study, the 3D reconstruction tool in the software was used to segment the axial images manually until the ligament was clearly separated from the surrounding tissue [18]. (3) Because the separation of tendon and ligament from the surrounding tissue is based on a threshold, and the fat tissue inside the tendon and ligament has a very low threshold, cavities may be left when the tendon and ligament are segmented. This would affect the volume measurement, so the holes need to be filled. (4) The 3D structure of the ligament is obtained by volume rendering (VR) reconstruction of the thresholded images. Because the space between tendons and ligaments is small, threshold segmentation cannot separate the individual ligaments. However, in order to measure the volume and density of each ligament, each ligament must be separated, so manual cutting in three-dimensional space is also needed. In addition to the measured tendons and ligaments, tightly connected ligaments are cut apart manually [19,20].
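The core of steps (2) to (4) can be sketched as follows: CT-value thresholding, hole filling, isolation of the largest connected component, and volume and mean-CT measurement. The threshold window, voxel spacing, and function name are assumptions for illustration; the manual mask editing performed in the study is not reproduced here.

import numpy as np
from scipy import ndimage

def segment_and_measure(volume, low=60.0, high=200.0, spacing=(0.5, 0.4, 0.4)):
    """volume: 3D array of CT values; spacing: voxel size (z, y, x) in mm. Placeholder values."""
    mask = (volume >= low) & (volume <= high)     # threshold segmentation by CT value
    mask = ndimage.binary_fill_holes(mask)        # fill cavities left by low-threshold fat tissue
    labels, count = ndimage.label(mask)           # keep only the largest connected structure
    if count > 1:
        sizes = ndimage.sum(mask, labels, index=range(1, count + 1))
        mask = labels == (int(np.argmax(sizes)) + 1)
    voxel_mm3 = float(np.prod(spacing))
    volume_mm3 = float(mask.sum()) * voxel_mm3    # 3D volume of the segmented structure
    mean_ct = float(volume[mask].mean()) if mask.any() else float("nan")
    return mask, volume_mm3, mean_ct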
Statistical Treatment.
By comparing the volume, relative volume, and CT value of the tendons and ligaments with those of the control group, this paper studies the influence of Wushu training on ligament morphology and structure. The experimental data are expressed as mean ± standard deviation (x ± s). The significance of the difference between the two groups was assessed with the independent-samples t-test, with P < 0.05 taken as significant. All statistical calculations were performed with the SPSS 11.5 package.
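A hedged sketch of this comparison is shown below using scipy.stats instead of SPSS; the two arrays are made-up placeholder values, not the measured ligament volumes from this study.

import numpy as np
from scipy import stats

athletes = np.array([2.31, 2.54, 2.47, 2.62, 2.40])   # hypothetical ligament volumes (cm^3)
controls = np.array([2.05, 2.18, 2.11, 2.22, 2.09])   # hypothetical control values

# Independent-samples t-test, two-sided, significance taken at P < 0.05.
t_stat, p_value = stats.ttest_ind(athletes, controls)
print(f"athletes: {athletes.mean():.2f} +/- {athletes.std(ddof=1):.2f}")
print(f"controls: {controls.mean():.2f} +/- {controls.std(ddof=1):.2f}")
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {p_value < 0.05}")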
Comparative Analysis of the AOFAS Ankle-Hindfoot Score before and after Treatment.
It can be seen from Table 1 and Figure 1 that the AOFAS ankle-hindfoot scores of the two groups were significantly improved after treatment compared with before treatment, and the difference was significant by t-test (P < 0.05). After treatment, the score was 15 points higher than before treatment, the average score was 42.75 ± 3.13, and the excellent and good rate was 92.16%. The results showed that type I and type II injuries of the lateral ligaments of the ankle joint can heal only if the ankle joint is immobilized; exercise of the ankle muscles can then restore the range of motion and function of the ankle joint, while some type I and type II ligament ruptures with joint dislocation need active treatment. There are many methods to treat injuries of the lateral ligament of the ankle. When the ligament tissue is deficient, the original ligament tissue cannot be used, and tendon transplantation is needed to treat the lateral ligament injury of the ankle joint. Methods of tendon transplantation include autogenous tendon transplantation, allogeneic tendon transplantation, and tissue-engineered materials. When the extent of the tendon defect is large, autograft material is limited and cannot meet the clinical need, which increases the difficulty of tendon reconstruction and repair. Allogeneic tendon grafts are abundant and convenient; they cause no damage to normal structures, preserve the original physiological structure, meet the size and quality requirements of reconstruction and repair, and reduce the operative time and risk. At present, allogeneic tendon transplantation is considered the most suitable method to repair injuries of the lateral ankle ligament.
Ankle ligament is an important structure to maintain the stability of the ankle joint, and ankle ligament injury is often an integral part of the trauma pathology of ankle fracture and dislocation.
Analysis of Abnormal Changes in Adjacent Tissues or Structures.
Ligament injury usually involves a change in ligament thickness.
This study measured the thickness of the injured ligaments and the healthy ligaments, and the difference was statistically significant, suggesting that ligament thickening can be used as one of the important indexes for judging acute injury of the lateral ligament of the ankle. At the same time, ligament injury causes a series of abnormal changes in adjacent tissues or structures, the most common of which are fracture, bone damage, cartilage damage, joint capsule effusion, and tendon sheath effusion.
According to Table 2 and Figure 2, there were 43 cases of type I injury, including 36 cases of anterior talofibular ligament injury and 5 cases of calcaneofibular ligament injury; 13 cases (28%) were accompanied by bone injury, 28 cases by joint capsule effusion, and 6 cases by peroneal tendon sheath effusion. No obvious cartilage damage was found.
There were 45 cases of type II injury: 15 cases of simple injury, all of which were anterior talofibular ligament injuries, and 30 cases of combined injury. The associated findings included 37 cases of bone injury, 44 cases of joint capsule effusion, 9 cases of cartilage injury, and 25 cases of peroneus longus tendon sheath effusion.
There is a certain correlation between the above-mentioned complications and the location and degree of ligament injury. The peroneus longus and brevis tendons run over the surface of the calcaneofibular ligament and accompany it, so when the calcaneofibular ligament is injured, the sheath of the peroneal tendons may also be involved. Therefore, effusion of the peroneal tendon sheath can be used as one of the criteria for diagnosing injury of the calcaneofibular ligament, which is consistent with this group of cases. In addition, effusion of the anterolateral joint capsule often indicates bone injury at the attachment of the anterior talofibular ligament, pointing to the corresponding ligament injury.
Comparison of the MRI Repair Findings between the Treatment Group and the Nontreatment Group.
As can be seen from Table 3 and Figure 3, at 4 weeks 38 cases (86.36%) in the treatment group and 27 cases (61.36%) in the nontreatment group were well repaired. In the well-repaired cases there was no abnormal signal on MRI of the injured ligament, and its shape was basically the same as that of the contralateral ligament. In patients with poor repair, MRI showed only slight thickening compared with the contralateral ligament and a small amount of joint cavity effusion; clinical discomfort was mild, walking was basically normal, and reexamination at 8 weeks showed that the ligament had repaired well. Therefore, it can be considered that grade I ligament injury does not affect the stability of the ankle joint and can be treated conservatively without external fixation.
The repair time is about four weeks. Ligament injury is very common in daily life and sports. If the diagnosis of these injuries is neglected and they are not treated in time, persistent joint pain, traumatic arthritis, osteonecrosis, sinus tarsi syndrome, and other serious complications may follow. Through MRI examination, we can observe the signal, morphological changes, and repair process of the injured ligament, so as to evaluate the ligament repair process accurately and comprehensively and guide the selection of the clinical treatment plan. Most of these injuries are caused by passive overstretching of the ligaments. The swollen, injured ligaments are thickened and may show microbleeding, but their continuity is preserved.
There is no macroscopic manifestation of ligament rupture; the degree of injury is relatively mild, the stability of the ankle joint is not affected, and the repair outcome is good.
As shown in Table 3, at four weeks of treatment the good repair rate of the treatment group was 86.36% and that of the nontreatment group was 61.36%; at the eighth week, the good repair rate of the treatment group was 100% and that of the nontreatment group was 79.54%.
Lysholm and IKDC Treatment Effect Scores.
The 88 patients were followed up for 10-11 months, with an average of 10.3 months.
According to Table 4 and Figure 4, the t-test was used to analyze the Lysholm score, IKDC score, and knee joint mobility in the first year after operation, and the differences were statistically significant compared with the preoperative values. The motor function of 80 patients was close to normal; 6 patients had stiffness due to joint adhesion, which improved significantly after closed release and actively guided rehabilitation training. Within 3 to 5 months after treatment, the sensory and motor functions of the affected limbs recovered.
Conclusions
Once a ligament injury occurs, it causes local pain, swelling, and congestion in the injured person. If the ligament injury does not receive reasonable and effective treatment, it can also leave harmful sequelae. Ligament injury is a common sports injury. Patients with mild injuries usually only need local fixation of the injured part and recover normal function through the self-healing ability of the tendon and ligament. For more serious sprains, and especially for ligament tears or ruptures, timely and active medical treatment is required. Martial arts athletes are a high-incidence population for tendon and ligament injury. When the treatment effect is poor, serious complications such as osteonecrosis and sinus tarsi syndrome often follow. In view of this, this paper further studies the repair effect of tendon and ligament injuries in martial arts. Based on the analysis of the 88 samples in this observational study, this paper considers that the best treatment for patients with type I ligament injury is local fixation and drug treatment, while surgery is recommended for patients with severe tears and ruptures. At present, allogeneic tendon transplantation is considered the most suitable treatment for ligament tear and rupture. According to the Lysholm and IKDC treatment scores, the average repair cycle of ligament injury is 8-12 weeks. The Lysholm score of the 88 patients was 16.85 before treatment and 89.33 one year after treatment, which shows that this treatment method is highly effective. For severe cases, active rehabilitation training can basically restore the biological function of the ligament. Through this study, we have gained a new understanding of the repair mechanism of tendon and ligament injury, which is helpful for clinical diagnosis and treatment.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
|
2022-06-27T17:39:39.418Z
|
2022-06-24T00:00:00.000
|
{
"year": 2022,
"sha1": "0d198304d82f558c81fd2ce59b310894bed3f08e",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/cin/2022/8494734.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "09446a340d23324741ff25db5a00b5449ea82f10",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
18427883
|
pes2o/s2orc
|
v3-fos-license
|
Systemic factors of errors in the case identification process of the national routine health information system: A case study of Modified Field Health Services Information System in the Philippines
Background The quality of data in national health information systems has been questionable in most developing countries. However, the mechanisms of errors in the case identification process are not fully understood. This study aimed to investigate the mechanisms of errors in the case identification process in the existing routine health information system (RHIS) in the Philippines by measuring the risk of committing errors for health program indicators used in the Field Health Services Information System (FHSIS 1996), and characterizing those indicators accordingly. Methods A structured questionnaire on the definitions of 12 selected indicators in the FHSIS was administered to 132 health workers in 14 selected municipalities in the province of Palawan. A proportion of correct answers (difficulty index) and a disparity of two proportions of correct answers between higher and lower scored groups (discrimination index) were calculated, and the patterns of wrong answers for each of the 12 items were abstracted from 113 valid responses. Results None of 12 items reached a difficulty index of 1.00. The average difficulty index of 12 items was 0.266 and the discrimination index that showed a significant difference was 0.216 and above. Compared with these two cut-offs, six items showed non-discrimination against lower difficulty indices of 0.035 (4/113) to 0.195 (22/113), two items showed a positive discrimination against lower difficulty indices of 0.142 (16/113) and 0.248 (28/113), and four items showed a positive discrimination against higher difficulty indices of 0.469 (53/113) to 0.673 (76/113). Conclusions The results suggest three characteristics of definitions of indicators such as those that are (1) unsupported by the current conditions in the health system, i.e., (a) data are required from a facility that cannot directly generate the data and, (b) definitions of indicators are not consistent with its corresponding program; (2) incomplete or ambiguous, which allow several interpretations; and (3) complete yet easily misunderstood by health workers. Taking systemic factors into account, the case identification step needs to be reviewed and designed to generate intended data in health information systems.
Background
Management of local health systems, especially in lowresource settings, requires relevant indicators and quality data from routine health information systems (RHIS) [1]. However, problems have been repeatedly reported on RHIS, such as (a) unreliable data [2,3], (b) incomplete and delayed reports [4,5], (c) putting too much burden on health workers [6,7] and (d) low use of data [8]. This situation hampers evidence-based decision-making, especially in local health systems that depend on data available mostly on RHIS. Although several efforts have been made to improve the performance of RHIS (e.g. computerization of data processing, simplification of definitions of indicators, revision of recording and reporting forms, and re-structuring of health information systems), most developing countries still lack sufficiently strong and effective health information systems [9].
A similar situation is seen in RHIS in the Philippines. The Field Health Services Information System (FHSIS) is a national RHIS that has been operated by the Department of Health (DOH) since 1990 [6]. The system was updated in 1996 to accommodate the use of the system under the devolution [10,11]. The FHSIS 1996 version was expected to solve persistent problems in the FHSIS 1990 version [12]. However, there are still claims that (a) data are unreliable and (b) reports are delayed [13]. DOH-National Epidemiology Center has made continual interventions, such as computerization of data-handling processes, simplification of indicators and revision of standardized forms. Despite these efforts, effective ways to improve the quality of data in FHSIS remain unclear [13].
Quality of data can be compromised in each step of data handling, such as case identification, data transmission, data processing and data analysis. In other countries, inconsistencies of data have been observed in each data-handling step of existing health information systems [2,3]. However, the mechanisms of committing errors in those systems are not fully understood, thus limiting further discussion on the development of data-handling steps that may help to reduce the occurrence of errors.
Quality of data can be compromised in the case identification step even if data are appropriately handled through the subsequent steps. Although tasks in the case identification step require health workers to understand the definition of each indicator, prior to this, health workers' understanding requires consistent settings of case definitions that ensure eligible cases are appropriately identified when health workers follow the definitions. Otherwise the definition itself may induce health workers to commit errors for systemic reasons. Such causal factors are known as systemic factors of human errors [14]. Improvement opportunities for RHIS can be identified through responses from the health workers who actually work on it. Efforts to reflect health workers' responses in achieving quality data have been made through assessments of their level of understanding. These efforts have pointed to further training for health workers. However, this approach alone cannot identify improvement opportunities regarding systemic factors of human errors in RHIS.
In the education field, instructions to reach a given goal or standard are assessed by a combined use of the level and disparity of understanding of items in a criterion-referenced test [15]. RHIS communicates its standard through an operational manual, various trainings, and recording and reporting forms, in order to achieve a full understanding of the standard among health workers and thus generate the intended data in RHIS. This method of assessing instructions can therefore be applied to health workers' responses to the RHIS standard to further understand the mechanisms of committing errors in the case identification step of RHIS.
This study aimed to investigate the mechanisms of committing errors in the case identification step in the existing routine health information system (RHIS) by measuring the level and disparity of health workers' understanding of health program indicators used in the Field Health Services Information System (1996 version) in the Philippines, and characterizing those indicators accordingly.
Data Collection
To gauge health workers' understanding of indicators in the FHSIS 1996 version, a structured questionnaire was administered to 132 health workers who were in charge of the case identification step for the first quarter (January, February and March) of 2006 from 14 selected municipalities, 11 from the mainland and three from nearby islands, in the province of Palawan. These 14 municipalities were selected as priority municipalities of the study because they contributed more than 80% of the reported data to the Provincial Health Office for each of the 12 indicators in the first quarter of 2006.
Health workers, including midwives and public health nurses, are responsible for generating data for FHSIS at the field health facilities such as health posts (Barangay Health Station, BHS) and health centers (Rural Health Unit, RHU) operated by the local government units. They record individual cases and identify eligible cases following the definitions of indicators in the official guideline of FHSIS. Cases seen in private health facilities that did not use public health facilities (BHS and RHU) are not supposed to be included in FHSIS. The eligible cases are then counted and reported monthly to the RHU using standardized reporting forms. In the first quarter of 2006, 132 health workers, including three public health nurses, 128 midwives and one medical technologist, prepared monthly reports in 166 out of 216 BHSs and 14 out of 22 RHUs under the Provincial Health Office of Palawan. Some of these health personnel were assigned in more than one health facility in the target municipalities.
The structured questionnaire including 12 items on definition of 12 selected indicators of FHSIS was developed based on (1) document review of FHSIS report to identify regularly-reported indicators; (2) two separate focus group discussions conducted by local program managers, one for 12 public health nurses conducted by a program manager in the Regional Health Office and another for 14 midwives conducted by the Provincial Health Office, from 13 municipalities in the province of Benguet to find out difficulties of health workers in the case identification step; and (3) key informant interviews with program managers in both the Provincial Health Office of Palawan and Benguet to identify indicators that are perceived of doubtful data. The program managers in both distant provinces -Palawan is located in the southwest of Luzon Island, which is closer to Malaysia, and Benguet is located in the mountainous area of Luzon Island -similarly perceived eleven indicators as generating a doubtful report. These eleven indicators included two for maternal health care, two for family planning, three for child health, one for nutrition and three for infectious diseases. An indicator for malaria control has been added because it is not only perceived as a doubtful indicator but the disease is also highly endemic in Palawan. Thus 12 indicators were selected from a list of 78 indicators in different health programs in the services accomplishment component of FHSIS. Table 1 shows the selected 12 indicators and their definitions in the official guideline of FHSIS.
Twelve items were developed for the 12 selected indicators. The questionnaire is provided as an additional file 1. Nine items asked to identify eligible cases from a list of eligible and ineligible cases. Two items asked for tasks to identify eligible cases on simulated individual records, and one item asked for a task to calculate the simulated number of cases. Choices in each item were developed based on actual individual cases identified on patient records in BHS and RHU in the provinces of Benguet and Palawan.
Among the cases observed in field visits and identified in focus group discussions, some were difficult to judge whether they are eligible or ineligible under the definitions in the official guideline of FHSIS. These doubtful cases were asked the DOH National Epidemiology Center to judge their eligibility.
Practical utility of the items was checked in a pre-test with 26 health workers in the province of Benguet. To ensure applicability of the items to Palawan, each item was reviewed and modified by each health program manager in the Provincial Health Office and a public health nurse in one RHU in Palawan. A pre-test in Palawan was conducted with three midwives in the city health center of Puerto Princesa, Palawan. The city health center was chosen for the pre-test because the FHSIS report was required for the city health center but had an independent reporting line from that of the province. Expressions of items likely to misguide the respondents were corrected based on interviews with respondents immediately after the pre-test.
The language used in the questionnaire was English, the official language of the Philippines, because (1) all official documents are in English, including all the documents related to FHSIS; (2) high school education is in English and (3) the national board exams for midwives and nurses are conducted in English.
To avoid sharing the contents of the questionnaire among the health workers, the questionnaire was administered by the staff of the Provincial Health Office between April and July 2006, when no training was scheduled in the province. All 132 health workers were asked to visit their respective RHU for orientation regarding the study and instructions for filling out the questionnaire.
Ethical Consideration
The participation of health workers on this study was on a voluntary basis. The name of the health worker was required but such individual information was used only for research purposes and not for individual assessment. The survey was approved by the Provincial Health Officer and all 14 Municipal Health Officers in the targeted areas.
Measures and Analysis
For each of the 12 items in the questionnaire, a discrimination index and a difficulty index were calculated, and the patterns of wrong answers were abstracted.
The difficulty index, also known as the proportion of respondents who got the correct answer in an item [16], was used to measure the level of understanding of indicators among the health workers. It ranges from 0 to 1; the closer the index to 1, the higher the number of respondents who correctly understand the content being measured by the item. However, the difficulty index alone does not tell whether an item was equally or unequally understood among respondents, and subsequently cannot identify characteristics of an item that may point to an area of instruction. Thus, in addition to the difficulty index, a generalized upper and lower discrimination index was used to measure the disparity in the level of understanding of indicators among the health workers. It calculates the item's ability to discriminate between those who scored high on the total test and those who scored low by subtracting the difficulty index of the lower group from that of the upper group [15]. It ranges from -1 to 1; the closer the index to |1|, the higher the ability of the item to discriminate between those who scored high on the total test and those who scored low. To calculate the discrimination index for each of the 12 items, the respondents who ranked within the top quartile point of the total number of correct answers was considered the upper group and those who ranked within the bottom quartile point of the total number of correct answers was considered the lower group.
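A short sketch of these two item statistics is given below, computed from a 0/1 response matrix (rows are respondents, columns are items); the quartile grouping follows the description above, while the random matrix is only a placeholder for the actual survey responses.

import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(113, 12))            # placeholder: 113 respondents x 12 items

totals = responses.sum(axis=1)
upper = responses[totals >= np.quantile(totals, 0.75)]     # respondents in the top quartile of total score
lower = responses[totals <= np.quantile(totals, 0.25)]     # respondents in the bottom quartile

difficulty = responses.mean(axis=0)                        # proportion of correct answers per item
discrimination = upper.mean(axis=0) - lower.mean(axis=0)   # upper-group minus lower-group difficulty

for i, (p, b) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"item {i:2d}: difficulty index = {p:.3f}, discrimination index = {b:+.3f}")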
Using the discrimination index, each item on the questionnaire is classified into one of three categories: (a) a positively discriminating item, which means that a significantly larger number of respondents in the upper group than in the lower group answered the item correctly, (b) a negatively discriminating item, which means that a significantly larger number of respondents in the lower group than in the upper group answered it correctly, (c) a non-discriminating item, where the percentage of correct answers in the upper group and the lower group were approximately equal.
Difficulty and discrimination indices are typically used in norm-referenced and criterion-referenced tests in the field of education. The norm-referenced test aims to measure the position of the tested individual in a population who took the test, thus an item with a positive discrimination index against a difficulty index within a certain range is preferred according to the test objective [17]. The criterion-referenced test, which was applied in the present study, aims to measure the level of achievement of respondents against given goals or standards, thus an item with non-discrimination against a higher difficulty index is considered a better achievement of instruction measured by the item [18].
Quality of items in criterion-referenced tests is assessed by sensitivity to the corresponding instructions [19]. We applied this method to the assessment of quality of the existing indicators in FHSIS based on health workers' understandings. Since (1) definitions of indicators in FHSIS were used for the criterion of correct answers in the items and (2) distracters of choices in the given items were developed based on actual cases found on patient records, a positive discrimination index here may indicate that the instruction needs to be revised to be more effective for the lower group, and non-discrimination against a low difficulty index may indicate that not only instruction but also revision of the setting of the indicator itself may need to be reviewed.
Table 1. Definitions of the 12 selected indicators in the official guideline of FHSIS (1996)
Selected indicator in FHSIS, with its definition in the official guideline of FHSIS:

Pregnant women with 3 or more prenatal visits
Pregnant women who had 3 or more pre-natal visits during the month such that at least one visit occurs during the first trimester, one during the second trimester and at least one during the last trimester.

Family Planning: Current users
The number of FP clients who have been carried over from the previous month after deducting the drop-outs of the present month and adding the new acceptors of the previous month. Changing clinic, changing method and restart are included under current users.

Family Planning: New acceptors
Clients who were using a contraceptive method for the first time or new to the program.

Severely Underweight Children (6-59 months)
Children 6-59 months old who were found to be severely underweight. Cases identified should only be reported once during the year.

Pneumonia cases seen (0-59 months)
The number of 0-59 months old children seen at the health facility during the month for consultation due to pneumonia. Severe pneumonia, very severe pneumonia and non-pneumonia are not included in this indicator.

Infant given BCG
All concerned are hereby requested to report on a monthly and quarterly basis the number of infants given BCG, DPT1, DPT2, DPT3, OPV1, OPV2, OPV3, HepaB1, HepaB2, HepaB3 and Measles vaccine on a per antigen basis.

Fully Immunized Children (9-11 months)
Children from 9-11 months old who have been given BCG, 3 doses of DPT and OPV, and measles vaccine. The child is counted as FIC as soon as all the required vaccines are administered, without waiting for the child to reach 1 year of age.

Rabies: Animal Bite Cases Seen
Persons who were bitten by an animal (dogs, cats, and others) during the month.

Malaria: Confirmed Cases
Malaria cases identified through blood smear.

TB symptomatics with sputum examination
Individuals with symptoms compatible with TB who had sputum examination during the month.

New sputum positive cases initiated treatment
New cases found positive through sputum examination and initiated on anti-TB treatment. New cases refer to those who have never taken any anti-TB drugs or who have taken anti-TB drugs for less than one month.
Ideally, to ensure the quality of data in the health information system, all health workers must be able to equally identify eligible cases for indicators at the case identification stage, which means that the discrimination index of each item is not significant while its difficulty index is high. Assuming the probability of a correct answer in the upper group (p1) and the lower group (p2) is equal (p1 = p2 = 0.50), the discrimination index (B) that gives P(B) < 0.05 for a two-tailed test of significance was identified by the following formula [15]:
P(B = k) = sum over (U, L) with B = U/n1 - L/n2 = k of C(n1, U) C(n2, L) / 2^(n1 + n2),
where C(n, x) denotes the binomial coefficient, U and L are the numbers of correct answers in the upper group (size n1) and the lower group (size n2), and the sets of values (U, L) that generate any given B = k are given by all integral solutions for U and L of n2*U - n1*L = k*n1*n2; U = 0, 1, ..., n1; L = 0, 1, ..., n2. The significant cut-off is the smallest k for which the two-tailed probability of |B| >= k falls below 0.05.
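The calculation above can be sketched numerically as follows: under the null hypothesis p1 = p2 = 0.5 the exact probability of each attainable value of B is enumerated from two independent Binomial(n, 0.5) distributions, and the smallest B whose two-tailed tail probability falls below 0.05 is reported. The group sizes default to those of this study; the function name and structure are otherwise illustrative.

from math import comb

def discrimination_cutoff(n1=42, n2=39, alpha=0.05):
    # Null probabilities of U correct answers in the upper group and L in the lower group.
    pmf1 = [comb(n1, u) / 2 ** n1 for u in range(n1 + 1)]
    pmf2 = [comb(n2, l) / 2 ** n2 for l in range(n2 + 1)]
    # Every attainable discrimination index B = U/n1 - L/n2 with its null probability.
    pairs = [(u / n1 - l / n2, pmf1[u] * pmf2[l])
             for u in range(n1 + 1) for l in range(n2 + 1)]

    def tail_prob(k):
        # Two-tailed probability of observing |B| at least as large as k.
        return sum(p for b, p in pairs if abs(b) >= k)

    for k in sorted({b for b, _ in pairs if b > 0}):
        if tail_prob(k) < alpha:
            return k
    return None

print(round(discrimination_cutoff(), 3))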
All statistical procedures were done with SPSS version 14.0 and Microsoft Excel 2003.
Results
The response rate of the questionnaire survey was 94.7% (125/132). 113 valid responses (85.6%) were analyzed after excluding 12 incomplete responses. Table 2 shows the relevant profile of the respondents.
Difficulty index and discrimination index
The difficulty index and the discrimination index of each of the 12 items are shown in Figure 1. The discrimination index (B) of 0.216 which gives P (B) < 0.05 for a two-tailed test of significance was determined based on the number of the upper group (n 1 = 42) and that of the lower group (n 2 = 39). No significant difference was found in the number of the upper group and the lower group by the type of assigned facility (p = 0.203), the location of the assigned municipality (p = 0.783) and the source of FHSIS knowledge (p = 0.756).
None of the items reached a difficulty index of 1.00. An average of the difficulty indices of the 12 items was 0.266. According to the cut-off discrimination index (B) of 0.216 and the average difficulty index, 12 items were grouped into three categories ( Figure 1). Six items showed non-discrimination against lower difficulty index of 0.035 (4/113) to 0.195 (22/113), two items showed positive discrimination against lower difficulty index of 0.142 (16/113) and 0.248 (28/113) and four items showed positive discrimination against higher difficulty index of 0.469 (53/113) to 0.673 (76/113).
The difficulty index of each item was examined by characteristics of respondents such as an assigned facility, a location of the assigned municipality and a source of FHSIS knowledge. Significant differences were observed on the item on "New acceptors" (p = 0.042) when health workers from RHU (80.0%, 8/10) were compared to those from BHS (42.7%, 44/103), the item on "Fully immunized children (9-11 months)" (p = 0.038) when health workers from mainland municipalities (13.5%, 12/89) were compared to those from island municipalities (41.7%, 10/24), and the item on "TB symptomatic with sputum exam" (p = 0.002) when health workers who received a formal training on FHSIS from the Department of Health or the Provincial Health Office (0%, 0/60) were compared to those who obtained the knowledge of FHSIS through an informal training by a trainer in a municipality who was trained by the Provincial Health Office or the Department of Health (6.3%, 3/48) and those who obtained the knowledge through self-learning (40.0%, 2/5).
Patterns of wrong answers
An additional file 2 provides 12 items and the frequency of choices in each item.
(1) Items with non-discrimination against a lower proportion of correct answers. For the item on "Family Planning: Current Users", respondents answered the number of "Current Users of the reporting month" that is obtained when the number of "New Acceptors of the reporting month" is used in the calculation instead of the number of "New Acceptors of the last reporting month". The item on "Fully Immunized Children (9-11 months)" showed a difficulty index of 12.4% (14/113).
Discussion
Errors in coding and classification in case identification, such as mistakes in data entry [20] and misinterpretation of the information in the original documents [21], are typically observed as human errors. It is known that human errors in coding and classification tasks have characteristics, such that (1) many errors are non-random [21] and (2) more complex cases are more prone to errors [22]. These characteristics imply underlying unique causes of errors in the case identification step in health information systems.
Regarding the causes of human errors in the case identification step of RHIS, confusion about the definition of indicators among health workers has been reported as a low percentage of respondents who could correctly explain the definitions of the indicators [23]. However, lack of training has tended to be considered the main factor behind such confusion, while the mechanism of the confusion had not been well understood.
The present case study of the FHSIS 1996 version in the Philippines demonstrated that indicators can be characterized by how the health workers understand them. The discrimination index and difficulty index for each item show the current understandability of the definition of indicators. When the unique setting of each indicator and pattern of wrong answers are considered, systemic factors behind health workers' confusion are highlighted and their possible countermeasures can be identified.
Characteristics of indicators (1) Indicators that are unsupported by the current conditions in the health system
Four of six indicators with low difficulty indices that led to non-significant differences in the level of understanding highlight characteristics of indicators that are unsupported by the current conditions in the health system. These indicators can be improved by first ensuring consistency between the definition of the indicator in the FHSIS and the current condition of health systems.
The definition of "TB symptomatics with sputum exam" seems to have been practically simplified by health workers according to the available function of generating data since most of the time health posts were not equipped to do the sputum microscopy. Although this item showed a low difficulty index, those who selected the correct choices may have established a feedback mechanism with their RHUs. Another similar example is "New sputum positive initiated treatment" for tuberculosis. This requires initial treatment but the treatment is given to a patient as soon as the patient is confirmed as tuberculosis at RHU, not at BHS. For both indicators, BHS cannot directly collect data for reporting. Indicators that require data to a facility that cannot directly generate the data was also reported in Chad [24]. On the contrary, the indicator of "Malaria: Confirmed cases" similarly requires blood smear examination at BHS. However it showed a relatively high difficulty index with a positive discrimination index. This can be explained by the availability of barangay malaria microscopists at health posts for blood smear examination in the province of Palawan. Unlike the other malaria endemic areas of the Philippines, Palawan has its own malaria control project called Kilusan Ligtas Malaria that trained and deployed malaria microscopists in the barangays (villages).
There seems to be a gap between the definition and practice of data collection for the indicator on "Rabies: Animal bite cases seen" because animal bite cases seen in RHU and BHS are often transferred to animal bite centers in hospitals and some RHUs, where a post exposure immunization is available and given if necessary. As a result, what health workers know is "animal bite cases transferred" when there was no feedback from animal bite centers. Also, ambiguous definition can be another reason because "others" in the definition of the indicator allows health workers to consider several interpretations. Even "snake bites" was considered as suspected cases of rabies. Since DOH intended "others" to mean animals with risks of transmission of the rabies virus, such as mammals or canines, an additional definition specifying the meaning of "others" would be another approach to make the definition more understandable.
"Pneumonia cases seen (0-59 months)" showed that severe pneumonia cases were considered eligible cases to report while the services accomplishment component of FHSIS excludes severe pneumonia cases from its definition of "Pneumonia cases seen (0-59 months)". This can be partly explained by the inconsistency between the definition in FHSIS and the case definition of notifiable diseases. Although the definition of this indicator in FHSIS covers children aged 0-59 months, there are only two categories for children less than 2 months -severe pneumonia and no pneumonia -according to the case definition of notifiable diseases. Confusions among health workers created by inconsistencies between definition of RHIS and its corresponding program were also known in other countries [25]. Another reason for the confusion can be a limited role of health workers because final diagnosis was often given by doctors while health workers have already counted them as pneumonia cases.
On the contrary, "Severely underweight children (6-59 months)" showed a relatively high difficulty index with a positive discrimination index although there were inconsistencies of two different definitions of weighing in FHSIS and the corresponding program. This indicator had the characteristics of covering cases in both daily health services and biannual special campaigns such as Operation Timbang [26]. In the 2004 update of the nutrition program, a new indicator was introduced for underweight cases reported from special campaigns of Operation Timbang [27], yet there was no update in the corresponding indicator in FHSIS in 2006. Both of these indicators used different criteria following a different standard of growth monitoring. Consequently, both of these criteria must be chosen to identify the eligible case in FHSIS. The health workers with the correct answer seem to have been informed of the situation through the Municipal Nutrition Office since the Nutrition Program gave an explicit instruction of difference between Operation Timbang and FHSIS in the implementation guideline [27].
(2) Indicators with incomplete or ambiguous definitions Two indicators with low difficulty indices that led to significant differences in the level of understanding highlight characteristics of incomplete or ambiguous definitions. These indicators can be improved by clarifying their definitions first.
The item on "Pregnant women with 3 or more prenatal visits" showed that health workers may have at least two other interpretations for the pregnant woman who had several visits in different months in her 3rd trimester, such as reporting the same pregnant woman more than once or reporting the pregnant woman late by waiting until the woman completes the delivery. Even though DOH intended that the eligible pregnant woman be reported once during the pregnancy as soon as she met all criteria in the definition, an explanation of timing and number of reports was lacking in the definition. This would require an additional definition of the timing of report and a definition that the same pregnant woman should be reported only once during her pregnancy.
The item on "Infants given BCG" showed that the definition can allow health workers to consider several interpretations of eligible cases based on different target populations and eligible cases of FHSIS reporting. Since this indicator was added after implementation of the official guideline of FHSIS 1996, the department circular No. 289 s.2000 was distributed to health administrative offices and field health service units to inform them of its definition. Even though DOH intended the case to include infants who received BCG vaccine at field health service units regardless of infants' residence, such as within or outside the catchment area of the assigned field health service unit, there is no explanation of such specification in the definition. This situation allowed another interpretation of reporting infants who received BCG vaccine at the assigned field health service unit only when the infants resided within its catchment area.
The original intention of FHSIS was to address the short-term data needs of DOH staff with managerial or supervisory functions in DOH facilities and in each of the program areas [26]. FHSIS was updated in 1996 to accommodate the situation under the devolution; thus, Local Government Units were included as users of FHSIS for their role in managing field health service units [11]. These intentions led to the conclusion that the eligible population for all indicators in FHSIS is identified from the population who received health services at the assigned field health service unit. However, identification of eligible cases for an indicator such as Fully Immunized Children (9-11 months) requires records of all given antigens even if some of them were not given at the assigned field health service unit. Furthermore, the target population for each of the required antigens and for Fully Immunized Children (9-11 months) was projected from the total census population. These practices seem to be a source of the confusion because they imply that eligible cases for reporting in FHSIS are identified within the target population of the responsible field health facility.
(3) Indicators with complete definition yet easily misunderstood by health workers
Two indicators with higher difficulty indices that result in a significant difference in the level of understanding among respondents highlight characteristics of indicators with complete definition yet are easily misunderstood by health workers. These indicators may need improved instructions so that all health workers will have the same level of understanding of the indicator. For example, a misunderstanding found in the item on "Pregnant women with TT2 plus" is explained by the understanding of the meaning of "TT2 plus"; "TT2 plus" could be interpreted as more than TT2. The item on "Family Planning: New Acceptors" indicated that most misunderstandings were explained by understanding the meaning of "New Acceptors". "New Acceptors" could suggest "new to methods" or "new to a clinic". Since definitions of these terms are clearly described in the official guideline of FHSIS, these items may show a need for further training. However, specification of terms can be considered further improvement of instructions, such as "TT2 to TT5" instead of "TT2 plus", and "New to program" instead of "New Acceptors". Differences in understanding of "New Acceptors" among health workers in RHU and those in BHS could be partly explained by the existing protocol of the Family Planning Program. Clients who are new to the program are recommended to visit RHU first for an examination, and are then followed up by BHS. Not all clients may visit RHU first but the frequency of meeting such clients will still be higher in RHU.
Even if the definition of the indicator in the FHSIS is consistent with the current condition of health systems, when an item shows a low difficulty index against a non-significant discrimination index, the indicator seems too difficult for health workers and the definition itself may require re-definition to simplify it.
For example, the definitions of the indicators for "Family Planning: Current Users" and "Fully Immunized Children (9-11 months)" were consistent with the current condition of health systems and clearly described in the manual of FHSIS. However, these indicators showed a relatively low difficulty index with a non-significant discrimination index. "Family Planning: Current Users" may be more difficult for health workers even though its definition is clearly described. To fully understand the indicator, health workers need to understand not only the term definitions but also a formula before its actual calculation unlike the other indicators which only require understanding of term definitions.
"Fully immunized children (9-11 months)" demonstrated the confusion among health workers between children who were given all antigens less than 1 year old (Fully Immunized Children) and those given all antigens regardless of being less than 1 year old.
If the instructions do not promote understanding of these indicators, the indicators themselves may require re-definition, such as asking for a report of the components of the formula or introducing a new indicator.
In addition to the systemic reasons described above, weakness in the current mechanism for achieving a full understanding of the FHSIS standard seems to contribute to the confusion among health workers. For example, the case definition "children given all antigens at less than 1 year old" is easy to understand once there is a chance to learn about it. According to staff of the Provincial Health Office of Palawan, it is difficult to keep the right knowledge in their health system because of frequent staff changes and limited opportunities for training. Even though a training-of-trainers approach has been applied by the Provincial Health Office of Palawan to transfer knowledge and skills to municipalities, the reality of training below the municipality level is still unknown. Better understanding of "Fully immunized children (9-11 months)" among health workers in the island municipalities may be partly explained by more efficient training due to the relatively smaller number of health workers. In such a situation of limited training opportunities, the definition of indicators must be simple, and the recording and reporting forms used for routine work need to contain instructions for data-handling procedures [28].
Even though these 12 indicators are technically important for the management of local health systems, they are impractical because they risk inducing health workers to commit errors. Figure 2 summarizes the link between the behavior of the difficulty index and the discrimination index and the possible countermeasures to be considered to achieve full understanding of the definitions of indicators. It also provides a guide for investigating potential causal factors of errors in case identification. For example, when an indicator shows non-discrimination against a low difficulty index, then in addition to investigating ambiguous or incomplete aspects of the definition and the need for further training, the consistency of the definition with the current condition of the health system may need to be investigated. When the consistency is assured and the definition is found to be complete, the difficulty of the indicator for health workers may need to be investigated. Such investigations are expected to reveal the most probable and influential factors behind health workers' confusion. Furthermore, since the discrimination index is not independent of the achievement level of the examinee population [29], the combination of discrimination and difficulty indices can be used for tracking continuous improvement efforts in the system. For example, if the system design is modified while keeping the original intention of the system, new instruction is given to health workers, and their level of understanding increases, then the newly calculated discrimination index would again point to newly identified improvement opportunities and would guide investigations into the most probable and influential factors behind health workers' confusion.
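The guide summarized in Figure 2 can be paraphrased as a small decision function; the cut-off values default to the ones used in this study, and the wording of the suggested countermeasures condenses the text above rather than quoting an official protocol.

def suggest_countermeasure(difficulty, discrimination, diff_cutoff=0.266, disc_cutoff=0.216):
    """Map an item's difficulty and discrimination indices to the first factors to investigate."""
    discriminating = discrimination >= disc_cutoff
    difficult = difficulty < diff_cutoff
    if not discriminating and difficult:
        return ("Check consistency of the definition with current health-system conditions and "
                "check for incomplete or ambiguous wording; if both are confirmed, consider "
                "whether the indicator itself is too difficult and needs re-definition.")
    if discriminating and difficult:
        return "Clarify an incomplete or ambiguous definition, then reinforce instruction."
    if discriminating and not difficult:
        return "Definition is complete but easily misunderstood: strengthen training and instruction."
    return "Understanding is high and uniform: no immediate countermeasure needed."

print(suggest_countermeasure(difficulty=0.12, discrimination=0.05))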
In settings with limited information and communication technology (ICT), manual, paper-based data handling is still used in RHIS, especially in local health systems. In such systems, most data-handling stages depend on human capacity. Even in computerized health information systems, the data entry step cannot be done without human effort. Therefore, control of human errors is one of the key issues to overcome in order to produce quality data in RHIS.
Errors are seen as consequences rather than causes, having their origins in "upstream" systemic factors [14]. The results of our study show that even in an integrated RHIS such as the FHSIS 1996 version in the Philippines, the quality of data for each indicator can be influenced by its characteristics in the case identification stage. More general factors influencing RHIS performance and their hypothetical relationships are described as three determinants, including technical, organizational and behavioral factors [1]. Negative influences of these factors need to be minimized, especially when data-handling of each indicator is done by human effort.
Continuous training is known as an effective method of improving the accuracy of data [30]. However, in low-resource settings, one limitation of a sustainable system operation is the small number of training opportunities [5]. This situation was applicable in our study site in the Philippines. If this is a limitation we could not avoid, the system needs to be designed as simple as possible for health workers while keeping its original purposes. Also, even if enough training opportunities exist, effective training cannot be designed when the standard of the health information system is neither coherent nor continually updated to the current situation of health systems [31]. When the system standard itself contains contradiction or imposes impossible tasks on health workers, we cannot expect that a continuous training approach alone will produce quality data.
In industry, it is well accepted that "quality is built in process [32]". "Process" (a set of interrelated or interacting activities which transform inputs into outputs [33]) is a unit of management that is designed, controlled and improved continuously to produce its intended output. Taking systemic factors into account, the case identification stage needs to be designed to produce its intended output. Further studies are needed of the systemic factors that may affect data quality and process designs that could control them.
Certain limitations of the study should be borne in mind. First, the findings from the questionnaire survey may not be entirely applicable to other areas in Palawan and the Philippines. Data were collected from health workers in municipalities on the mainland and nearby islands of Palawan. These municipalities are more readily accessible from the capital city of Palawan, so they would be expected to have better access to formal instruction in the FHSIS. Second, the possible errors identified through focus group discussions and field visits, and the related systemic factors, may represent only a part of all existing errors and factors, since not all distracters for each indicator may have been identified or reflected in the items. Although multiple answers were allowed when asking about eligible cases in the questionnaire, the difficulty and discrimination indices may change according to the standard of the RHIS and the distracter cases existing in the actual field. Third, the impact of the identified possible errors on the quality of data remains unclear. Quality of data may not be influenced even when health workers misunderstand an indicator, if the respondent is assigned to a place with no cases of the distracters used in the item.

Figure 2. Behavior of difficulty and discrimination indices and possible countermeasures to be considered. The figure provides a guide for investigating causal factors. For example, when an indicator shows non-discrimination together with a low difficulty index, then in addition to investigating ambiguous or incomplete aspects of the definition and the need for further training, the consistency of the definition with the current condition of the health system may need to be examined. When consistency is assured and the definition is found to be complete, the difficulty of the indicator for health workers may need to be investigated. These investigations are expected to identify the most probable and influential factors behind the health workers' confusion.

Fourth, program assignment, number of years of experience and experience of specific formal trainings were asked about in the questionnaire; however, answers were not adequately available. Health workers had difficulty distinguishing formal from informal training. Answers about the assigned program were also omitted because they were obvious to the health workers: in most cases, one midwife is assigned to at least one catchment area. Seven health workers declined to provide their years of experience. However, the length of experience did not appear to correlate with the health workers' total score; the coefficient of correlation between years of experience and total score for the 106 health workers with available data was -0.063. Fifth, the study focuses on conformance of the current condition to the system standard rather than on the fitness of the standard for the use of data. In order to assess the RHIS design in relation to the needs of users, further investigation may be required to gain a better understanding of the use of data in the reality of local health system practice. Nevertheless, our results show that indicators can be characterized by how the health workers understand them.
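The difficulty and discrimination indices discussed above follow standard classical item analysis; a minimal sketch of how such indices are typically computed is given below (the function, the 27% grouping fraction and the variable names are illustrative assumptions, not the study's actual procedure):

```python
import numpy as np

def item_indices(responses, group_frac=0.27):
    """Classical item analysis: difficulty = proportion of respondents answering
    an item correctly; discrimination = difference of that proportion between the
    top- and bottom-scoring groups (a textbook sketch, not the study's instrument)."""
    R = np.asarray(responses, dtype=float)       # respondents x items, 1 = correct, 0 = wrong
    totals = R.sum(axis=1)
    n = max(1, int(group_frac * len(totals)))
    order = np.argsort(totals)
    low, high = R[order[:n]], R[order[-n:]]
    difficulty = R.mean(axis=0)
    discrimination = high.mean(axis=0) - low.mean(axis=0)
    return difficulty, discrimination
```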
Conclusions
Taking systemic factors into account, the case identification step needs to be reviewed and designed in order to generate the intended data in health information systems. The present study described three characteristics of indicator definitions in the case identification step of the RHIS: (1) those unsupported by the current conditions in the health system, i.e., (a) data are required from a facility that cannot directly generate the data and (b) definitions of indicators are not consistent with their corresponding program; (2) those that are incomplete or ambiguous, which allows several interpretations; and (3) those that are complete yet easily misunderstood by health workers. These characteristics highlight the existence of upstream systemic factors that can induce health workers to commit errors.
When attention is given to the systemic factors of human errors, health workers' current capability in the case identification step helps to deepen the understanding of opportunities for improving health information systems. This attention would lead to further discussion of the appropriate levels of authority and of concrete countermeasures that could more effectively and efficiently control systemic factors of human errors in the RHIS. This implication would also be applicable in developed as well as developing countries, wherever tasks in the case identification step of health information systems depend highly on human effort.

Authors' contributions
carried out the data collection and participated in the interpretation of the results. NU participated in the interpretation of the results and helped to draft the manuscript. All authors read and approved the final manuscript.
|
2017-06-22T18:16:26.812Z
|
2011-10-14T00:00:00.000
|
{
"year": 2011,
"sha1": "c2bc31dbb4d34ae342565194a479986db1076f23",
"oa_license": "CCBY",
"oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/1472-6963-11-271",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b168c243b0f3cb2f53f1b17f11ecdb67c81841d1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
118547711
|
pes2o/s2orc
|
v3-fos-license
|
Delocalization of two interacting particles in the two-dimensional Harper model
We study the problem of two interacting particles in a two-dimensional quasiperiodic potential of the Harper model. We consider an amplitude of the quasiperiodic potential such that in the absence of interactions all eigenstates are exponentially localized, while the two interacting particles are delocalized, showing anomalous subdiffusive spreading over the lattice with the spreading exponent $b \approx 0.5$ instead of the usual diffusion with $b=1$. This spreading is stronger than in the case of a correlated disorder potential with a one-particle localization length comparable to that of the quasiperiodic potential. At the same time we do not find signatures of ballistic FIKS pairs existing for two interacting particles in the one-dimensional Harper model.
Introduction
The Harper problem describes the quantum dynamics of an electron in a two-dimensional (2D) potential in a perpendicular magnetic field [1]. It can be reduced to the Schrödinger equation on a discrete quasiperiodic one-dimensional (1D) lattice. This system has fractal spectral properties [2] and demonstrates a Metal-Insulator Transition (MIT), established by Aubry and André [3]. The MIT takes place when the amplitude λ of the quasiperiodic potential (with hopping being unity) is changed from λ < 2 (metallic phase) to λ > 2 (insulator phase). A review of the properties of the Aubry-André model can be found in [4] and the mathematical proof of the MIT is given in [5].
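As a minimal numerical illustration of this transition (a sketch under the standard Aubry-André conventions, not code from [3,4,5]), one can diagonalize the 1D quasiperiodic Hamiltonian and compare the inverse participation ratio of its eigenstates below and above λ = 2:

```python
import numpy as np

def aubry_andre_mean_ipr(lam, N=610, beta=1 / np.sqrt(2)):
    """Diagonalize the 1D Aubry-Andre Hamiltonian (hopping = 1) and return
    the inverse participation ratio averaged over all eigenstates."""
    alpha = 2 * np.pi * (np.sqrt(5) - 1) / 2          # irrational flux (golden mean)
    n = np.arange(N)
    H = np.diag(lam * np.cos(alpha * n + beta))       # quasiperiodic on-site potential
    H -= np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # nearest-neighbor hopping
    _, vecs = np.linalg.eigh(H)
    prob = vecs ** 2                                   # |psi_n|^2 for each eigenstate (columns)
    return np.mean(1.0 / np.sum(prob ** 2, axis=0))

for lam in (1.5, 2.5):
    print(f"lambda = {lam}: mean IPR = {aubry_andre_mean_ipr(lam):.1f}")
# lambda < 2: the IPR grows with N (extended states); lambda > 2: the IPR
# stays of the order of a few sites (exponential localization).
```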
The investigation of interaction effects between particles in the 1D Harper model was started in [6] with the case of Two Interacting Particles (TIP). It was found that the Hubbard interaction can create TIP localized states in the noninteracting metallic phase. Further studies also demonstrated the localization effects in the presence of interactions [7,8]. This trend was opposite to the TIP effect in disordered systems where the interactions increase the TIP localization length in 1D or even lead to delocalization of TIP pairs for dimensions d ≥ 2 [9,10,11,12,13,14,15,16,17,18].
Thus the results obtained in [19] on the appearance of delocalized TIP pairs in the 1D Harper model, for certain particular values of interaction strength and energy, in the regime when all one-particle states are exponentially localized, are really striking. In [19] the delocalization of TIP appears at a relatively strong interaction, which is the reason why this effect was missed in previous studies.
The recent advanced analysis [20] showed that so-called Freed by Interaction Kinetic States (FIKS) appear at various irrational magnetic flux values, being ballistic or quasiballistic over the whole system size N used in the numerical simulations (up to N = 10946). At certain flux values the FIKS pairs appear even at a moderate Hubbard interaction U = 1.75 (hopping is taken as t = 1), and the effect of FIKS pairs becomes stronger for long range interactions [20]. Up to 12% of an initial state, with the TIP close to each other, can be projected onto FIKS pairs escaping ballistically to infinity [20]. This observation points to possible significant applications of FIKS pairs in various physical systems and shows the importance of further investigations of the FIKS effect. Indeed, as shown in [20], the recent experiments with cold atoms on quasiperiodic lattices [21,22,23] should be able to detect FIKS pairs in 1D.
For the TIP effect in disordered systems the dimension plays an important role [10,14,15,17,18] and it is clear that it is important to study the FIKS effect in higher dimensions. We start these investigations here for the two-dimensional (2D) Harper model where the (noninteracting) eigenstates are given by the product of two 1D Harper (noninteracting) eigenstates so that the MIT position for noninteracting states is clearly defined at λ = 2. We note that 2D quasiperiodic lattices of cold atoms have been realized in recent experiments (even if the second dimension was a repetition of 1D lattices) [24] so that there are new possibilities to investigate the FIKS effect with cold atoms when the interaction is taken into account.
The paper is composed as follows: the model description is given in Section 2, the main results are presented in Section 3, discussion of results is given in Section 4. High resolution figures and additional data are available at the web site [25].
Model description
We consider particles in a 2D lattice of size $N_1 \times N_2$, $0 \le x < N_1$ and $0 \le y < N_2$. The one-particle Hamiltonian $h^{(j)}$ for particle $j$ is given by
$h^{(j)} = T^{(j)} + V(x_j, y_j)$, (1)
$T^{(j)} = -\sum_{x,y} \big( |x,y\rangle_j \langle x+1, y|_j + |x,y\rangle_j \langle x, y+1|_j + {\rm h.c.} \big)$. (2)
The point $(x_0, y_0) = (N_1/2, N_2/2)$ is the "center point" of the lattice and the offsets $x - x_0$ or $y - y_0$ in the arguments of $V_1$ ensure that the potential has locally the same structure for the region close to the center point when varying the system size $N_1 \times N_2$. The kinetic energy $T^{(j)}$ is given by the standard tight-binding model in two dimensions with hopping elements $t = -1$ linking nearest neighbor sites with periodic boundary conditions, i.e. $x+1$ (or $y+1$) in (2) is taken modulo $N_1$ (or $N_2$). Note that the potential is of the form
$V(x, y) = V_1(x - x_0) + V_2(y - y_0)$, (3)
where $V_1(x)$, $V_2(y)$ are effective one-dimensional potentials. In this work we study essentially the quasiperiodic case with $V_1(x) = \lambda_x \cos(\alpha x + \beta)$, $V_2(y) = \lambda_y \cos(\alpha y + \beta)$ and here mostly $\lambda_x = \lambda_y = \lambda = 2.5$. Furthermore we choose $\alpha = 2\pi(\sqrt{5}-1)/2$, with $(\sqrt{5}-1)/2 \approx 0.61803$ being the golden ratio, and $\beta = 1/\sqrt{2}$. For these parameters the one-dimensional eigenfunctions (with the $V_1$ potential) are localized with a one-dimensional localization length $\ell = 1/\log(\lambda/2) \approx 4.48$ (see e.g. [4,20]). For the purpose of comparison we also study the disorder case with a random potential $V_1(x)$ uniformly distributed in $[-W/2, W/2]$ and the same random realization for $V_2(y)$. For this case we choose $W = 5$, corresponding to the localization length $\ell \approx 105/W^2 \approx 4.2$, which is quite close to the localization length of the quasiperiodic case for $\lambda = 2.5$. The particular structure of $V$ implies that for both cases the eigenfunctions of $h^{(j)}$ are products of one-dimensional localized eigenstates in $x$ and $y$ with the potential $V_1(x - x_0)$ or $V_2(y - y_0)$.
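A small sketch of how the separable potential of Eq. (3) can be generated for both the quasiperiodic and the correlated disorder cases (the helper function and parameter names are illustrative, not the authors' code):

```python
import numpy as np

def separable_potential(N1=128, N2=128, lam=2.5, W=None, seed=0):
    """Return V[x, y] = V1(x - x0) + V2(y - y0) as in Eq. (3): quasiperiodic
    for W=None, otherwise correlated disorder (same 1D realization in x and y)."""
    alpha = 2 * np.pi * (np.sqrt(5) - 1) / 2
    beta = 1 / np.sqrt(2)
    x0, y0 = N1 // 2, N2 // 2
    dx = np.arange(N1) - x0
    dy = np.arange(N2) - y0
    if W is None:
        V1 = lam * np.cos(alpha * dx + beta)
        V2 = lam * np.cos(alpha * dy + beta)
    else:
        rng = np.random.default_rng(seed)
        V = rng.uniform(-W / 2, W / 2, size=max(N1, N2))
        V1, V2 = V[:N1], V[:N2]          # identical 1D realization in both directions
    return V1[:, None] + V2[None, :]

V = separable_potential()                 # quasiperiodic case, lambda = 2.5
ell = 1.0 / np.log(2.5 / 2.0)             # 1D localization length ~ 4.48
print(V.shape, round(ell, 2))
```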
We note that for the disorder case the potential $V(x, y)$ is, due to the particular sum structure in Eq. (3), very different from the standard two-dimensional Anderson disorder model. In the latter case $V(x, y)$ would be independent random variables for each value of $(x, y)$, while in our case $V(x, y)$ is a sum of two one-dimensional disorder potentials providing certain spatial correlations in the potential, which are crucial for the value of the quite small localization length.
We now consider two interacting particles, each of them submitted to the one-particle Hamiltonian $h^{(j)}$, and coupled by an interaction potential $U(x_1, y_1, x_2, y_2)$ which has a non-vanishing value $U$ only for $|x_1 - x_2| < U_R$ and $|y_1 - y_2| < U_R$ [26]. Here $U$ denotes the interaction strength and $U_R$ is the interaction range. The total two-particle Hamiltonian is given by
$H = h^{(1)} + h^{(2)} + \hat U$, (5)
where $\hat U$ is the interaction operator in the two-particle Hilbert space with diagonal entries $U(x_1, y_1, x_2, y_2)$. In this work we consider two cases with $U_R = 1$, corresponding to Hubbard on-site interaction, and $U_R = 2$, corresponding to a short range interaction with 9 neighboring sites coupled by the interaction.
The eigenfunctions of $H$ are either symmetric with respect to particle permutation (boson case) or anti-symmetric (fermion case), corresponding to a decomposition of the Hilbert space into a boson and a fermion subspace. However, in this work we prefer to work in the complete space (of dimension $N_1^2 N_2^2$) due to the employed numerical method to determine the time evolution of the wave function. The evolution is described by the time-dependent Schrödinger equation (with $\hbar = 1$)
$i\,\partial_t\,|\psi(t)\rangle = H\,|\psi(t)\rangle$. (6)
The symmetry of the state $|\psi(t)\rangle$ is simply fixed by the symmetry of the initial condition, which is conserved by the Schrödinger equation and which we choose corresponding to both particles being localized on the same center point with $x_0 = N_1/2$ and $y_0 = N_2/2$. As already noted, in the absence of interaction, i.e. $U = 0$, the eigenstates are localized with a typical localization length $\ell$ (in each direction). Thus, our aim is to study whether the interaction leads to a delocalization of TIP during the time evolution or to some kind of diffusion of TIP in coordinate or Hilbert space.
To solve (6) numerically we write $H = H_x + H_p$ as a sum of two parts which are either diagonal in position space, $H_x = V^{(1)} + V^{(2)} + \hat U$, or in momentum space, $H_p = T^{(1)} + T^{(2)}$, and evaluate the solution of (6) as
$|\psi(t)\rangle = \exp(-iHt)\,|\psi(0)\rangle$ (7)
$\approx \big(O_p\,O_x\big)^{t/\Delta t}\,|\psi(0)\rangle$, (8)
using the Trotter formula approximation
$\exp(-iH\,\Delta t) \approx O_p\,O_x$, with $O_x = \exp(-iH_x\,\Delta t)$ and $O_p = \exp(-iH_p\,\Delta t)$, (9)
with two unitary operators $O_p$ and $O_x$. The integration time step $\Delta t$ is supposed to be small as compared to typical inverse energy scales and the value of $t$ is chosen such that $t/\Delta t$ is an integer. Formally, Eq. (9) becomes exact in the limit $\Delta t \to 0$. However, a finite value of $\Delta t$ implies a modification of the Hamiltonian $H \to \tilde H$, with $\tilde H$ defined by $O_p O_x = \exp(-i\tilde H\,\Delta t)$ and related to $H$ by a power law expansion in $\Delta t$ where the corrections are given as (higher order) commutators between $H_x$ and $H_p$. In this work we choose the value $\Delta t = 0.1$ but we have verified for certain parameter values that the results presented below do not change significantly if compared with $\Delta t = 0.05$. The efficiency and stability of this type of integration method have been demonstrated in [9,20,27,28].
The operators $O_x$ and $O_p$ are either diagonal in position representation or in momentum representation. In order to evaluate (8) using (9) we first apply the operator $O_x$ to the initial state given in position representation, which can be done efficiently with $N_{\rm tot} = N_1^2 N_2^2$ operations by multiplying the eigenphases of $O_x$ to each component of the state. Then the state is transformed to momentum representation using a fast Fourier transform in the four-dimensional configuration space (corresponding to two particles in two dimensions) with the help of the library FFTW [29], which requires about $N_{\rm tot}(\log N_1 + \log N_2)$ operations. At this point we can efficiently apply the operator $O_p$ to the state, again by multiplying the eigenphases to each component of the state, and finally we apply the inverse Fourier transform to come back to position representation. The eigenphases of $O_x$ and $O_p$ can be calculated and stored in advance.
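The following sketch illustrates the split-step scheme of Eqs. (8)-(9) with numpy FFTs instead of FFTW, for small lattices only (the 128^4-dimensional Hilbert space of the largest runs clearly requires the optimized implementation described above); all function and variable names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def evolve_tip(V, U=3.5, UR=1, dt=0.1, nsteps=1000):
    """Trotter split-step evolution of the TIP wave function psi[x1, y1, x2, y2]:
    the diagonal part (potential + interaction) is applied in position space,
    the kinetic part in momentum space via a 4D FFT."""
    N1, N2 = V.shape
    x = np.arange(N1)
    y = np.arange(N2)
    # interaction: strength U whenever |x1 - x2| < UR and |y1 - y2| < UR
    near_x = np.abs(x[:, None] - x[None, :]) < UR
    near_y = np.abs(y[:, None] - y[None, :]) < UR
    Uint = U * (near_x[:, None, :, None] & near_y[None, :, None, :])
    Hx = V[:, :, None, None] + V[None, None, :, :] + Uint          # diagonal in position space
    Ox = np.exp(-1j * dt * Hx)
    # kinetic part: dispersion 2t(cos kx + cos ky) per particle with t = -1
    kx = 2 * np.pi * np.fft.fftfreq(N1)
    ky = 2 * np.pi * np.fft.fftfreq(N2)
    eps = -2.0 * (np.cos(kx)[:, None] + np.cos(ky)[None, :])
    Hp = eps[:, :, None, None] + eps[None, None, :, :]             # diagonal in momentum space
    Op = np.exp(-1j * dt * Hp)
    # both particles start on the center site (symmetric, bosonic initial state)
    psi = np.zeros((N1, N2, N1, N2), dtype=complex)
    psi[N1 // 2, N2 // 2, N1 // 2, N2 // 2] = 1.0
    for _ in range(nsteps):
        psi = Ox * psi                                 # O_x: eigenphases in position space
        psi = np.fft.fftn(psi)                         # to momentum representation
        psi = Op * psi                                 # O_p: eigenphases in momentum space
        psi = np.fft.ifftn(psi)                        # back to position representation
    return psi

# usable only for small lattices, e.g. evolve_tip(separable_potential(16, 16), nsteps=100)
```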
We determine the time evolution of $|\psi(t)\rangle$ using Eq. (9) for different square and rectangular geometries with system sizes up to $128 \times 128$ (i.e. $N_1 = N_2 = 128$) or $1024 \times 8$ (i.e. $N_1 = 1024$, $N_2 = 8$). At $N_1 = N_2 = 128$ the Hilbert space of the whole system becomes as large as $N_H = N_1^4 \approx 2.7 \times 10^8$. In order to analyze the structure of the TIP state we introduce different quantities and densities described below.
First let us denote by
$\psi(x_1, y_1, x_2, y_2) = \langle x_1, y_1, x_2, y_2 | \psi \rangle$ (10)
the (non-symmetrized) two-particle wave function, where for simplicity we omit the argument for the time dependence. Then the one-particle density $\rho_1(x, y)$ in 2D is defined as
$\rho_1(x, y) = \sum_{x_2, y_2} |\psi(x, y, x_2, y_2)|^2$. (11)
We note that the normalization of the state $|\psi\rangle$ implies $\sum_{x,y} \rho_1(x, y) = 1$. Using this one-particle density we define the variance with respect to the center point $(x_0, y_0)$ by
$\langle r^2 \rangle = \sum_{x,y} \big[(x - x_0)^2 + (y - y_0)^2\big]\,\rho_1(x, y)$ (12)
and also the inverse participation ratio (IPR) "without center" by
$\xi_{\rm IPR} = \big(\sum_{(x,y) \in S} \rho_1(x, y)\big)^2 / \sum_{(x,y) \in S} \rho_1^2(x, y)$, (13)
where the sums run over the set (14) containing only lattice sites $(x, y)$ outside the center rectangle of (linear) size 20% around the center point $(x_0, y_0)$. This kind of definition of the IPR allows one to detect a particular kind of partial delocalization where only a small fraction of probability diffuses to large distances with respect to the center point while the remaining probability stays strongly localized close to the center point. This quantity was already used with success in our studies of FIKS pairs in [20] for the 1D TIP Harper problem. Using the standard definition of the IPR (where $S$ would be the set of all lattice sites) allows one only to detect a strong delocalization of the full probability. For the variance $\langle r^2 \rangle$ the contribution of the probability at the initial state is not so pronounced and thus we compute this quantity for the whole lattice. We furthermore introduce the following densities:
$\rho_x(x) = \sum_y \rho_1(x, y)$, (15)
$\rho_y(y) = \sum_x \rho_1(x, y)$, (16)
$\rho_{xx}(x_1, x_2) = \sum_{y_1, y_2} |\psi(x_1, y_1, x_2, y_2)|^2$, (17)
$\rho_{\rm lin}(s) = \sum_{|x - x_0| + |y - y_0| = s} \rho_1(x, y)$. (18)
The density $\rho_x(x)$ (or $\rho_y(y)$) is simply the one-particle density integrated over the $y$-direction (or $x$-direction). $\rho_{xx}(x_1, x_2)$ is the two-particle density integrated over both $y$-directions, giving information about the spatial correlations of both particles in the $x$-direction. Here $\rho_{\rm lin}(s)$ is the linear density obtained from the one-particle density by summing over all sites with the same (1-norm) distance $s = |x - x_0| + |y - y_0|$ from the center point and is well defined for $0 \le s < (N_1 + N_2)/2$. This density is similar in spirit to a radial density obtained by integrating over all points with the same distance from the center point. However, using the 1-norm (and not the Euclidean 2-norm) to measure the distance is both more convenient for the practical calculation and actually physically more relevant for the case where the wave function is similar to a product of two exponentially localized functions in $x$ and $y$ with the same localization length $\ell$.
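A compact sketch of how the quantities (10)-(18) can be evaluated from a stored wave function psi[x1, y1, x2, y2] (again an illustrative helper under the definitions above, not the authors' analysis code):

```python
import numpy as np

def tip_observables(psi, frac=0.2):
    """Evaluate rho_1, <r^2>, xi_IPR (without center), rho_x, rho_xx and rho_lin
    for a TIP wave function psi[x1, y1, x2, y2]."""
    N1, N2 = psi.shape[0], psi.shape[1]
    x0, y0 = N1 // 2, N2 // 2
    prob = np.abs(psi) ** 2
    rho1 = prob.sum(axis=(2, 3))                                 # Eq. (11)
    dx = np.arange(N1) - x0
    dy = np.arange(N2) - y0
    r2 = np.sum((dx[:, None] ** 2 + dy[None, :] ** 2) * rho1)   # Eq. (12)
    # set S: sites outside the central rectangle of linear size frac (20%)
    keep = np.ones((N1, N2), dtype=bool)
    hx, hy = int(frac * N1 / 2), int(frac * N2 / 2)
    keep[x0 - hx:x0 + hx, y0 - hy:y0 + hy] = False
    xi_ipr = rho1[keep].sum() ** 2 / np.sum(rho1[keep] ** 2)    # Eq. (13)
    rho_x = rho1.sum(axis=1)                                     # Eq. (15)
    rho_xx = prob.sum(axis=(1, 3))                               # Eq. (17)
    s = np.abs(dx)[:, None] + np.abs(dy)[None, :]                # 1-norm distance to center
    rho_lin = np.bincount(s.ravel(), weights=rho1.ravel())       # Eq. (18)
    return r2, xi_ipr, rho_x, rho_xx, rho_lin
```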
Time evolution results
As in [20] we first determine the most promising values of the interaction strength U by computing r 2 and ξ IPR at a certain large t. Here we use a moderate system size since computations should be done for many values of U at U R = 1 (Hubbard interaction) and U R = 2 (9 nearest sites coupled on a square lattice). The results are presented in Fig. 1. We see that there are regions of U where the values of r 2 are by a factor 4 − 10 larger than in the case of U = 0 where r 2 ≈ 10 (see Fig. 2). However, in contrast to the 1D TIP Harper model [20] there are no sharp peaks in U except maybe at U = 3.5 for U R = 2. In the following, we choose this value for a more detailed analysis at larger sizes N 1 , N 2 and larger times t. However, we have also studied some other U values, e. g. U = 6 with qualitatively similar results but typically with less delocalization than the most interesting value U = 3.5.
In Fig. 2 we show, for $U = 3.5$, the two values $U_R = 1$ and $U_R = 2$ and different geometries, the time dependence of $\langle r^2 \rangle$ and $\xi_{\rm IPR}$. All the cases with a square geometry $N_1 = N_2$ show an unlimited growth of these two quantities up to the largest times $t = 10^5$ reached in our numerical simulations. For the Hubbard case at $U_R = 1$ the system size is sufficiently large to avoid saturation effects due to the finite system size, and the change of size from $N_1 = N_2 = 96$ to $128$ does not affect the values of $\langle r^2 \rangle$ and $\xi_{\rm IPR}$ at $U = 3.5$. For $U_R = 2$ we have larger values of $\langle r^2 \rangle$ and $\xi_{\rm IPR}$ and it is clear that the size $N_1 = N_2 = 96$ is sufficiently large only up to $t \approx 10^4$, while for $N_1 = N_2 = 128$ the size is sufficient only up to $t \approx 3 \times 10^4$, with a finite-size induced saturation of growth for $3 \times 10^4 < t \le 10^5$.
For comparison, we also present in Fig. 2 the same quantities for the case of the particular disordered potential described in Section 2. For this we use the same interaction strength U = 3.5 and the disorder parameter W = 5 which gives approximately the same localization length in 1D as for the 1D Harper model at λ = 2.5 (however, for the usual 2D Anderson model we would have a significantly larger value of the one-particle IPR ξ ≈ 150, see e.g. Fig. 2 in [30]). For U R = 2 and t > 10 2 both the absolute values and the growth rates of r 2 and ξ IPR for the disorder case are significantly lower as compared to the 2D Harper model. For U R = 1 the disorder values of the variance are above the variance values of the 2D Harper model, for the time interval 10 ≤ t ≤ 10 5 shown in the figure, but the curve for the Harper case has a stronger growth rate (larger slope).
Actually, according to Fig. 2 the two curves for $\langle r^2 \rangle$ seem to intersect at a certain time $t_{\rm int}$ and therefore we expect the variance of the 2D Harper model to become stronger than the variance of the disorder case for $t > t_{\rm int}$. From the figure it seems that $t_{\rm int}$ is close to or slightly below $10^5$ but this is only due to the rather thick data points and the logarithmic scale. A careful analysis of the data (higher resolution figure and more precise extrapolation of both curves using power law fits for $10^4 \le t \le 10^5$) shows that the intersection point is likely to be close to the value $t_{\rm int} \approx 2.4 \times 10^5$. For $U_R = 1$, the other quantity $\xi_{\rm IPR}$ for the disorder case is clearly below the curve of the Harper model. Our interpretation is that apparently for TIP in the disorder case there is a relatively strong initial spreading at short times over a modest length scale but involving a strong weight of the wave packet, while for the Harper case there is a slower but long range delocalization of a smaller weight of the wave packet, which is better visible from the IPR $\xi_{\rm IPR}$ without the center rectangle. (This kind of "long range small weight" delocalization was also found for the FIKS pairs of the TIP 1D Harper model [20], but there the growth rate is actually ballistic, corresponding to power law exponents $b_{1,2} \approx 2$, and not subdiffusive.) The lower growth rate for the disorder case at both values of $U_R$ is also clearly confirmed by the power law fits which provide (for the same time and size ranges as for the Harper case) the exponents $b_1 = 0.218 \pm 0.005$, $b_2 = 0.404 \pm 0.035$ for $U_R = 1$ and $b_1 = 0.181 \pm 0.007$, $b_2 = 0.302 \pm 0.009$ for $U_R = 2$.
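The quoted exponents correspond to power law fits of the form $\langle r^2 \rangle \propto t^{b_1}$ and $\xi_{\rm IPR} \propto t^{b_2}$; a minimal fitting sketch (assuming time series recorded, e.g., by the evolution sketch above; the helper name is illustrative) is:

```python
import numpy as np

def spreading_exponent(t, q, t_min=10.0, t_max=1.0e5):
    """Fit q(t) ~ a * t**b on a log-log scale over [t_min, t_max]; return (b, a)."""
    t, q = np.asarray(t, float), np.asarray(q, float)
    sel = (t >= t_min) & (t <= t_max)
    b, log_a = np.polyfit(np.log(t[sel]), np.log(q[sel]), 1)
    return b, np.exp(log_a)

# hypothetical usage with recorded time series:
# b1, _ = spreading_exponent(times, r2_series)     # exponent of <r^2>
# b2, _ = spreading_exponent(times, ipr_series)    # exponent of xi_IPR
```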
In Fig. 2 we also consider the case of two rectangular geometries with $N_1 = 1024$ or $N_1 = 512$ and $N_2 = 8$. In this case there is a clear saturation of growth of the considered variables independent of the system size. These data show that for $N_2 \sim \ell$ we have a localization of TIP in the quasi-1D Harper model at the considered interaction strength. However, this result does not exclude the possibility of the appearance of FIKS pairs in the quasi-1D limit at other interaction values, even if our preliminary tests indicate similar localization results.
The time evolution of the projected one-particle probability distribution $\rho_x(x)$ is shown in Fig. 3. For the square geometry $N_1 = N_2 = 128$ the width of the distribution grows with time and becomes practically flat at the maximal time $t = 10^5$ for both values $U_R = 1$ and $U_R = 2$. In the case of disorder we also have a significant spreading of probability over lattice sites which is somewhat comparable with that of the 2D Harper case. For the rectangular geometry we have a significantly larger probability on the tails for the 2D Harper model as compared to the disorder case. This is in agreement with the data for $\langle r^2 \rangle$ in Fig. 2 (bottom left panel).
These results show that there are no ballistic type FIKS pairs propagating through the whole system as it was the case for TIP in the 1D Harper model [19,20]. Such a conclusion is confirmed by the analysis of the time evolution of the linear density ρ lin (s) defined in (18) as shown in Fig. 4. The typical width of this density does not increase linearly in time in contrast to the 1D Harper case (see e.g. Fig. 3 in [20]) and we have in Fig. 4 (for the square geometry cases) curves in the (s, t)-plane, corresponding to a subdiffusive spreading s 2 ∼ t b with an exponent b ∼ 0.5. For the disorder case (with square geometry) the corresponding curves of Fig. 4 are also in a qualitative agreement with the reduced exponent b ∼ 0.2 found above by the fit of r 2 . Concerning the rectangular geometries the curves visible in Fig. 4 show saturation also in agreement with Fig. 2 even though for the quasiperiodic potential the tails of the distribution (visible by light blue zones) still continue to increase which is also quite in agreement with the bottom panel of Fig. 3. The one-particle density ρ 1 (x, y) for the square geometry 128×128 and U R = 1 (or U R = 2) is shown at different moments of time in the left column of Fig. 5 (Fig. 7) for the 2D Harper case and of Fig. 6 (Fig. 8) for the disorder case. The relative distribution of TIP probability in the (x 1 , x 2 )-plane, i. e. the quantity ρ xx (x 1 , x 2 ) defined by (17), is shown for the same parameters in the right columns of these figures.
Fig. 4. Density plot of the time evolution of the linear density $\rho_{\rm lin}(s)$. The vertical axis corresponds to the iteration time $0 \le t \le 10000$ and the horizontal axis corresponds to $0 \le s < (N_1 + N_2)/2$. The left column corresponds to the quasiperiodic potential ($\lambda = 2.5$) and the right column to the disorder case ($W = 5$). All panels correspond to the interaction strength $U = 3.5$. Top (center) panels correspond to $U_R = 1$ ($U_R = 2$) and the square geometry $N_1 = N_2 = 128$. Bottom panels correspond to $U_R = 2$ and the rectangular geometry $N_1 = 512$, $N_2 = 8$. The color codes of the density plot correspond to red for maximum, green for medium and blue for minimum values.

Fig. 5. Density plots of $\rho_1(x, y)$ (left column) and $\rho_{xx}(x_1, x_2)$ (right column) with $x$ (or $x_1$) for the horizontal axis and $y$ (or $x_2$) for the vertical axis. All panels correspond to $U = 3.5$, $U_R = 1$ and the square geometry $N_1 = N_2 = 128$ with the quasiperiodic potential ($\lambda = 2.5$). The different rows correspond to the iteration time $t = 100$ (first row), $t = 1000$ (second row), $t = 10000$ (third row) and $t = 100000$ (fourth row).

There is a clear spreading of probability in the $(x, y)$-plane growing with time. At the largest times $t = 10^5$ this spreading starts to saturate due to the finite system size and a part of the probability returns back due to the periodic boundary conditions. This is especially visible in the $(x_1, x_2)$-plane with significant contributions in the corners $x_1 = 0$, $x_2 = N_2 - 1$ and $x_1 = N_1 - 1$, $x_2 = 0$, while at shorter times $t \le 10^4$ the distribution has a well pronounced "cigar" shape corresponding to the TIP remaining close to each other. We note that for the Harper case the probability distribution inside this cigar is more homogeneous while for the disorder case there is a well visible cross structure which we attribute to the fact that we have the same disorder structure in the $x$ and $y$ directions. In principle, the same is true for the 2D Harper case but it is possible that there the localization seems to be better preserved (the cigar is more narrow). Indeed, for the usual 2D uncorrelated disorder the one-particle localization length at $W = 5$ is significantly larger as compared to the case of the particular correlated disorder considered here (see e.g. [30]). In the presence of interactions the separability of the correlated disorder is broken, which can lead to an additional increase of the TIP spreading. Indeed, the width of the cigar in the above figures is larger for the disorder case.
The comparison of Figs. 5 and 6 also confirms the above observation that for $U_R = 1$ the quantity $\langle r^2 \rangle$ is initially (for $t = 100$ and $t = 1000$) significantly larger for the disorder case (Fig. 6) than for the Harper case (Fig. 5). However, the cross structure visible in Fig. 6 clearly shows that this stronger initial delocalization for the disorder case is mostly due to stronger individual propagation of one particle in one direction, and the coherent propagation of TIP sets in at later times, while for the Harper case the coherent TIP propagation is already important at the beginning and dominates the spreading of $\langle r^2 \rangle$. We believe that the stronger statistical fluctuations of the one-particle 1D localization length for the disorder case are partly responsible for this observation. We recall that for the 1D Harper model the one-particle 1D localization length is practically constant for all eigenstates, while for the disorder case there are considerable statistical fluctuations, even for one-particle 1D eigenstates of similar energy.
The probability distributions for the rectangular geometry are shown in Fig. 9. In this case the width of the cigar is also smaller for the 2D Harper potential as compared to the disorder case. The density at $t = 10^4$ gives some weak indication of the presence of probability far away, at large distances $x_1 = x_2 \approx N_1$, which would be expected for ballistic FIKS pairs. However, the probability there is very small and also at $t = 10^5$ both cases show similar probability profiles corresponding to localization of the wave packet.
Finally, in Fig. 10 we consider an asymmetric case of the 2D Harper model with $\lambda_x = 2.5$, $\lambda_y = 3.5$, $N_1 = 128$ and $N_2 = 48$. Here we have a significantly stronger localization of non-interacting particles in the $y$-direction with $\ell_y = 1/\log(\lambda_y/2) \approx 1.79$. Thus we could expect the appearance of 1D ballistic FIKS pairs in such a case. However, this scenario is not confirmed by the data, which still give a subdiffusive spreading with the fit exponents $b_1 = 0.563 \pm 0.004$ and $b_2 = 0.431 \pm 0.016$ for the time range $10 \le t \le 1000$ and the power law fits $\langle r^2 \rangle \propto t^{b_1}$ and $\xi_{\rm IPR} \propto t^{b_2}$. The probability distribution in $x$ becomes rather broad at large times $t = 10^5$ and it is possible that even larger system sizes are required to firmly state whether this subdiffusion continues at longer times. Furthermore, the density $\rho_y(y)$ does not show a strong localization in the $y$-direction in the presence of interaction, despite the very small value of $\ell_y$, and there are quite large tails of $\rho_y(y)$ for $y$ close to the transversal boundaries. Therefore the scenario of an effective 1D situation in $x$ due to strong $y$-localization does not really happen, thus explaining why we have no visible indications of FIKS pairs in such an asymmetric situation.
Discussion
We presented here the study of interaction effects in the 2D Harper model where the two-dimensional quasiperiodic potential is given as the sum of two one-dimensional quasiperiodic potentials for the x and the y direction. Our results show that in this system the interactions induce a subdiffusive spreading over the whole lattice with the spreading exponent being approximately b ≈ 0.5 for the second moment and IPR. Such a delocalization takes place in the regime when all one-particle eigenstates are exponentially localized. In this 2D TIP Harper model we do not find signs of ballistic FIKS pairs, which are well visible for the 1D TIP Harper case [19,20].
It is possible that the physical reason for the absence of FIKS pairs in the 2D Harper model is related to the fact that for TIP in 2D we have a much denser spectrum of non-interacting eigenstates [see e.g. Eq. (29) in [20], where the indexes $m_1$, $m_2$ of non-interacting eigenstates of two particles now become vectors in 2D]. Due to this there are practically no well separated energy bands typical of the one-particle 1D Harper model and thus there is little chance to have an effective Aubry-André Hamiltonian with $\lambda_{\rm eff}$ and interaction-induced hopping matrix elements $t_{\rm eff}$ generating a metallic phase with $\lambda_{\rm eff} < 2 t_{\rm eff}$. Of course, there is still a possibility that we missed some FIKS cases at specific $U$ values, but for all studied cases of TIP in the 2D Harper model we find a subdiffusive spreading, qualitatively different from the FIKS effect in the 1D Harper case. For a rectangular geometry with a narrow size band in one direction we even obtain a localization of the TIP spreading.
When the quasi-periodic potential is replaced by a disorder potential of the particular form (4) we also find a subdiffusive spreading but with a smaller exponent $b \approx 0.25$ (on the available time range and system size). In principle, for TIP in the 2D disorder potential we expect to have localized states for short range interactions [10,15,17]. However, here we consider a particular correlated disorder (with a potential being a sum of two one-dimensional potentials in $x$ and $y$) and in such a case the one-particle localization length at $W = 5$ ($\ell_1 \approx \xi \approx 5$) is significantly smaller than for the usual 2D disorder potential (see e.g. [30] with $\xi \approx 150$). We think that in the presence of interactions and for sufficient iteration times such correlations of the disorder are suppressed and we have a situation similar to the TIP case of the usual 2D Anderson model, where at $W = 5$ the one-particle localization length $\ell_1$ is rather large and thus the TIP localization length $\ell_2$, expected to be exponentially large in $\ell_1$ [10,15], is also very large ($\ln \ell_2 \sim \ell_1$) and is not reachable at the time scales and system sizes used in our studies. In any case the smaller value of $b \approx 0.25$ for the disorder case, compared to the 2D Harper case with $b \approx 0.5$, indicates that some residual effects of FIKS pairs give a stronger delocalization of TIP for the 2D Harper model.

Fig. 9. Density plot of the density $\rho_{xx}(x_1, x_2)$ with $x_1$ for the horizontal axis and $x_2$ for the vertical axis. All panels correspond to $U = 3.5$, $U_R = 2$ and the rectangular geometry $N_1 = 512$, $N_2 = 8$. The left column corresponds to the quasiperiodic potential ($\lambda = 2.5$) and the right column to the disorder potential ($W = 5$). The different rows correspond to the iteration time $t = 100$ (first top row), $t = 1000$ (second row), $t = 10000$ (third row) and $t = 100000$ (fourth bottom row).

Fig. 10. Both quantities ($\langle r^2 \rangle$ and $\xi_{\rm IPR}$) are shown by the blue line and the green line shows for comparison a power law $\sim t^{1/2}$. The bottom left (right) panel shows for the same parameters the density $\rho_x(x)$ (or $\rho_y(y)$) versus $x$ (or $y$) in a semilogarithmic representation. The color labels correspond to different iteration times: $t = 100$ (red curve), $t = 1000$ (green curve), $t = 10000$ (blue curve), $t = 100000$ (pink curve).
It is interesting to note that a somewhat similar subdiffusive spreading appears in the 2D Anderson model with a mean-field type nonlinearity (see e.g. [28]), although there the value of the spreading exponent $b \approx 0.25$ is smaller (the value $b \approx 0.5$ found here is more similar to the 1D Anderson model with nonlinearity studied in [27,31]). However, the physical origin of a certain similarity of these nonlinear mean-field models with the TIP case studied here remains unclear, since here we have a linear Schrödinger equation while the models of [27,28,31] are described by classical nonlinear equations (second quantization is absent).
We think that the 2D TIP Harper model provides new interesting results with subdiffusive spreading induced by interactions. This model raises new challenges for the advanced mathematical methods developed for quasiperiodic Schrödinger operators [32,33]. It is also accessible to experimental investigations with ultracold atoms in 2D quasiperiodic optical lattices, which can now be built experimentally [24]. Thus we hope that the TIP problem in 1D and 2D Harper models will attract further detailed theoretical and experimental investigations.
This work was granted access to the HPC resources of CALMIP (Toulouse) under the allocation 2015-P0110.
|
2015-10-05T11:22:04.000Z
|
2015-10-05T00:00:00.000
|
{
"year": 2015,
"sha1": "0a3ee69b4e3822850bb3cd559549f6aeff46628b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1510.01104",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0a3ee69b4e3822850bb3cd559549f6aeff46628b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
120439626
|
pes2o/s2orc
|
v3-fos-license
|
Errors in Translation: A Tool for Linguistic and Socio-cultural Competence
One of the important insights of recent translation studies research is that errors are a consequence of both linguistic and cultural misconceptions. In recent years translation studies have become increasingly involved in a quest for explanations of all phenomena associated with meaning interaction, and much detailed research has been attempted at most translation stages. Whole solid books have been written on specific topics, but such books cannot include all the variations that different manifestations of language might adopt. As far as we understand, these theoretical ideas have not normally been applied to translations so far, and when they have, the explanations and descriptive interpretations given sound rather artificial and unsatisfactory. We intend to propose an analytic approach to solving problems in translation based upon the principles of identity or equivalence, the main ideas of which might be suitable both for research and for tuition purposes. In this paper, clear-cut distinctions between canonical and non-canonical expressions, collocations and idiomatic expressions are summed up succinctly, both for language explanations and for translation analyses, since a good number of realizations belonging to the Gray Areas (GA) of language may arise from recurrent combinations of specific types of combined lexical items. The resultant lack of interaction between L1 propositions and L2 representations, identical or not, is often explainable and clarified by the Error Analysis (EA) method. The data collected and analysed here have been chosen at random.
Introduction
Numerous debates about translation proposals have been produced throughout history. It has often been said that no text can be analysed by itself, far from the culture in which it was conceived, far from the period in which it was written. Thus, contemporary scholars agree that at least three factors are to be taken into account: the linguistic, cultural and time dimensions. However, of the three, the first two were given more attention from a theoretical point of view, and this gave rise to two different approaches, upheld in the first case by Nida, Wilss and Steiner among others, and in the second by A. Lefevere, Even Zohar, Vermeer, Nord, Höning, etc. The first group, following the 'universals' linguistic theory, insists on the fact that there exists a set of universal syntagmatic structures applicable to any language indistinctly. The second seeks support for resolving formal linguistic problems in the fact that there exist intercultural factors which differ greatly from one culture to another i. This dichotomy, however, as usual as far as human production is concerned, though perhaps more elaborated, is nothing new, since devoted scholars in this field of research such as St. Jerome, Luther, Dryden, Montaigne, Tytler, Schleiermacher, Beatriz de Luna and Isotta Nogarola, or distinguished figures who also dedicated a good part of their lives to the art of translation such as Goethe, Mallarmé, Cortázar, Menéndez y Pelayo and Gómez de la Serna, among others, have had this double conception in their minds as the most relevant concern within this field of studies (Oro, 2001: 1-12).
During the XIIth and XIIIth centuries the cosmopolitan Toledo School of Translators (a group of scholars who translated scientific and philosophical Arabic works ii into Latin and Spanish) was a prestigious scientific centre grouping translators from the three relevant religions of the period: Islamic, Hebrew and Christian. The work done there at that time embraces science and literature iii but also religious works; for example, one important achievement was the translation of the Mi'ray or Mahoma's Scale iv both into Castilian and Latin, or the translation of the Coran by Herman the Dalmat, the Englishman Robert of Chester and the Spaniard Pedro de Toledo. It is compulsory to mention Abū l-Walīd Muhammad ibn Ahmad ibn Muhammad ibn Rushd, known as Averroes v (1126 - 10 December 1198), the foremost scholar of Aristotle of that period, but there were also other important Arabic scholars such as the Persian Avicenna vi, al-Fārābī vii, or the Jewish philosopher and physician Moses Maimonides. This important work had a double dimension: the translation of works of the Islamic culture proper and the recovery of classical works lost in the Occident, which were recovered through Arabic translations.
One of the main contributors to this double dichotomy and to other translation principles through which contemporary scholars try to set their approaches to this hard discipline of studies was Saint Jerome (4th century). His methodology set the fundamental laws, especially the one concerning sense. However, even though for him the act of translating general texts would not consist only in transposing words from one language to another (i.e. the process needs to go further than that; translation becomes an act of interpretation - 'ut interpres, ut orator'), he would not apply this principle to sacred texts (for example, the Sacred Writings; in this case, as they perform an act of faith, the possibility of interpreting is not permitted; thus, he claims for identity). Perhaps the weakest point of his doctrine was to distinguish between two types of texts for translation analysis, sacred and profane, and to attribute different principles to their treatment.
No doubt, this was an important contribution, in that he proposes that there is a need to adapt the translation process to text types. His influence on later scholars and translators is a fact; for example, the principle of literalness or literalism for sacred texts was the basic approach that Fray Luis de León followed in his translation of the 'Cantar de los Cantares', as did Martin Luther in his Bible version.
It was not until the XVth century that another interesting proposal for this field arrived. We are referring to the principles set by Juan Luis Vives viii, which, in spite of being soon left aside, were taken up again by the German Romanticists during the XIXth century. Vives makes a clear distinction between theoretical approaches to translation and the practical situation when translating; besides, he realises that different languages are at the same time very similar and very different, and he feels that even though translation is a maxim fixed and tacitly accepted, languages with peculiar and specific characters do present certain barriers impossible to pass through for translation interaction. He is referring to canonical and non-canonical language representations proper. He concurs with Saint Jerome and Cicero on the fact that the translation process is a thoughtful and meditative activity in which the domain of liberty moves between the language to which meaning is transferred and the sense of the text which one intends to translate. Luis Vives (1492-1540) is, no doubt, aware of some limitations which might often appear in this process: the lack of equivalent expressions between the languages in question and a possible failure to understand the original sense proposed. He is also aware of the fact that proposals based on ideal norms for translation purposes do not exist, in spite of the existence of certain general norms and conditions attributed to the field, such as knowledge of the languages and of technical language, knowledge of the topic, culture, etc.
Sampling of Errors
The central core of this analysis will be essentially based on the comparison of certain units from the 1961 Penguin edition of Brave New World and Hernández's 1969 translation into Spanish. Editions later than the above-mentioned ones (for example, 1980, BARCELONA, PLAZA & JANES) will be referred to from time to time, due to the fact that they include various noticeable changes with respect to other editions.
As far as Hernández's translation is concerned, it is interesting to notice that there are variations from one edition to another and also variations in comparison with other editions. As can be observed in the figures below, differences between L1 and L2 have a double appearance: formal (i.e. they allow one to recognise linguistic alteration and the typical and atypical representations of world referents) and, as a result, semantic (i.e. they affect meaning as it is transferred to L2).
Formal differences have often been categorised into three main types of errors: (O) omissions (omitting a unit of meaning); (A) additions (adding one unit of meaning); and (S) substitutions (substituting one unit of meaning). This often involves a lack of linguistic interaction which provokes a certain near or remote distance between the two languages and the world referents or mental representations (i.e. concrete or abstract references), often referred to as grammatical (GR). This misuse of grammatical devices between the two languages is often the result of: (M) morphological errors (changing word classes); (L) lexical variation (word, phrase or verbal tense confusion, etc.), from conceptual or basic meaning to denotative meanings, or even from hyponym to hyponym, from superordinate to hyponym, or the other way round; (SY) syntactic alterations from one language to another (changing unnecessarily any piece of syntax, for example an unnecessary word-order change); (P) punctuation; (G) graphological errors (printing errors); and (SE) semantic errors (using semantically related units instead of the proper ones, for example a superordinate term instead of a hyponym).
As a result, all these aspects would lead the interaction between L1 and L2 representations to grammatical and semantic distortion.
In general, the above categories are referred to by various scholars, following House (1977) and her model for translation quality assessment, as substitutions, which together with omissions and additions constitute the three subtypes of the second part of her dichotomous classification of errors (covert ix and overt x) usually found in translations, from a formal point of view at first sight.
To make a succinct comparison among various editions: the original writes cardinal points beginning with a lowercase letter, but the Mexican DIANA, the Spanish Plaza & Janes and the Colección Millenium (published by the newspaper 'El Mundo') use capital letters to translate them, while the Galician Edicións Xerais uses lowercase letters like the original. The same happens with the word 'Savage [salvaje]', which appears with a capital letter all along the original version. The following tables exemplify some errors at the beginning of the novel which might alter the message given in L1. Variations also exist between or among different editions; some are clear misspellings, others change from one edition to another, some for the worse and some for the better, and yet others remain unaltered edition after edition.
A Linguistic Approach. Canonical versus Non-Canonical Expressions
Both translation and linguistics are condemned to be integrated, since translation cannot survive on socio-cultural justifications.
On the one hand, linguistics is the study of our knowledge of language: what it is, and how we acquire and use it; the study is pursued through the construction of grammars, that is, hypotheses about this knowledge, how we come by it and how we use it to think or communicate. The knowledge of language is not monolithic. It is usually divided into our knowledge of vocabulary and our knowledge of how we combine that vocabulary into sentences: that is, the lexicon and the rules of formation. Some of this knowledge is easily accessible in most languages. Translation, on the other hand, must be understood as a linguistic mechanism to process meaning identity from one language to another, through the study of their linguistic behaviour as a whole and not just as the sum of the different parts that compose the pieces of language to be translated. In order to integrate both fields of research, it is our intention to combine information from several different sources, distinguishing between canonical patterns and non-canonical patterns of the lexical domain. The former tend to be universal and the latter tend to be more exclusive. To make a proposal concerning translation we will include brief references to the different expressions embraced under the superordinate term 'grammatical expressions', also referred to as 'the gray areas of language': idiomatic expressions, set phrases, etc. This creates a fabric of ideas which ranges from formal canonical appearance in its diverse varieties to semantic uniqueness far from the meaning of the parts that very often compose these pieces of language, the object of linguistic analytic and descriptive interest. It is also our aim to combine linguistic behaviour with the translation process in order to see if adequacy is possible and if diversity should be avoided. Even though the different states are still in a situation of development, we can conclude that only some of these expressions are to be considered lexical items proper. The majority of them follow normal formal canonical patterns and their meaning may range from the transparent or semitransparent meaning of their constituents to the totally opaque, or they keep their conceptual meaning as transparent realisations, i.e.
a maxim which seems to be constant in a great many linguistic processes (cf., for example, put at its simplest, verbs acting as full verbs or as auxiliaries). However, a good number of expressions behave as lexical items proper and thus they are to be treated independently for language-internal understanding but especially for comparative translation work, particularly opaque realisations. Opaque realisations cannot be deduced by adding together the meanings of the parts, i.e., they cannot be deduced linguistically, as the meaning goes beyond the conceptual one or arises through other extra-linguistic patterns, which are to be delimited. The collocational process is a general linguistic process in language behaviour that can be split into: the associational process, which ranges from loose to tight and derives into proper collocations or idiomatic phrases, normally set phrases, and a previous step towards compounding. Then, it is important to bear in mind that the main problem a translator has to face, once he/she has decided on the material to deal with and the appropriate materials and methodology to be employed, is the linguistic differences at different levels. In general, most of them should not present any problems, but some do, for example expressions considered to be unique or expressions whose semantic extension has evolved differently in both languages due, for example, to socio-cultural inferences or to technological advances. That is a real fact, and of course we are not thinking of the awareness of expressions based upon canonical patterns. By canonical patterns we mean, for example, as far as simple sentences are concerned, those following the structure subject, predicate and complement, and, as far as lexis is concerned, those lexical units under their conceptual meaning, not presented under other types of meaning that are the result of semantic extension and/or linguistic arrangement.
In sum, taking this into account, for understanding translation problems it is useful to consider two types of expressions which are to be treated independently when translating.
The first, canonical expressions xi, should not present problems for the translation process, either with lexical items or with grammatical ones, as illustrated in fig. 1 below, which corresponds to sentence pattern IV.
Canonical and Non-Canonical Expressions
The collocational process is a general linguistic process in language behaviour that can be split into: the associational process, which ranges from loose to tight lexical units and derives into proper collocations or idiomatic phrases, normally set phrases, or compounding. On most occasions they constitute the first step towards the resultant compound (well known → well-known → wellknown; hard working → hard-working → hardworking; book case → book-case → bookcase; mother in law → mother-in-law; bull's eye), which is the result of combining two or more words to form a single unit. Then, it is important to bear in mind that one of the main problems a translator has to face, once he/she has decided on the material to deal with and the appropriate lexicon and rules of formation to be employed, is the typical and atypical deviations from one language to another. In general, the use or selection of canonical expressions should not present any problem, provided the translators have enough linguistic background and control of their native languages.
That is a real fact, and of course we are not thinking of the awareness of expressions based upon canonical patterns. By canonical patterns we mean, on the one hand, those following the most common structural clausal or sentence patterns, i.e. subject, predicate and complement, or the normal phrase structure, i.e. head and modifier(s); in general, processes which are recognised in most languages, which tend to be universal, at least from a semantic point of view, and which are not far from their formal representations. On the other hand, we mean those lexical units xii under their conceptual meaning in their surface structure representation, or other clear-cut types of meaning which tend to be universal, used to realise linguistically most concrete or abstract world objects or ideas, which are not considered to be unique or exclusive to one culture or to a particular community. We can probably exclude those lexical units which have acquired a very specific meaning by semantic extension and/or linguistic arrangement, full of idiomaticities and irregularities.
In sum, taking this into account, one can deduce that there are two types of expressions which are to be treated independently for linguistic and translational theoretical and practical analysis. The first, canonical expressions, i.e. those expressions - from a lower to a higher rank - which follow general formation, realisation and functional patterns, either alone or in combination, should not present problems for the translation process, either with lexical items or with grammatical ones, as illustrated in fig. 1 below, which corresponds to sentence pattern IV, due to their straightforward reference to world objects, concrete ideas or abstract ideas developed world-wide. They normally range from basic and transparent communicative messages to semi-transparent interpretable realisations. They constitute essential basic communicative constructions for all or most communities. The second, non-canonical expressions, are far more complex; they are affected formally, morphologically and syntactically, but essentially semantically. Put at its simplest, this specific behaviour of language can be viewed as a special type of indirect speech, and it constitutes a violation of the behaviour of formal canonical expressions. This violation signals that literal meaning and real meaning are different. This does not mean that both can be explained, interpreted and translated. However, linguistic treatment of concrete manifestations of language cannot be thoroughly analysed using only one linguistic level by itself. Languages are so complex and independent that it is easy to show that there exists an inherent contradiction in the application of principles proposed along the history of linguistics in certain situations. However, several principles are fundamental assumptions for the development of language interpretation and the understanding of language's realisations.
Non-Canonical Expressions and some Translation Problems
Within the gray areas of language xiii, units that form lexical units proper in their own right, at least from a semantic point of view, often range from transparent or semitransparent interpretability to the totally opaque and are said to behave in a non-canonical manner. The latter in particular constitute the units which very often present problems for translation, due to the fact that they are normally fixed expressions, not very often used, referring to the very specific and concrete behaviour of a community or of the individuals of a community.
In general, these expressions are referred to in linguistic treatments as follows: non-canonical expressions proper or inner terms, collocations, idioms (special phrases, sayings, etc.) and clichés, among others. Non-canonical expressions include expressions xiv of the what about type, as in What about the financial assistance?; expressions like if only (...), as in If only we haven't lost our way, where one needs a kind of tense control but one is free to fill the gaps very freely (for example, Spanish and Galician would follow different patterns to represent the identical lexical meaning, a canonical expression in the former and a subjunctive mood in the latter); or even expressions like the more (...) the more, where one certainly expects a comparative form as the second constituent, as in The more you ask the less you get or Better for women better for men (S.T. Title of the article: Teen girls urged to admire Role Model Spice). Collocations xv, a problematic linguistic term, are interpreted as a nominalization or verbalization of two lexical and/or grammatical items put together. The concept of collocation, which plays an important role in British linguistics, where it originated, seems to be vague and neutral in dealing with word classes and with which element acts as modifier or head. This term, however, is one of the key concepts of the functional grammar proposed by Firth xvi and developed by Halliday. We could probably even say that it has its origins in word association of the syntactic type regardless of word class, due to the fact that the items are paradigmatically linked by this process. In sum, the idea of collocation xvii is extremely far-reaching, and furthermore, users must realise that some language is deliberately eccentric and creative in that kind of way. Not all languages would use the same formal correlates to represent these semantic lexical units.
Other clear types of expressions which can be included under this specific linguistic area are 'clichés'. These are ready-made expressions, though not necessarily idiomatic. From a formal point of view they are usually built up with canonical constituents; however, some are non-canonical expressions in the sense that they block the general principles of grammar, as in A little knowledge is a dangerous thing [knowledge for learning] or in The Devil can quote Scripture for his purpose [quote for cite] (though it can be argued that quote is common in AE, where speakers do not say cite). Due to semantic variation by extension and movement of meaning, a normal canonical expression or a minor sentence can very easily become a cliché when it loses its conceptual meaning and is applied for a different purpose, as in Can I help you? or Good morning!. In general, a cliché is a metaphor characterised by its overuse.
A very common type to be included here is referred to as 'idioms'. These are ready-made expressions with a meaning of their own; i.e. apart from the meaning of the term 'idiom', the linguistic connotation is that they are independent units in the sense that in most occurrences one cannot deduce the meaning of the whole by adding together the meanings of the parts. Thus, the whole expression has a lexically independent existence apart from the parts of which it is made up. A great number of them are found in the field of phrasal verbs, such as to give up (to stop), to account for (to explain), to look into (to examine), etc. The use of certain idioms (for example, sayings, informal phrases, etc.) depends particularly on style; semantic idiomaticity ranging from the semi-transparent to the totally opaque is, however, another matter, both for internal interpretation and for translation, since semi-transparent expressions are not far from basic conceptualisations while the meaning of opaque expressions goes beyond the conceptual one. For a simplified representation of the expressions briefly described above, see fig. 1.

Fig. 1. Some types of non-canonical expressions (from Oro, 2005)
Diversity and Adequacy
Lexical diversity xviii must be understood as a varied number of language constituents, some of which might be quite similar to one another, though one might need the context as a whole in order to show which one fits better.
In comparative studies we would dare say, for instance, that translating steed into the Spanish caballo would be far from the basic meaning in terms of style. Thus, the use of this word in Spanish would cause some of the primary accuracy of the world referent to be lost in this context. As Dressler (1981:141) has asserted, 'translation alters and redistributes the orders of informativity of a text'.
So does the semantic process of expanding the meaning of lexical units by extension and movement from the basic or conceptual meaning to other types of meaning in any language; the meaning becomes rather obscure within, and especially out of, context. Nonetheless, if formal redistribution is required because of divergent structural systems, alteration of meaning must be avoided when we seek identical meaning representations, as required in true translations. Hence, it is quite important to control the various aspects linguistic theory provides us with. On occasion, the second constituent or 'substitutor' may be blocked at the level of surface lexical form. In spite of the existence of probably identical constituents, there might be instances in which they have followed different structural dimensions, as in the case of those belonging to the GAs (gray areas) of language. In these cases, a suitable word or expression is to be found that allows complete meaning to be maintained while playing with formal variation.
In comparative studies, some scholars insist that the network of associations embedded in an L1 text cannot be duplicated in the L2. To say this is to deny the possibility of naming world referents and the linguistic capacity to perform rules of actuation within the comparative field. However, if very complex units of meaning are performed through looser associations, which make them difficult to understand even in L1, there will be a need, as Neubert and Shreve (1992: 91-92) point out, for the translator to intervene by inserting footnotes xix or creating explanatory paraphrases. The same occurs, albeit to a different extent, when comparing internal comparable structures of a language X.
What we can say with confidence is that even if human languages do not differ in essence from each other, since they behave systematically in spite of the arbitrariness of their formation, they certainly differ in degree, both formally and semantically. Perhaps nothing in the world even approximates to human language in its capability for flexibility, complexity, precision, productivity and sheer quantity under the appropriate circumstances. Humans have learnt to make infinite use of finite formal linguistic means. Thus, from a lower to a higher degree, the results of the translation process can be less accurate.
It appears to be true that the choice of words and of some structures is arbitrary; it varies from individual to individual, and it is not predictable whether one or another possibility will be used; but speakers of Galician, Arabic, Spanish, French, Gaelic, etc. regularly and habitually use one word from their language, for example, to express any concrete worldwide representation and most abstractions, with the exception of those which evolved independently or which are considered to be unique.
It is worth noting that the translation procedure is purely compositional and can thus be viewed as assigning meanings to all the expressions of any language (though it only indirectly assigns senses and denotations to them). In this process there must be an assignment function of meanings to basic expressions, as illustrated in Table 1, but complex meaning has to be treated under specific linguistic principles which range from finding equivalent expressions to interpreting, as illustrated in Tables 2 and 3.
Conclusion
As can be seen from the analysis of the examples above, it is not difficult to deduce that the basic requirement of this discipline is to provide its agents with an exhaustive knowledge of both languages, since identical representations of L1 and L2 texts demand an analysis of the situational linguistic peculiarities of both corpora, as well as the use of equivalent means for achieving any given function. Moreover, the evaluation of cultural problems must be considered, since differences in cultural presuppositions may require the application of a cultural filter. In this sense, means and mechanisms may be considered equivalent, but the resultant state of both linguistic affairs has to be identical, that is, understandable, referential to the world elements being represented, and co-referential to each other. Typical formal deviations are accepted if necessary but not as a general rule; determining when such deviations are allowed is a matter of fidelity, loyalty and linguistic mastery.
The following linguistic principles, showing similarities and dissimilarities between the languages analysed, are to be taken into account in order to improve both theoretical issues in translation studies and, what is far more important, the practical activity of achieving interaction among the three elements forming the triadic phenomenon which essentially constitutes the translation process as such: world referents → language 1 constituents → language 2 constituents.
-Typical and atypical deviation structures xx followed in languages.
-Uniqueness of some world referents in certain communities.
-Linguistic and socio-cultural community interactive processes.
-Psychological effects upon linguistic development (i.e. the language of the mind).
-Grouping interaction of levels versus individualistic action.
-Linguistic linking and cohesion features.
-The referential function (pronominal or anaphoric or both).
-The gray areas of language versus canonical realisations.
-The interrelation among the points mentioned above.
In translation, however, it is dangerous to try to eliminate in language 2 certain aspects of informality, poor grammar, verbose phrasing and any other features that contravene good abstracting practice in language 1.
Translating from one language to another can be seen as a language-game, following Wittgenstein, since it includes many kinds of definitions, projections, correlations, transcriptions, decipherings, etc. It is also obviously related to such activities as reading, comparing, note-taking, indexing, cataloguing, briefing, reviewing, etc.
We cannot translate from one language-game into another, because language games are independent of each other, but we can translate from one language into another in the many different ways in which we do. (Finch, 1977:86)

Put at its simplest, translation is not a creative linguistic process, since a linguistic corpus already exists. It is a combination of formal and semantic processes by which a second language represents world concepts or ideas through the grammatical and lexical units and patterns of a target one, either in the written medium or in the spoken medium, as shown in Table 1.

viii The Spanish Humanist, Luis Vives (1531), was also interested in the difficulties of translating, and he is probably the precursor of the textual translation method when he writes in Versiones seu Interpretationes: 'If a man wished to translate the speeches of Demosthenes or Marcus Tullius (Cicero), or the poems of Homer and Virgil, into other languages, he would have to pay attention first and foremost to the way in which the text is shaped and to the figures of speech it contains.' He also says that there is a third kind of text in which both the substance and the words are important, in which the words bring power and elegance to the senses, so to speak, whether taken singly, in conjunction with other words, or in the text as a whole.
ix By covertly erroneous errors House understands those that occur through a mismatch in one of the dimensions of Crystal and Davy's system of situational dimensions, in its adapted version (dimensions of the language user: space, social class and temporal; and of the language use type: medium, participation, social role relationship, social attitude and province).
x House does not draw a very definite line between covert and overt erroneous mismatches, so one does not know much about the nature of errors in spite of the inference of the ideational component.
xi Canonical expressions are regular expressions. A regular expression is a pattern that describes a set of strings. Regular expressions are constructed analogously to arithmetic expressions, by using various operators to combine smaller expressions.
xii As Wittgenstein would put it, 'the essence of a propositional sign is very clearly seen if we imagine one composed of spatial objects (such as tables, chairs, and books) instead of written signs'.
xiii Due to the fact that some realisations have acquired an elevated realisation state in most concrete manifestations of language, we are not going to propose an analysis of the different types of figures of speech, such as similes, metaphors, euphemisms, hyperboles, litotes, irony, apostrophe, personification, metonymy, synecdoche, etc., since most figures of speech, as well as idiomatic expressions of all kinds, might follow identical procedures in different manifestations of languages, at least from a semantic point of view.
xiv Here there can also be included expressions which block the general syntactic principles in relation to canonical expressions, as in: "Historians will look back on this project as most important thing we did" (S.T. Chronicle Future, p.12); "(...) but it is a fraud on a consumers" (S.T. 26th December).
xv Even though the invention of the term collocation as applied in linguistics is attributed to Firth and extended by Halliday, the process itself has worried many scholars, as mentioned above, since classical times. For example, Mellville's Grammar for foreign students (originally designed as a manual of English Grammar for Dutch students) includes a good number of examples with collocates: I have never seen him so out of temper (angry); The violinist is out of tune (discordant, not in harmony). In fact he concentrates on collocates of various kinds, combined with prepositions, compound conjunctions and verb combinations. Neither must one forget the non-canonicity of certain verbs in English, commonly known as irregulars.
xvi According to Firth, it seems to be the case that we know a word by the company it keeps, and he considers collocation, this 'relationship between words', to be part of its meaning (see Palmer 1976: 94 ff., Carter 1987: 36 ff. and 48 ff.).
xvii Benson and Ilson (1986:253) refer to them as 'loosely fixed combinations'. Lipka (1972) mentions that the idiomaticity of collocations is such that some scholars have chosen to include them as a subtype of idiom.
xviii Chafe and Tannen (1987) review the literature on the differences between written and spoken language, searching for examples of differences in internal interpretation. An example can be lexical diversity. A writer can increase lexical diversity simply by playing with conceptual meanings and their relations. For a translator this process is somewhat different. He might increase lexical diversity by providing several alternatives for one L1 term, limiting the number of function words or increasing the number of content words. Moreover, not adjusting L1 constituents to L2 constituents, or the other way round, would lead to atypical meaning deviations, misinterpretations and errors.
xix As Givon, T. (1993:1) remarks in his grammar book: "Grammar is not a set of rigid rules that must be followed in order to produce grammatical sentences. Rather, grammar is a set of strategies that one employs in order to produce coherent communication."
Table 1. Sample of errors in Hernández's translation of Brave New World
Table 2. Comparison among the English and two Spanish editions
Table 4. English non-canonical expressions: interpretation and translation
2018-12-11T11:46:38.394Z | 2012-05-31T00:00:00.000 | {"year": 2012, "sha1": "60fc0ce87b4541d487bab4a287e314d13e40b942", "oa_license": "CCBY", "oa_url": "http://www.journals.aiac.org.au/index.php/IJALEL/article/download/695/625", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "60fc0ce87b4541d487bab4a287e314d13e40b942", "s2fieldsofstudy": ["Linguistics"], "extfieldsofstudy": ["Psychology"]} | 10546362 | pes2o/s2orc | v3-fos-license
Genetic and antigenic evolution of H9N2 subtype avian influenza virus in domestic chickens in southwestern China, 2013–2016
H9N2 avian influenza virus (AIV) has caused significant losses in chicken flocks throughout China in recent years. There is a limited understanding of the genetic and antigenic characteristics of the H9N2 viruses isolated from chickens in southwestern China. In this study, a total of 12 field strains were isolated from tissue samples of diseased chickens between 2013 and 2016. Phylogenetic analysis of the hemagglutinin (HA) and neuraminidase (NA) nucleotide sequences from the 12 field isolates and other reference strains showed that most of the isolates from the past four years could be clustered into a major branch (HA-branch A and NA-branch I) in the Clade h9.4.2 lineages. These sequences carry nine and seven new amino acid mutations in the HA and NA proteins, respectively, when compared with those previous to 2013. In addition, four new isolates were grouped into a minor branch (HA-branch B) in the Clade h9.4.2 lineages, and two potential N-glycosylation sites were observed due to amino acid mutations in the HA protein. Three antigenic groups (1–3), which had low antigenic relatedness with two commonly used vaccines in China, were identified among the 12 isolates by antigenMap analysis. Immunoprotection testing showed that those two vaccines could efficiently prevent the shedding of branch A viruses but not branch B viruses. In conclusion, these results indicate that the genotype of branch B may become epidemic in the next few years and that a new vaccine should be developed for the prevention of H9N2 AIV.
Hemagglutinin (HA) protein is the receptor-binding and membrane-fusion glycoprotein of AIV and the predominant inducer of neutralizing antibodies against virus infection [11]. Neuraminidase (NA) protein is also critical in the generation of progeny virions and plays a crucial role during the late stage of viral replication [12]. Both the HA and NA proteins have a vital role in the viral pathogenicity, antigenicity, and host range of AIV [13]. Due to the incomplete proofreading mechanism of the RNA polymerase of RNA viruses, diversity in the HA and NA genes may frequently arise and give rise to the emergence of new variant strains.
Genetic analysis showed that most of the H9N2 virus strains isolated since 2010 were clustered into genotype 57 in the Clade h9.4.2 lineages (Y280-like). The genotype 57 strain, a reassortant of several strains, has become prevalent in vaccinated chickens in China and has caused widespread outbreaks since 2010 [14][15][16]. However, since most of the commonly used vaccine strains, including A/chicken/Shandong/6/96 (SD696), A/chicken/Guangdong/SS/94 (SS) and A/chicken/Shandong/F/98 (F98), were isolated before 2000, outbreaks of H9N2 avian influenza in vaccinated commercial chickens, resulting from infections with field strains that antigenically differ from the vaccine strains, are not surprising [17,18]. Since the genetic and antigenic characteristics of H9N2 vary over time, the characteristics of H9N2 strains from China since 2013 are not well known. In this study, the genetic and antigenic characteristics of H9N2 AIV strains circulating in commercial flocks in southwestern China in recent years were analyzed, and the protective efficacy of the currently used vaccine strains against H9N2 viruses of different antigenic groups was also evaluated. These results may provide critical insight for vaccine strain selection and vaccine development.
Eggs and virus
Specific pathogen free (SPF) chicken embryos and one-day-old SPF chickens were obtained from Beijing Merial Vital Laboratory Animal Technology Co., Ltd (Beijing, China). The 28-day-old commercial Cobb broilers were obtained from Wenjiang Chia Tai Co., Ltd (Chengdu, China). The inactivated vaccines of the SD696 and SS strains were selected for the analysis of immunoprotection in this study. The inactivated oil vaccine of the SD696 strain was obtained from the Qianyuanhao Biological Corporation Limited [Approval number: (2010)160132076, Beijing, China], and the inactivated oil vaccine of the SS strain was obtained from Guangdong Wens Dahuanong Biotechnology Co., Ltd [Approval number: (2011)190032080, Guangdong, China]. The inactivated antigen was also purchased from the same company. Embryos were incubated at 37˚C and examined twice daily for viability. The allantoic fluids were harvested after 48 h of incubation and three blind passages were conducted [19]. The presence of H9N2 in tissue supernatants or allantoic fluids was verified by reverse transcription-polymerase chain reaction (RT-PCR) analysis of the HA gene using the primers H9F (5ʹ-GGAAGAATCCTGAAGACTGA-3ʹ) and H9R (5ʹ-TCAAGCAGCACTAGCAATTC-3ʹ). The hemagglutination activity test was also used for titration of the H9N2 virus in allantoic fluids. The presence of other respiratory pathogens, including Newcastle disease virus (NDV), infectious bronchitis virus (IBV) and infectious laryngotracheitis virus (ILTV), in tissue samples was verified by RT-PCR or PCR following previously published methods [20][21][22]. Bacteria such as Escherichia coli and Salmonella were also isolated by blood agar culturing.
Phylogenetic analysis of the HA and NA genes
Total RNA extraction and the reverse transcription (RT) reaction were performed as previously reported [19]. PCR amplification of the HA gene was carried out using the primers HAF (5ʹ-TCTATCTGCTGCCATACCAACCC-3ʹ) and HAR (5ʹ-AGTAGAAACAAGGGTGTTTTTG-3ʹ). PCR amplification of the NA gene was carried out using the primers NAF (5ʹ-TGAATCCAAATCAGAAGATAATAGC-3ʹ) and NAR (5ʹ-CCCTAAAATTGCGAAAGCT-3ʹ). The cloning of the HA and NA genes was performed as previously reported [20]. The recombinant plasmids containing the target genes were sequenced by Shanghai Sanggong Biological Engineering Technology & Services Co., Ltd (Shanghai, China).
Nucleotide sequences of the HA and NA genes obtained from the H9N2 AIVs were aligned using the Editseq program in the Lasergene package (DNASTAR Inc., Madison, WI, USA) and compared to the sequences of other reference H9N2 isolates using the MegAlign program. The reference isolates included strains from the past ten years, strains from the four primary lineages (Clade h9.1-h9.4), strains from the two secondary lineages (Clade h9.4.1-h9.4.2), and three vaccine strains. Phylogenetic trees of the HA and NA genes were created using the neighbor-joining method in MEGA version 7.0.14. Bootstrap values were determined from 1,000 replicates of the original data.
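As an illustration of the distance-based tree-building step described above, the following Python sketch (not the authors' code) builds a neighbor-joining tree from an already aligned set of HA sequences using Biopython; the input filename is hypothetical, and bootstrap support (1,000 replicates in MEGA here) would be computed in a separate step.

# Minimal sketch, assuming "h9n2_ha_aligned.fasta" (hypothetical file) holds
# the aligned HA nucleotide sequences of the isolates and reference strains.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("h9n2_ha_aligned.fasta", "fasta")

# Pairwise distance matrix based on sequence identity.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

# Neighbor-joining tree, analogous to the tree built in MEGA 7.0.14.
nj_tree = DistanceTreeConstructor().nj(distance_matrix)

# Export in Newick format for inspection in any tree viewer.
Phylo.write(nj_tree, "h9n2_ha_nj.nwk", "newick")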
The potential N-linked glycosylation sites of the HA and NA proteins were predicted using the online NetNGlyc 1.0 Server. Predictions were performed only on Asn-Xaa-Ser/Thr sequons.
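Because the candidate sites are defined by the Asn-Xaa-Ser/Thr sequon, a simple pattern search reproduces that first step; the sketch below is a stand-in, not the NetNGlyc predictor itself, and two assumptions are flagged in the comments (Pro is excluded at the Xaa position, a common refinement not stated above, and the example sequence is invented).

import re

def potential_n_glyc_sites(protein_seq):
    # Return 1-based positions of Asn-Xaa-Ser/Thr sequons (Xaa != Pro, an
    # assumed refinement); a lookahead keeps overlapping sequons.
    return [m.start() + 1 for m in re.finditer(r"N(?=[^P][ST])", protein_seq)]

# Made-up HA fragment: an Asn followed by X-S/T (e.g. a T72N-type change)
# creates a new candidate sequon.
print(potential_n_glyc_sites("MKTNISLLLAVNLTH"))   # -> [4, 12]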
Antigenic analysis of the H9N2 isolates and commercial vaccine strains
To investigate the antigenic relationship between the 12 H9N2 strains and the two commonly used vaccine strains, antiserum against each strain was generated. In brief, after propagation in embryonated chicken eggs, the concentration of each field isolate was adjusted to 10^7 EID50/0.2 ml and the virus was inactivated by incubation with 0.1% formalin at 20˚C for 10 hours. The inactivated field isolates were then emulsified with oil adjuvant (Montanide ISA 70, SEPPIC, France) at a ratio of 3:7 and inoculated subcutaneously twice (at a 2-week interval) into three 6-week-old SPF chickens (n = 3). Chickens were held in separate biosafety level 2 (BSL2) isolators in the Laboratory Animal Center of Sichuan Agricultural University (Ya'an, Sichuan, China) with ad libitum access to feed and water and maintained under uniform standard management conditions. Antisera from vaccinated chickens were collected 12 days after the final immunization and stored at -20˚C.
The HI test was performed using a 1% chicken red blood cell suspension according to the Manual of Diagnostic Tests and Vaccines for Terrestrial Animals 2016 (OIE, http://www.oie.net). The HI titer was expressed as the reciprocal of the highest serum dilution at which hemagglutination was completely inhibited. Antigenic cartography was performed using the program AntigenMap (http://sysbio.cvm.msstate.edu/AntigenMap), which uses matrix completion multidimensional scaling to map HI titers in two dimensions [23]. The detailed settings were as follows: Low Reactor Threshold: 20; Normalization Method: N1; Temporal Model: No; Rank: 2; Number of Iterations: 2000. Antigenic map analysis displays the antigenic differences between viruses, such that viruses with high antigenic relatedness cluster closely on the map, while viruses with low antigenic relatedness lie far from each other.
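The sketch below illustrates the general idea of placing strains on a two-dimensional antigenic map from an HI table; it uses ordinary metric MDS on log2 titer drops rather than AntigenMap's matrix-completion algorithm, and the small HI table is invented for illustration only.

import numpy as np
from sklearn.manifold import MDS

# Hypothetical HI table: rows = antigens (viruses), columns = antisera.
hi = np.array([[1280.0,  160.0,   80.0],
               [ 160.0, 1280.0,   40.0],
               [  80.0,   40.0, 1280.0]])

# Cartography-style distance: log2 drop relative to the column maximum;
# titers below the low-reactor threshold (20) would normally be censored first.
log_t = np.log2(hi)
d_asym = log_t.max(axis=0) - log_t          # antigen-vs-serum distances

# Toy symmetrisation: antigen and serum of the same strain treated as one point.
d = (d_asym + d_asym.T) / 2.0
np.fill_diagonal(d, 0.0)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(d)
print(coords)   # 2-D antigenic coordinates; one unit ~ one 2-fold HI change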
Immune protection analysis of commercial vaccines against H9N2 isolates
To evaluate the protection efficacy of the commercial inactivated vaccines SD696 and SS against representative H9N2 field isolates, three representative field strains, A/chicken/Chongqing/LP/2015 (Group 1), A/chicken/Guizhou/QZ/2015 (Group 2) and A/chicken/Sichuan/LB/2013 (Group 3), which are located in different regions of the antigenMap, were chosen as challenge viruses. Virus re-isolation from the trachea was the main index used to evaluate the protective rate of the vaccines.
Commercial 28-day-old Cobb broilers (n = 100) without HI titers in sera were randomly divided into ten groups (named A-J). Groups A, B and C were subcutaneously injected with 0.3 mL of the SD696 vaccine; groups D, E and F were subcutaneously injected with 0.3 mL of the SS vaccine; groups G, H, I, and J were left unvaccinated. At 21 days post-immunization (d.p.i), chickens in groups A, D and G were challenged with 2×10^6 EID50 of the A/chicken/Chongqing/LP/2015 strain in 0.2 mL by intravenous injection; chickens in groups B, E and H were challenged with 2×10^6 EID50 of the A/chicken/Guizhou/QZ/2015 strain, and chickens in groups C, F and I were challenged with the A/chicken/Sichuan/LB/2013 strain in the same way. Chickens in group J were mock infected with 0.2 mL PBS. The birds in each group were held in separate biosafety level 2+ (BSL2+) isolators under negative pressure in the Laboratory Animal Center of Sichuan Agricultural University (Ya'an, Sichuan, China) with ad libitum access to feed and water and maintained under uniform standard management conditions. Birds were monitored daily, and appetite, activity, fecal output, conjunctivitis, cyanosis of the comb, ruffled feathers and dyspnea were recorded.
Tracheal swabs from each group were sampled at 3, 5 and 7 days post-challenge (d.p.c) and placed in 2 mL PBS (pH 7.0-7.4) supplemented with 5% new-born calf serum (NCS) (Zhejiang Tian-hang Biological Technology Stock Co., Ltd, Zhejiang, China). After filter sterilization through a 0.22 μm membrane, 0.2 ml of each sample was inoculated into the allantoic cavity of 9- to 11-day-old SPF chicken embryos for virus re-isolation, and the embryos were examined twice daily for viability. Each sample was injected into three eggs. The allantoic fluids were harvested after 72 h of incubation and tested with the HA test. HA-negative allantoic fluid was passaged one more time in SPF eggs. At 14 d.p.c, all remaining chickens were euthanized and dissected for pathological observation.
Ethics statement
All animal experiments, such as the generation of antisera from SPF chickens and the immune protection tests of commercial vaccines, were conducted in compliance with protocols approved by the Sichuan Provincial Laboratory Animal Management Committee [Permit Number: XYXK (Sichuan) 2014-187] and the Ethics and Animal Welfare Committee (EAWC) of Sichuan Agricultural University. Humane endpoints were observed and applied over the entire experimental period. Birds that were either unable or unwilling to eat and/or drink during the animal experiments, and all remaining birds at the end of the experiments, were euthanized immediately by cervical dislocation or by the administration of intravenous sodium pentobarbital (100 mg/kg) by a trained technician, as approved by the EAWC.
Viral isolation
A total of 71 clinical samples, including trachea, lung, and kidney, were collected from dead or diseased chickens displaying respiratory symptoms and/or a reduction in egg production in chicken flocks located in different areas of southwestern China, including the Sichuan, Yunnan, Guizhou, and Chongqing areas. Twelve H9N2 AIV strains were isolated. RT-PCR detection in the H9N2-positive clinical samples showed that only one sample was co-infected with infectious bronchitis virus (IBV) (1/12, 8.3%). Bacterial isolation showed that E. coli or Salmonella were also present in the clinical samples (3/12, 25%). The case histories of the local strains are listed in Table 1.
Phylogenetic and molecular analysis of the HA gene

The HA gene sequences of the field isolates and reference strains were analyzed (S1 Table). The results showed that the Chinese strains in the h9.4.2 lineages isolated from 2013 to 2016 could be clustered mainly into two independent branches, labeled A and B. Branch A included the major lineages and most of the viruses isolated from 2013 to 2016. Branch B included new small lineages and isolates which had a unique HA protein (Fig 1). In this study, eight of the field isolates were grouped into branch A, while the other four fell into branch B. Strains from branches A and B shared a high amino acid sequence similarity (95.4-97.1%), but shared a low identity (89.7-90.8%) with the commonly used vaccine strains F98, SS and SD696. Nine amino acid mutations, including H66Q, G90E, S145D, D153G, Q164R, N167G, A168D, E181G, and T200R, were observed in strains from branches A and B when compared with the Chinese isolates identified previous to 2013. All of these mutations were located at the head of the HA protein (Fig 2), and three of them, S145D, N167G and A168D, were located at critical HA antigenic sites identified previously [24][25][26]. In addition to these nine mutations, ten further amino acid mutations, including L51I, T72N, D86G, E90D, N112H, T198K, D201E, T206N, K246R and R283K, were observed in the four strains from branch B when compared with the Chinese isolates identified previous to 2013, and seven of them were located near the RBS and the left-right edge of the receptor-binding pocket. Two new potential N-glycosylation sites (T72N and T206N) were observed in strains from branch B.
To assess whether these particular mutations in strains from branches A and B could affect the higher-order structure of the HA protein, the HA proteins of strains from branches A and B and of the representative strain A/duck/Hong Kong/Y280/97 were modelled with an online tool (http://www.swissmodel.expasy.org/interactive) [27], and the predicted models were displayed with the PyMOL 0.99 software (created by Warren DeLano). No differences were observed, and the mutations were marked on the secondary structure of the HA protein (Fig 2).

For the NA gene, the sequences of the 12 field isolates, five reference isolates and three vaccine strains were analyzed (S1 Table). The results showed that all 12 field isolates were grouped into the Y280-like lineages and shared a high nucleotide identity (92.7-100%) with other Y280-like lineage strains isolated from 2013 to 2016. They also shared a low identity (89.7-93%) with the commonly used vaccine strains F98, SS and SD696, as shown in Fig 3. The Chinese strains of the h9.4.2 lineages isolated from 2013 to 2016 could be grouped mainly into two branches. Branch I was the major lineage and included most of the viruses isolated from 2014 to 2016, while branch II included most of the viruses isolated in 2013. Of the 12 field strains isolated in this study, only one isolate (A/chicken/Sichuan/LB/2013) was grouped into branch II and the other 11 strains were grouped into branch I. Sequence comparison showed that seven new amino acid mutations (T10T, F22L, V51M, K72R, G124D, I251V and V299I) were observed in strains from branch I when compared with strains from branch II. Two of them (T10T and F22L) were located in the non-polar transmembrane region, two (V51M and K72R) in the stalk, and three (G124D, I251V and V299I) in the head of the NA protein. Hemadsorbing
Antigenic cross-reactivity analysis
Results of the reciprocal HI test showed low reactivity between the vaccine strains and the recent viruses, while relatively high reactivity was observed among the field strains (Table 2; in that table, * indicates the HI titer against the homologous strain and the virus abbreviations are the same as in Fig 4). Further antigenMap analysis showed that all 12 field isolates and the two vaccine strains were divided into four independent antigenic groups (1-4) (Fig 4). The antigenicity differences between antigenic group-1 or group-2 strains and the vaccine strains were more significant than those between group-3 strains and the vaccine strains. Group-1 and group-2 strains had HI titers at least 4-fold lower than those of the vaccine strains in reactions with the antisera to the vaccine strains. Group-1 contained seven strains which were grouped into branch A in the phylogenetic tree of the HA gene and branch I in the phylogenetic tree of the NA gene (HA-branch A and NA-branch I); group-2 included four strains which were grouped into branch B in the phylogenetic tree of the HA gene and branch I in the phylogenetic tree of the NA gene (HA-branch B and NA-branch I); group-3 included one strain that was grouped into branch A in the phylogenetic tree of the HA gene and branch II in the phylogenetic tree of the NA gene (HA-branch A and NA-branch II); group-4 included the two vaccine strains SD696 and SS. These groups generally corresponded to the phylogenetic relationships of these viruses and were also correlated with the year of collection.
Immune protection analysis
There were no unexpected deaths during the study. Three birds in group H and one in group I showed clinical signs such as lethargy, coughing and dyspnea, while no clinical signs were observed in the other groups. At 14 d.p.c, all animals were euthanized and dissected for pathological observation. Slight intestinal congestion and hemorrhage were observed in all of the groups except group J, and slight tracheal congestion and hemorrhage were observed in the A/chicken/Sichuan/LB/2013-challenged groups C, F and I. For the chickens challenged by A/chicken/Chongqing/LP/2015 (Group-1 in the antigenMap), the percentage of chickens shedding the virus in the SD696 vaccine group, SS vaccine group and the control challenged group was 80%, 100% and 100% at 3 d.p.c, respectively. At 5 d.p.c., the percentage of virus shedding was 0%, 30% and 80% in the SD696 vaccine group, SS vaccine group and the control challenged group, respectively. There was no virus shedding in either of the two vaccine groups at 7 d.p.c., in contrast to the high virus re-isolation rate (80%) in the control group.
For the chickens challenged by A/chicken/Guizhou/QZ/2015 (Group-2 in the antigenMap), the percentage of chickens shedding the virus in the SD696 vaccine group, SS vaccine group and the control challenged group at 3 d.p.c was 80%, 80% and 100%, respectively. At 5 d.p.c., the percentage of virus shedding was 50%, 80% and 100% in the SD696 vaccine group, SS vaccine group and the control challenged group, respectively, and a relatively high virus re-isolation rate remained in the SD696 (20%), SS (40%) and control (60%) groups at 7 d.p.c. For the chickens challenged by A/chicken/Sichuan/LB/2013 (Group-3 in the antigenMap), the percentage of chickens shedding the virus in the SD696 vaccine group, SS vaccine group and the control challenged group was 80%, 90% and 100% at 3 d.p.c, respectively. At 5 d.p.c., the percentage of virus shedding was 20%, 80% and 100% in the SD696 vaccine group, SS vaccine group and the control challenged group, respectively. There was no virus shedding in the SD696 vaccine group at 7 d.p.c, in contrast to the relatively high virus re-isolation rate in the SS vaccine group (40%) and control group (60%) (Table 3).
Discussion
H9N2 viruses have been isolated recently from Chinese poultry farms and live-poultry markets [29]. Even though H9N2 is of low pathogenicity to chickens, it has played an important role in public health since 1998. Vaccination is an effective way to prevent AIV outbreaks, but the vaccine should provide effective protection against the current field strains. The H9N2 vaccine strains currently used in China were selected from viruses isolated in the 1990s, whereas H9N2 strains isolated from 2009 to 2013 had undergone significant antigenic drift from the vaccine strains (SD696 and F98) [14,15,26]. Isolating the current epidemic field strains and identifying their genetic and antigenic characteristics is therefore very important for following the evolution of novel emerging variants and selecting appropriate vaccine strains [30]. In this study, 12 H9N2 AIVs were isolated from vaccinated chickens between 2013 and 2016. Phylogenetic and antigenic analyses of those 12 isolates and other reference strains were conducted, and an immune protection test of the currently used inactivated vaccines against representative strains of different antigenicity was also performed.
Previous studies on the phylogenetic analysis of the HA gene of H9N2 showed that Clade h9.4.1 lineages have been prevalent in China since the mid-1990s [15,16,[31][32][33], and more than 74 genotypic groups in the h9.4.2 lineages have been classified [28,34]. In this study, phylogenetic analysis of the HA gene showed that all 12 isolates were related to genotype 57 (G57) of the Clade h9.4.2 lineages. The G57 strain arose by reassortment of six H9N2 strains in 2007 and has demonstrated improved adaptability to chickens and an increased host range, including domestic aquatic birds, wild birds, and swine [28]. All 12 isolates had a distant genetic relationship to the vaccine strains SD696 and SS, and had an L234 residue (H9 numbering) in the HA protein, which is also responsible for human-virus-like receptor specificity [35][36][37]. The cleavage site of HA still contains single and discontinuous basic amino acids (R), which conforms to the character of lowly pathogenic influenza viruses. Most of the analyzed Chinese strains isolated after 2013 formed a new major branch (branch A) and a new minor branch (branch B) in the phylogenetic tree, and several amino acid mutations were observed [28]. In addition, it was interesting to observe that the branch B strains had two new potential N-glycosylation sites in the HA protein. HA glycosylation can mask antigenic epitopes and is therefore an important process in the generation of escape mutants [38]. Sun and coworkers [39] showed that increases in the number of glycosylation sites occurred mainly, and with high frequency, in the early stages of evolution of the influenza virus. The increase in potential N-glycosylation sites in the HA protein of branch B strains implies that the genetic and antigenic evolution of branch B strains deserves increasing attention. Phylogenetic analysis of the NA gene showed that all new isolates were grouped into Y280-like lineages, similar to other reports from China in recent years [15,40]. Additionally, the NA protein of strains from the previous four years carried several mutations and formed a new branch (branch I) when compared with strains previous to 2012. These mutations were located in the non-polar transmembrane region, stalk and head of the NA protein, but their function is not clear.
The antigenic drift of H9N2 AIVs continues to occur in China [14,41]. Since the countrywide administration of commercial inactivated H9N2 vaccines in chickens began, immunological pressure may have contributed to the antigenic drift of field strains. Reciprocal HI testing showed that all 12 field isolates had low antigenic reactivity with the vaccine strains SD696 and SS, which was further verified by the antigenMap analysis. A previous report [41] showed that the antigenic drift of influenza virus is not continuous but punctuated, with antigenically homogeneous clusters of strains predominating for an average of three years. In this study, strains from adjacent years often clustered together on the antigenic map. The antigenic drift of influenza A virus is characterized by a complex interplay between frequent reassortment and periodic selective sweeps [41,42]. In addition, strains in the same branch could fall into different antigenic groups. For instance, A/chicken/Chongqing/LP/2015 and A/chicken/Sichuan/LB/2013 both belong to branch A but fall into antigenic groups 1 and 3, respectively. This may be because some mutations in HA exert a disproportionately large effect on the antigenic type, whereas others are "hitchhikers" with no phenotypic effect [42].
The protection efficiency of the SD696 vaccine against H9N2 field strains in China has been studied previously, and the results showed that the antigenicity of most field isolates was distinctly different from that of the vaccine strains [18,38]. It is worth noting that the commonly used vaccines in China could not efficiently prevent the shedding of the branch B virus A/chicken/Guizhou/QZ/2015. We speculate that the branch B strains may have the potential to cause widespread outbreaks in the next few years.
In conclusion, we have demonstrated that the genetic and antigenic characteristics of H9N2 AIVs isolated from southwestern China in recent years have diverged significantly from the vaccine strains (SD696 and SS) and from field isolates previous to 2013, and that strains generated after 2013 form distinct genetic branches and antigenic profiles. Vaccine strains that antigenically match the prevailing strains and are broadly cross-reactive should be applied in the field.
Supporting information S1
2018-04-03T02:23:21.910Z | 2017-02-03T00:00:00.000 | {"year": 2017, "sha1": "8aa46297ebc3e99c5d917f6c54c5c0b55bfdffac", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0171564&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8aa46297ebc3e99c5d917f6c54c5c0b55bfdffac", "s2fieldsofstudy": ["Biology", "Environmental Science", "Medicine"], "extfieldsofstudy": ["Biology", "Medicine"]} | 15959471 | pes2o/s2orc | v3-fos-license
Do age and professional rank influence the order of authorship in scientific publications? Some evidence from a micro-level perspective
Scientific authorship has important implications in science since it reflects the contribution to research of the different individual scientists and it is considered by evaluation committees in research assessment processes. This study analyses the order of authorship in the scientific output of 1,064 permanent scientists at the Spanish CSIC (WoS, 1994–2004). The influence of age, professional rank and bibliometric profile of scientists over the position of their names in the byline of publications is explored in three different research areas: Biology and Biomedicine, Materials Science and Natural Resources. There is a strong trend for signatures of younger researchers and those in the lower professional ranks to appear in the first position (junior signing pattern), while more veteran or highly-ranked ones, who tend to play supervisory functions in research, are proportionally more likely to sign in the last position (senior signing pattern). Professional rank and age have an effect on authorship order in the three fields analysed, but there are inter-field differences. Authorship patterns are especially marked in the most collaboration-intensive field (i.e. Biology and Biomedicine), where professional rank seems to be more significant than age in determining the role of scientists in research as seen through their authorship patterns, while age has a more significant effect in the least collaboration-intensive field (Natural Resources).
Introduction
Bibliometric indicators at the micro level constitute a valuable tool not only for supporting the research assessment of scientists but also for better understanding the scientific process. Studies dealing with this second purpose are less common in the literature but equally attractive, since they enable us to delve into different aspects of the behaviour of researchers, such as their collaboration habits and interactions, their different roles in the production of new knowledge, or the determinants of their scientific performance. Interesting examples of this use of bibliometric indicators are the so-called "bibliometric portraits", which pursue the bibliometric characterisation of individual scientists (Kalyane and Munnolli 1995; Prakasan et al. 2009), reflecting their personal history (Cronin and Shaw 2002). Moreover, studies exploring inter-gender differences in the behaviour of scientists (Fox 2005; Mauleon and Bordons 2006) or focusing on the main determinants of successful professional careers (Carayol and Matt 2004) also deserve mention. The combination of bibliometric indicators with other methodologies such as surveys or questionnaires has emerged as very relevant, since it introduces a more sociological perspective into the study of research performance (Feist 1993; Hemlin and Gustafsson 1996; Prpic 2000; Fox and Stephan 2001).
This paper focuses on one specific aspect of the publication of research results: authorship and authors' name order in publications. Multi-authored documents are now the norm in science as a result of the important role of collaboration in research (Bordons and Gómez 2000;Hara et al. 2003). Due to the increasing complexity of research, teamwork and inter-scientist collaboration have become essential for the advancement of science. Scientists with different skills and specialisation profiles may successfully collaborate for the development of research projects and the creation of new knowledge.
The main role of authorship is to give credit for the scientific contribution of authors, but also to assign responsibility for their published research (Biagioli 1998; Pontille 2004). There are different guidelines concerning authorship criteria and, although none of them is universally respected, it is usually accepted that authorship criteria include (a) involvement in the conception, planning and execution of the research work, (b) interpretation of results, (c) writing a substantial portion of the manuscript, and (d) final approval of the version to be published (ICMJE 2010; Cronenwett and Seeger 2005). However, not all authors contribute equally to the research published in a paper. Interestingly, in some disciplines the sequence of authors in the byline of publications provides significant information about the contribution of authors to the research or, at least, enables us to identify the principal authors who occupy the "key positions" in the paper. This remains true, although the upward trend observed in the number of authors per paper makes it increasingly difficult to assess the nature and extent of the contribution of each author (Birnholtz 2006), as well as to discern who is accountable for the integrity of the work (Bellis 2009).
In spite of the abovementioned limitations, authorship in peer-reviewed journals is basic for academic appointments and is used in research evaluation processes associated with promotion, tenure, prizes, funding and, in the long term, professional prestige (Tscharntke et al. 2007). Moreover, the order of authorship is sometimes taken into account in bibliometric studies, since credit among co-authors can be distributed in such a way that the greater percentage share of the credit is given to those who contributed the most (see for example Hu 2009, Hu et al. 2010 or Vinkler 2010). Accordingly, knowledge of the implicit conventions concerning the order of authorship within each field is very useful for evaluators--also for readers and editors--who want to assign the correct credit and accountability to authors. Concerning the meaning of the order of authors' names in the byline of publications, important differences by field have been described (Pontille 2004). Some scientific associations have a formal policy on author order (Osborne and Holland 2009), but this is not the norm, and different practices exist depending on the discipline and even on the research group and country, since the practice of signing may also reflect national traditions in less international disciplines (Pontille 2004). Moreover, slight differences depending on the basic/clinical nature of research, measured through the scope of journals or the specialisation background of scientists, have been described in some specific biomedical disciplines (Savitz 1999).
The most widely accepted convention among the experimental sciences is that the most important positions are the first and the last (Zuckerman 1968). In this sense, first-position authors are very often responsible for the experimental work supervised by the last-position author (Moed 2000), who has a role of supervision and leadership of the research (Beveridge and Morris 2007; Shapiro et al. 1994). The importance of the first-position author's contribution to the papers is supported by the fact that he/she is very often the reprint author (Costas and Iribarren-Maestro 2007;Mattsson et al. 2010). The remaining authors tend to appear in intermediate positions in descending order of their contribution, with senior authors normally listed at the end. However, in some disciplines such as Mathematics, Economics or High Energy Physics, alphabetical order of authors is followed (AMS 2004;Mauleon and Bordons 2007;Engers et al. 1999;Birnholtz 2006;Frandsen and Nicolaisen 2010).
Browsing through the literature, the order of authors in the byline has been studied from different points of view. Inter-field differences in the interpretation of author order in papers have been put forward in a number of studies, in which the prevailing policy was described (for example Mendki 2006), sometimes contrasted with case studies that report authors' views on their contribution to papers (Shapiro et al. 1994), or with the perceptions on author contributions by scientific committees based on author's position (Wren and Kozak 2007). The influence of different variables, such as professional rank and age, on authorship practices has also been explored in the literature, noting that scientists tend to sign more as last author and less often as first author as they get older (Gingras et al. 2008) and as they go up in the hierarchy (Drenth 1998).
This paper focuses on the relationship between the position of authors in the byline and three variables: the age, professional rank and research performance of scientists. Although some of these aspects have been analysed previously, our purpose here is to study the interaction between them, which is an original approach. If principal researchers tend to sign as last authors, we would also expect research professors to be found more frequently in such a position. However, age must also be an influential factor, since those with a long professional career are more likely to have attained a leadership position in a consolidated team. In addition, the authorship pattern of "top scientists"--identified following the methodology suggested by Costas et al. (2010) for the use of bibliometric indicators at the individual level--is explored. Finally, the interaction between these variables is examined.
Objectives
The main objective of this article is to study authorship practices in publications as regards the order of names in the byline. The influence of the age, professional rank and bibliometric profile of scientists over the author's position in the byline of publications is explored--as well as the interaction between these variables--in three different research areas.
The following questions are addressed: Are there specific authorship patterns for junior and senior scientists at the Spanish National Research Council (Consejo Superior de Investigaciones Científicas, CSIC)? What is the influence of professional rank, age and scientific performance class (and their interaction) on authorship patterns? Are there any differences by field in the effect of these criteria? Can we explore the position of scientists in the social structure of science through their signing habits?
Methodology
This study is based on the bibliometric analysis of 1,064 permanent researchers working at the Spanish CSIC in 2004 with a full time position. For research management purposes, scientists at the CSIC are organised in seven research areas 1 according to their scientific topics, three of which are subject to analysis in this study: Biology and Biomedicine (388 scientists), Natural Resources (348) and Materials Science (327). These researchers are also organised in three professional ranks: Tenured Scientist (the lowest rank-558 researchers), Research Scientist (the intermediate rank-269) and Research Professor (the highest rank-237). The full name, age, professional rank and research institute of each scientist were provided for each of the three areas under study.
The scientific production of the scientists under survey published in journals covered by the Web of Science (WoS) during the period 1994-2004 was downloaded and assigned to their authors. Several methodologies for the correct matching of authors and documents were applied (see Costas and Bordons 2006). Documents published by scientists during their stays abroad were also included in the study, and all document types were considered.
For every scientist, his/her number of documents in the period under analysis was recorded, as well as his/her position in the byline of those documents. The following indicators were obtained:
a. % Documents in first position: percentage of a scientist's publications in which he/she appears as first author.
b. % Documents in last position: percentage of publications that the scientist has published as last author.
c. % Documents in middle position: percentage of publications where the scientist appears in any intermediate position.
d. % Single-authored documents: percentage of publications where the scientist appears alone (not shown in this paper, but mentioned here because the sum of the four indicators for a given scientist accounts for 100% of his/her production).
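A minimal computational sketch of these four indicators is given below (not the authors' code); papers is a hypothetical list of author-name lists and the name string is assumed to be already disambiguated.

def byline_profile(name, papers):
    # Percentages of a scientist's papers signed in first, last, middle or
    # single-author position (they sum to 100% of his/her output).
    counts = {"first": 0, "last": 0, "middle": 0, "single": 0}
    for authors in papers:
        if name not in authors:
            continue
        if len(authors) == 1:
            counts["single"] += 1
        elif authors[0] == name:
            counts["first"] += 1
        elif authors[-1] == name:
            counts["last"] += 1
        else:
            counts["middle"] += 1
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()} if total else counts

# Toy example with three publications by "Garcia J".
papers = [["Garcia J", "Lopez M", "Perez A"],
          ["Lopez M", "Garcia J"],
          ["Garcia J"]]
print(byline_profile("Garcia J", papers))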
These indicators were analysed in relation to the professional rank, age and scientific performance class of the scientists. The three professional ranks existing at the CSIC for permanent scientists were considered: tenured scientist, research scientist and research professor. The age of scientists was considered as a quantitative variable but also as a categorical one, with three age groups: ≤44 years (young), 45-54 years (middle-aged) and ≥55 years (veteran). 2

1 Agricultural Sciences, Biology and Biomedicine, Chemical Sciences and Technology, Food Science and Technology, Humanities and Social Sciences, Materials Sciences and Technology, Natural Resources, Physical Sciences and Technology.
2 Age-group limits were determined by the percentile values in the distribution of scientists by age (P25 = 44 years old and P75 = 55 years old).
In this paper, we also use the concept of "scientific performance class", which refers to a three-group classification of scientists according to their performance in three bibliometric dimensions (Production, Observed Impact and Expected Impact). These three scientific performance classes are: top, medium and low. Top researchers are the ones with a high performance in at least two of the three dimensions, medium-class scientists present an intermediate performance in two of the three dimensions, and low-class researchers have a low performance in at least two of the three dimensions suggested (cf. Costas et al. 2010).
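A hedged sketch of this three-class assignment follows; how each dimension is labelled high/medium/low (e.g. by terciles) and how mixed profiles are resolved are assumptions here, since only the two-out-of-three rule is stated above.

def performance_class(levels):
    # levels: dict mapping 'production', 'observed_impact', 'expected_impact'
    # to 'high', 'medium' or 'low' for one scientist.
    values = list(levels.values())
    if values.count("high") >= 2:      # high in at least two dimensions
        return "top"
    if values.count("low") >= 2:       # low in at least two dimensions
        return "low"
    return "medium"                    # intermediate or mixed profiles (assumed)

print(performance_class({"production": "high",
                         "observed_impact": "high",
                         "expected_impact": "medium"}))   # -> top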
The statistical analysis of the data was carried out with SPSS. Non-parametric tests were applied for comparisons between groups (Mann-Whitney U test and Kruskal-Wallis test). A generalised linear model was used to study the influence of age and professional rank on the author's position in the byline of publications, as well as to explore the interaction between both factors.
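The same comparisons can be reproduced outside SPSS; the sketch below runs the two non-parametric tests with SciPy on made-up per-rank percentages of last-authored papers (the GLM step is omitted).

from scipy.stats import kruskal, mannwhitneyu

# Hypothetical % of last-authored papers per scientist, by professional rank.
tenured    = [10, 15, 20, 25, 12]
research   = [30, 35, 28, 40, 33]
professors = [45, 50, 55, 38, 60]

# Kruskal-Wallis test across the three ranks.
print(kruskal(tenured, research, professors))

# Mann-Whitney U test between two ranks.
print(mannwhitneyu(tenured, professors, alternative="two-sided"))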
Results
First of all, some general data about the production of the researchers under analysis are shown. The researchers of the three areas account for a total of 24,982 documents: 9,660 in Materials Science, 9,318 in Biology and Biomedicine and 6,102 in Natural Resources; receiving 80,546, 189,699 and 56,940 citations, respectively. For additional data on the research performance of scientists in these areas we refer to Costas et al. (2010).
Only 26 scientists (2.4%) had no WoS publications during the period of analysis. This paper focuses on the research performance of the remaining 1,038 scientists, who have at least one publication during the reference period. Altogether, the distribution of these scientists by professional rank was as follows: 52% were tenured scientists, 25% were research scientists and 22% were research professors. With respect to age, around 31% of the scientists were labelled as "young" (less than 45 years), 39% were "middle-aged" (45-54 years old) and 30% were in the "veteran" group (≥55 years old) (Table 1).
The relationship between the professional rank and age of scientists is displayed in Table 2. We can observe that young scientists predominate in the lowest rank, while middle-aged ones account for almost half of the research scientists and scientists over 55 constitute more than half of the research professors. Although there are small inter-field differences (see Appendix), this general pattern was found in the three scientific areas. The average number of authors per document can be considered a proxy for the average size of teams in each area (Table 3). It is interesting to observe that the smallest team size corresponds to Natural Resources (around four authors) and the largest to Biology and Biomedicine (around seven authors). As the number of authors per document increases, the contribution of the different authors becomes more diffuse and the ambiguity of authorship increases. However, in many disciplines the first and last positions retain a special meaning as far as the contribution to the research is concerned.
In the following sections, the tendency of authors to sign as first or last authors of publications is analysed in relation to their professional rank, age and scientific performance class.
Professional rank
The author's position in the byline of publications regarding their professional rank is analysed in each of the three areas under study in Fig. 1. Figure 1 presents the distribution of the percentage of documents signed by researchers as first and last author in each of the three professional ranks. The thick line within the box plots represents the median of the distribution; the lower and upper hinges of the boxes represent the lower and upper quartiles of the distribution (meaning that 50% of all the researchers are included in the box). Finally, the circles and asterisks in the upper part of the figures represent the outliers and extreme values of the different distributions. 3

These figures show a clear pattern for the three scientific areas. As we go up in the professional rank, the percentage of first-authored papers decreases and the percentage of last-authored papers rises. Differences among professional ranks in the percentage of first-authored documents are found, as well as in the percentage of last-authored papers (P < 0.001 in both cases, Kruskal-Wallis test), which means that the position of authors in the byline is related to their professional rank. Within each rank, differences between the percentage of first and last-authored documents are generally observed. Tenured scientists publish proportionally more documents in the first position and fewer in the last position, while research scientists and research professors publish proportionally more in the last position and less in the initial position (P < 0.01, signed test). The only exception concerns tenured scientists in Biology and Biomedicine, who sign indistinctly in the first or last position (no significant differences were found).
Signing patterns are especially marked in Biology and Biomedicine, where research professors show the lowest percentage of first-authored documents (below 10%) and the highest percentage of last-authored documents (around 50%). On the other end of the spectrum, we find Natural Resources, with smoother signing patterns (research professors sign around 20% of their documents as first author and 40% as last author). An intermediate situation is revealed for the area of Materials Science.
Age
The distribution of the percentage of documents signed by scientists in the first and the last positions according to their age is shown for the three areas under study in Fig. 2.
A very clear pattern is observed in all three areas: the percentage of first-authored documents decreases with age, while the percentage of last-authored documents increases. Younger researchers present the highest percentage of documents signed in first position and the lowest percentage in last position, while the contrary holds for older researchers, who present the highest percentages of documents signed in last position and the lowest percentage in first position. Statistically significant differences were found between young researchers and the other two groups of scientists (P < 0.000). These results show that author position in the byline of publications in these research areas is clearly age-related, which is consistent with earlier results (e.g. Gingras et al. 2008).
Scientific performance class
Differences in the author's byline position according to the scientific performance class of researchers are explored in Fig. 3. In this case, contrary to our previous analysis, no clear and common pattern for the three areas is observed.
In Natural Resources and Materials Science the three classes of scientists present similar percentages of documents signed in first and last position; in fact, there are no inter-class differences in signing habits.
In the case of Biology and Biomedicine, the percentage of first-authored documents decreases from top to low class scientists, while the opposite trend is observed for the percentage of last-authored papers. However, the influential factor here is not scientific class, but age, which increases from top to low class (see previous section). Within each scientific class, large differences in first/last-authorship by age were found. This picture is only observed in Biology and Biomedicine due to the sharper age-related signing patterns described for this field.
Evolution of author's position in the byline of publications according to age

This analysis is based on the age of scientists when publishing the documents. Therefore, the age of researchers in the year of publication (''age of publication'') as well as their position in the byline are taken into account for each document. Documents with several researchers with different ages are counted for each age, considering that duplications are suitable for the better understanding of authorship practices of researchers according to their age.

Figure 4 shows the evolution of the percentage of documents signed by scientists in first, middle and last positions for each area depending on their age. In the field of Natural Resources, a total of 6,031 documents were published by researchers aged between 26 and 60. As we can see, scientists under 34 tend to sign mainly in first position. As the age of researchers increases, they tend to change their position in the byline of papers from the first to the last one. In Natural Resources, the ''shift age'', when researchers start to sign more in last than in first position, is around 38-39, although quite similar percentages of documents are signed in both positions for scientists in the 38-46 age-group. Over the age of 46, scientists tend to sign mainly as last or middle author, and they seldom appear as first author.
In the case of Biology and Biomedicine, a total of 8,922 documents have been published by researchers aged between 26 and 60. The pattern revealed here is very similar to that described for Natural Resources: first-authored documents predominate among the youngest scientists while this author's byline position is infrequent for veteran scientists, who tend to sign in the middle or last position. In this area, the shift age is 35-36 (slightly earlier than for Natural Resources). The intermediate author's byline position is very frequent along the whole life of researchers (it appears in around 40-50% of their production), partly due to the higher number of co-authors in this particular area. In Materials Science, a total of 9,537 documents published by scientists aged between 26 and 60 are analysed. The same tendency revealed for the other two areas is observed. In this case the shift-age corresponds to scientists aged 37-38. Besides, documents signed in intermediate positions are more usual than in the other two areas during practically the whole life of researchers.
In Natural Resources, scientists keep signing as first-author of documents for a longer period when compared to the other two areas: scientists aged 47 appear as first authors in around 20% of the documents in this area vs. 10% in Materials Science and Biology and Biomedicine.
Signing patterns are especially evident in Biology and Biomedicine where scientists tend to acquire a supervisory role (last position) earlier and in a more sustained manner than in the other areas.
Interaction between professional rank and age
We have seen that both age and professional rank are influential factors on the position of authors in the byline, but which one carries more weight? It is clear that as age and rank rise, the probability of signing in the last position also does. But what happens with those scientists that never attain the highest rank and get older in the lowest rank? What about those brilliant scientists that attain the highest rank in their youth? Do their signing habits resemble those of their age-group colleagues or those that are standard for their professional rank? To explore these issues (which have not been previously dealt with in the literature), the effects of age and professional rank on the signing habits of scientists were analysed from a global perspective. A multivariable generalised linear model was used, since it provides variance analysis for multiple dependent variables (percentage of first-authored documents and percentage of last-authored documents) which follow a probability distribution other than the normal distribution (Poisson distribution). It allows us to assess the effects of the relevant factors (age and professional rank) on the dependent variables as well as the interaction between factors.
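As a rough illustration of this kind of model (the original analysis was carried out in SPSS), the sketch below fits a Poisson generalized linear model with an age-by-rank interaction using statsmodels, with the number of documents as exposure. The DataFrame, its columns and the simulated counts are hypothetical placeholders, not the study data.

```python
# Minimal sketch of a Poisson GLM with an age-by-rank interaction, one row per
# (hypothetical) scientist: counts of last-authored documents, total documents
# as exposure, age group and professional rank.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
base_rate = {"young": 0.10, "middle": 0.30, "veteran": 0.50}  # toy age effect
rows = []
for rank in ["tenured", "research_scientist", "research_professor"]:
    for age in ["young", "middle", "veteran"]:
        for _ in range(4):
            n_docs = int(rng.integers(15, 40))
            n_last = int(rng.binomial(n_docs, base_rate[age]))
            rows.append({"rank": rank, "age_group": age,
                         "n_last_authored": n_last, "n_documents": n_docs})
df = pd.DataFrame(rows)

model = smf.glm(
    "n_last_authored ~ C(age_group) * C(rank)",
    data=df,
    family=sm.families.Poisson(),
    exposure=df["n_documents"],   # models the share of last-authored documents
).fit()
print(model.summary())            # Wald statistics indicate the weight of each factor
```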
Our analysis shows that the percentage of last-authored documents is influenced by the professional rank and age of scientists in Biology and Biomedicine and Natural Resources. In these areas, there is no interaction between age and rank (Table 4), that is, for all professional ranks the percentage of last-authored documents tends to increase with age (Fig. 5). Interestingly, the effect of the professional rank is higher than that of age in Biology and Biomedicine (higher Wald Chi square value 4 ), while both variables show a very similar effect in the field of Natural Resources. In Materials Science, the professional rank is the major influential variable. An interaction between age and rank is identified in this area due to the fact that the percentage of last-authored documents tends to increase with age for tenured scientists and research professors, but this pattern is not so apparent for research scientists (Fig. 5).
Professional rank and age are also influential factors on the percentage of first-authored documents. The percentage of first-authored documents tends to decrease as scientists get older (Fig. 5). We can see that rank is the major influential factor in both Biology and Biomedicine and Materials Science, corrected by age in the latter case (there is an interaction between professional rank and age), while age is the major influential factor in Natural Resources (Table 4).
Discussion and conclusions
First of all, we would like to mention some limitations to the present study. This is mainly a cross-sectional study (with the exception of the analysis in Fig. 4), so each scientist was considered in his/her professional rank in 2004, although some of them could have been promoted during the period under analysis. Although this issue should be taken into account, we consider that it does not impair the validity of our study since (a) scientists promoted constitute a small percentage of the total, and (b) scientists display rather stable authorship patterns that are not immediately affected by a promotion.
Junior and senior signing patterns
Our results show that authorship patterns observed in the three areas analysed are clearly influenced by the age and professional rank of scientists, although there exist some differences by area. In general, there is a strong trend for signatures of younger researchers and those in the lower professional ranks to appear in the first position, while more veteran or highly-ranked ones are proportionally more likely to sign in the last position. Accordingly, a junior signing pattern can be described in all three areas, characterised by the predominance of first-authored versus last-authored documents, as well as a senior signing pattern, where last-authored documents largely exceed first-authored ones. These results are consistent with the previously described convention in many fields whereby the order of authors is determined by the role and extent of their contribution to the research. In many fields, first authors are those who contributed most to the experimental work and they are very often young scientists at the beginning of their professional career, while last authors are very frequently scientists of a higher professional rank and/or with longer professional trajectories who play a supervisory role. This authorship practice is in line with the results of other studies (see for example Davis and Wilson 2001;Drenth 1998) and is consistent with the perception of promotion committees in different biomedical and experimental disciplines, as shown in a study focused on a sample of medical schools (Wren and Kozak 2007).
The study of the evolution of the signing position of scientists with age is very illustrative in the three areas under analysis, since it reflects a gradual transition in the role played by scientists in the research process along their life cycle. Scientists at the beginning of their career tend to sign more often as first authors, as they frequently work under the supervision of a senior researcher, who usually signs in the last position. In fact, the name of the senior scientist follows that of the younger one in teacher-student collaborations in the study of Liang et al. (2004), in which PhD students tend to sign in first position whereas their supervisors do so in the closing one. Doctoral students are not included in our study, but tenured scientists are the most junior scientists and the youngest among them show the highest propensity to sign as first author of publications. As scientists gather experience, they may assume a supervisory role of the work of other scientists, and in many cases they will build their own group. The ''shift age'' at which scientists tend to adopt a more ''senior'' signing pattern is around 36-38.
Social status and function in research
In our study, we do not know whether the position of authors derives from the contribution of the different scientists to a given research or to social conventions such as those based in the prestige or social status of scientists within teams. It is clear that professional rank is a sign of status and we consider that age can also be associated to a certain status since older scientists usually have broader experience and knowledge. Asking authors themselves about their involvement in the research and the criteria followed for authorship would be the only way to obtain a reliable answer to this question (as developed by Hoen et al. 1998). However, our results show that social status (as measured through professional rank and age) is clearly related with author order in the byline of publications and we assume that this determines the specific functions of scientists (experimental work, supervisory tasks). In other words, structural and functional features of scientists within teams interact, they are highly dependent on one another, and taken together, contribute to the construction of the social structure within research teams.
The analysis of individual bibliometric profiles allows us to identify ''top scientists'', who are those with high production levels, who publish in prestigious journals and are highly rewarded with citations (''top performance''). Interestingly, our study shows that top performance is not related with any specific position in the byline of publications. The reason is that scientific class and professional rank do not perfectly match (Costas et al. 2010) as professors tend to lead research, but not necessarily with a top profile. Probably their involvement in management, supervision and coordination tasks prevents them from obtaining very selective high-quality research results. On the other hand, an increase in productivity but a slump in average impact for scientists as they get older--until in their fifties--has been described in some fields, not only for Spanish CSIC scientists (Costas et al. 2010) but also for Canadian ones (Gingras et al. 2008). As described elsewhere (Costas et al. 2010), ''top scientists'' at CSIC are very often young scientists, who have been recently abroad in research stays and have been involved in international collaboration. A top profile is needed at present to get tenure at CSIC after a very competitive selection process strongly based on the quantity and quality of the scientific publications of scientists. Those with a top profile are very often at the beginning of their professional career as permanent scientists at CSIC. It might be the case that they do not have a team of their own yet (Rey-Rocha et al. 2006), and therefore they contribute with their skills and knowledge to the performance of already established teams, until they consolidate their position in the institution. As a consequence, the signing patterns studied are related with age or professional rank rather than with the top or low profile of scientists.
Age and professional rank: which is more influential on authorship order?
Our research shows (in line with previous studies) that both professional rank and age have an effect on authorship order, but differences by area have been identified. Authorship patterns are especially marked in the area of Biology and Biomedicine, where research professors show the lowest percentage of first-authored documents and the highest percentage of last-authored documents irrespective of their age. Smoother authorship patterns are observed in Natural Resources and Materials Science.
It is interesting to note that professional rank seems to be more influential than age on the authorship patterns in Biology and Biomedicine, while the effect of age seems to be equal (last authorship) or greater (first authorship) than that of rank in Natural Resources. Interestingly, a senior authorship pattern was observed for all scientists aged over 55--whatever their rank--in Natural Resources, as well as for all research professors--whatever their age--in Biology and Biomedicine. In other words, our results suggest that rank carries more weight than age in determining the role of researchers in Biology and Biomedicine, while age plays a more significant role in Natural Resources. A more difficult promotion of scientists to the upper professional rank in Natural Resources (around 45% of scientists aged over 55 are research professors in Biology and Biomedicine and Materials Science whilst only 31% have attained this rank in Natural Resources) (see Appendix) could contribute to explain this finding. In Natural Resources there are more experienced scientists outside the upper rank that seem to be research leaders according to their senior signing patterns.
Moreover, the more significant role of professional rank over age in the trend to sign as last author in publications in the field of Biology and Biomedicine could also stem from the fact that it is a highly collaborative and competitive field at the CSIC (CSIC scientists in this field publish in higher-impact factor journals and receive a higher number of citations than the national average, while Natural Resources remains below average) (Costas et al. 2010). Scientists in the upper rank are usually in a better position to obtain economic support in competitive calls for research projects and this could be the case particularly when large teams are involved.
Future trends
Our results show that studies based on the position of authors in the byline can provide useful information about the role played by scientists in research, the influence of social variables and the manner in which it evolves along the professional life of scientists. However, differences by scientific field, country and even institutional settings might exist and deserve further attention. Inter-field differences in authorship conventions should be kept in mind when conducting this type of study. Although differences by country might also exist, they are expected to be smaller due to the increasing internationalisation of science. In fact, the trend of scientists to sign more often as last author and less as first author as they get older was also described for a sample of Canadian scientists (Gingras et al. 2008). Concerning institutional settings, it is interesting to note that some authorship patterns described herein in connection with age were also reported in the abovementioned Canadian study, although our work deals with full-time researchers in a public research institution and the Gingras study focuses on university professors and university-affiliated researchers who were also involved in teaching activities. This finding suggests that the scientific community is governed by its own laws modulating the publishing strategies of different individuals on the basis of targeted specific reward structures. This supports the role of authorship as a mode of social organisation of the scientific community (Pontille 2004).
As collaboration rises in the increasingly complex world of research, the meaning of authorship and author position in the byline is becoming more ambiguous (Pontille 2004). Multilateral collaboration in which members from more than one group are involved is increasingly frequent and may contribute to blur signing patterns. Author position in these cases can be determined after difficult negotiations among scientists, sometimes including agreements about rotation of first-authors in subsequent documents resulting from a given collaborative project or sharing 'equal first-authorship' to evenly reward members of different teams. Thus, the need to include in each publication the specific contribution of every author to the research is increasingly demanded by journals, associations and institutions (Pontille 2004;Cronenwett and Seeger 2005). The regular inclusion of this information in journals will provide important support to the decisions of evaluation committees. At the same time, these data could allow us to carry out more accurate bibliometric studies based on more detailed information, maybe through categorisation of authors' roles or through the construction of more advanced bibliometric indicators which take into account the different roles played by scientists in any given research work.
Acknowledgments This research was supported by an I3P grant by the Spanish CSIC during the initial analysis of the data. The authors are also grateful to Laura Barrios for statistical advice and to the two anonymous reviewers for their valuable comments and suggestions.
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Appendix
See Table 5
Using Biochemistry to Educate Students on the Causal Link between Social Epigenetics and Health Disparities
ABSTRACT Background: While pharmacy education standards require students to recognize social determinants of health (SDOH), there is an opportunity to improve how this is taught in the curriculum. One innovative approach is to educate student pharmacists in a biochemistry course through the integration of topics like epigenetics using SDOH as the framework. Innovation: A 50-minute educational activity was used to supplement material on the regulation of gene expression, in which epigenetic changes are driven by SDOH. It provided students with a biochemical basis to explain some health disparities, rather than viewing them exclusively as social obstacles to health. The activity employed a mini-lecture, a short video, as well as both small and large group discussion. A reflective paper was used to assess students’ understanding of the topic, and the role of the pharmacist in helping patients prevent diseases caused by epigenetic changes due to social determinants of health. Findings: A post-activity survey showed that the activity increased students’ perception of knowledge about SDOH, as well as the effect of epigenetic changes on health outcomes. Furthermore, this activity increased students’ awareness about the role that SDOH play in epigenetic changes and challenged students to understand the role that society plays in health outcomes. Conclusions: The preventable nature of health inequities creates an opportunity to integrate public health into pharmacy education. The integration of epigenetics and SDOH gives the student an opportunity to provide a mechanistic link between social inequities and biochemical processes.
While health inequalities refer to general differences of health between population groups, health inequities are a type of preventable health inequalities that are the result of unjust social policies and practices. This problem of health inequity and the impact of social circumstances on our health will continue to necessitate awareness from health professionals. To address this issue, there have been recent changes in the development and education of future health professionals. 3 The Accreditation Council for Pharmacy Education (ACPE) established standards to help address this gap in the education of pharmacy professionals. The standard requires students to be able to recognize SDOH using cultural sensitivity as a means to address health disparities. 4 Even with this change, there remains a disconnect between the proficiency training of cultural sensitivity, and the action of addressing the issues of health disparities. 5,6 There is a need to continue to refine not only content, but also delivery. As the topic of SDOH and its impact on health disparities are continually addressed, efforts to employ new integrated curricula in different areas are needed. These issues are not solely a social or public health issue, but have implications in basic sciences, such as biochemistry, that need to be explored with health professionals as a means of education. Within the biochemical sciences, the study of epigenetics, or the modification of DNA expression to induce heritable traits, has been studied in the context of SDOH. This crossroads at the intersection of the social and genetic sciences has developed into the new field of social epigenetics. 7,8 One innovative approach is to integrate the concepts of social epigenetics into the education of student pharmacists in a traditional lecture-based biochemistry course, using SDOH as a framework for the discussion. The authors are unaware of an existing curricular model for educating healthcare professionals that explains biochemistry concepts through a social epigenetic lens. This integration of the social sciences, molecular biochemistry, and pharmaceutical sciences provides a novel framework for future pharmacy education.
THE INNOVATION
Epigenetic mechanisms are traditionally covered in biochemistry textbooks as special features involved in the regulation of eukaryotic gene transcription. The classic example used in medical education textbooks involves the effect of epigenetic mechanisms in cancer, as it allows instructors to discuss the use of DNA methyltransferase inhibitors (e.g. azacitidine) and histone deacetylase inhibitors (e.g. vorinostat) as potential antineoplastic agents. The idea of developing cancer by altering gene expression through mechanisms that do not involve changes to the underlying DNA sequence is often fascinating to students.
However, conventional biochemistry education does very little to explain the process of epigenetic changes, the role society plays in this process or how epigenetic changes contribute to health disparities.
Overview of the Approach
In this educational activity, SDOH was used as the framework to educate student pharmacists on epigenetic mechanisms. This approach offered students an opportunity to learn about the interplay among SDOH, epigenetics, and health disparities. The objectives of the educational activity were to: 1) define epigenetics, SDOH, and health disparities, 2) discuss the impact of social determinants on health, 3) assess the impact of epigenetic changes on gene expression, and 4) evaluate the role pharmacists play in helping patients prevent diseases caused by epigenetic changes due to SDOH. The activity was conducted over a 50-minute class period and was used to supplement a previous lecture on the regulation of gene expression in which epigenetic changes were first introduced. The approximate allocation of time was: think-pair-share exercise (5 minutes), instructor-led mini lecture including a 5-minute video-clip (20 minutes), large-group discussion and session debrief (20 minutes), and survey (5 minutes).
Educational Activities
First year student pharmacists registered for a biochemistry course at Belmont University College of Pharmacy over the course of three separate years were included in the evaluation of this educational session (n=238). The educational activity started with a think-pair-share exercise to have students discuss the following: 1) Are epigenetic changes reversible? 2) Could epigenetic changes alter gene expression patterns permanently? and 3) Explain how epigenetic changes may influence gene expression. A mini lecture followed consisting of background information and examples of epigenetic changes (focusing on DNA methylation), a definition and examples of SDOH, and research data from selected recent studies that show examples of the effect of epigenetic changes on health outcomes linked to SDOH. The video-clip titled: "Epigenetics: Nature vs Nurture" 9 was used to supplement an assigned course reading to discuss the rat study conducted by Weaver et al. in which the investigators measure the effect of maternal stress on offspring epigenetic changes. 10 The importance of this study lies in the fact that it challenged the notion of health outcomes merely driven by human genetics, but instead considers the effect of social epigenetics as well. A discussion on epigenetic inheritance as the biological basis of SDOH followed to discuss the effect of epigenetic changes through generations (e.g. mother, fetus, reproductive cells).
The large group discussion started with the notion that epigenetics reveals how the choices patients make can change their genes and those of their kids. The discussion challenged students to reflect on the idea of "choice" versus "fate" and its implications on SDOH. Student pharmacists were asked to reflect on the examples discussed in class about cardiovascular disease in racial/ethnic minorities, breast cancer in black women, head and neck cancer in Blacks and Latinos, obesity, and pre-mature aging. A specific example used in class of the interaction between epigenetics and SDOH involves the DNA methylation status of the CDH13 promoter. CDH13 (cadherin 13) is an important tumor suppressor in breast cancer patients that has been shown to exhibit significantly differential methylation status in African American women compared to white American women. These altered epigenetic events relevant to racial disparity consequently result in higher rates of cancer development, poor outcome and worse overall survival ( Figure 1). 11 This example highlights a combination of conventional epigenetics in conjunction with a social epigenetics approach. The potential educational outcome of this innovative approach is to allow students to not only recognize the role of epigenetics in cancer development, but also to identify the impact of social determinants on cancer prognosis and outcomes. Moreover, students were asked to reflect on SDOH that have the potential to negatively impact individuals. Specifically, they looked at socioeconomic status, neighborhood, physical environment, education, access to healthy food, access to health care, as well as community and social context.
At the end of the session, students were asked to complete an anonymous post-activity survey, including basic demographics, using Qualtrics® (Qualtrics Labs Inc., Provo, UT). The survey asked questions regarding perception of knowledge and awareness of the causal link between social epigenetics and health disparities, in addition to assessing their satisfaction with the activity. The survey received an exemption from the Institutional Review Board.
A reflective paper was used to assess students' understanding of the topic. Students were allowed 24 hours to answer the following question: "As a future pharmacist, what do you foresee is your role in helping patients prevent diseases that are caused by epigenetic changes due to SDOH?" The paper was graded based on a rubric for: content and focus (focuses reflection on thoughts and conclusions based on course concepts [20%]), depth of reflection (demonstrates thorough understanding of concepts and theories discussed in class [30%]), and analysis (identifies specific examples applicable to pharmacy practice [50%]). The grading rubric was given to the students as part of the assignment directions. The reflection contributed to 25% of the in-class activities grade. There were four in-class activities in the course, which collectively contributed to 6.25% of the final course grade.
FINDINGS AND CRITICAL ANALYSIS
A total of 238 students completed the survey for a participation rate of 100%. Students were asked to rate their knowledge about SDOH before and after completing the educational activity using a 5-point Likert scale (1 = strongly disagree and 5 = strongly agree). Student self-rated knowledge statistically increased after completion of the activity (2.86 ± 1.04 vs. 4.36 ± 0.58; p < 0.0001 by t-test). Furthermore, students were asked to rate their knowledge about the effect of epigenetic changes on health outcomes using the same 5-point Likert scale. Student self-rated knowledge statistically increased after completion of the activity (2.44 ± 1.11 vs. 4.22 ± 0.65; p < 0.0001 by t-test).
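As a minimal illustration of the comparison reported above (the real analysis used the full survey responses), the sketch below runs a two-sample t-test on simulated pre- and post-activity Likert ratings; the simulated data, sample generation and seed are placeholders.

```python
# Minimal sketch: two-sample t-test on hypothetical 5-point Likert ratings before
# and after the activity (illustrative only; not the actual survey data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = np.clip(np.round(rng.normal(2.86, 1.04, 238)), 1, 5)   # target mean ~2.86
post = np.clip(np.round(rng.normal(4.36, 0.58, 238)), 1, 5)  # target mean ~4.36

t_stat, p_value = stats.ttest_ind(post, pre)
print(f"pre: {pre.mean():.2f} +/- {pre.std():.2f}, post: {post.mean():.2f} +/- {post.std():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```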
The discussion and debrief session allowed for constructive dialogue about current challenges faced by pharmacists as they try to serve patients who belong to populations affected by health inequities. The discussion was mainly driven by students' life experiences and interests. While the discussion with students has varied to some extent over the past three years, the discussion often revolves around rural versus urban, racial/ethnic identities, socioeconomic status, immigration status, and sexual orientation-related health disparities.
Students' evaluation of the educational activity was positive (Table 1). A qualitative approach was used to evaluate the reflection papers. Themes related to the pharmacist's role in helping patients prevent diseases caused by epigenetic changes due to SDOH were identified and their frequency was determined.
Most papers explored specific interventions with a particular patient population that could be spearheaded by pharmacists practicing in a community setting. The average score over three separate years of instruction was 8.9/10 with a range of scores between 6 and 10. Deductions were made because of a lack of depth of reflection (27.3% of the papers failing to demonstrate full understanding of the main concepts and 18.2% of the papers failing to use learned theory to justify their statements) and analysis (29.5% of students failing to provide a specific example spearheaded by pharmacists).

A key issue in implementation is the training and/or confidence of the biochemistry instructor to address SDOH and health disparities. This can be overcome by having a faculty member from the social sciences co-facilitate the activity. This type of approach to teaching epigenetics requires full commitment from the biochemistry instructor and the pharmacy program to provide resources for faculty development in the area of social epigenetics. Furthermore, there is the possibility that some students may be triggered by the comments and/or topics being discussed, due to the nature of health inequities. Similarly, students might feel as if this is a one-time isolated course event. It is important for the instructor to include others involved in delivery of related content to ensure students' knowledge is reinforced in the curriculum during subsequent years of training.
While this activity proved to be effective in educating student pharmacists on SDOH and biochemical concepts, there needs to be a continued effort of integrating this material into the pharmacy curriculum as a whole. Continued blending of concepts like SDOH and genetics can be applied to other topic areas in biochemistry, such as metabolic changes and certain disease states. Building a multifaceted course curriculum is challenging, yet provides an opportunity to teach different topics through varying viewpoints, which leads to a more complex and comprehensive understanding of foundational knowledge. Expanding upon this single activity through future classroom discussions in the social sciences (healthcare delivery, communications, and ethics), as well as in the clinical sciences (therapeutics and case studies) of pharmacy would be beneficial.
CONCLUSIONS
While the effects of social factors continue to be a focal point when educating student pharmacists on health disparities, an emphasis on the biochemical determinants of cause and health outcomes lags behind. As the role of pharmacy continues to expand, including helping patients with health promotion and disease prevention, so does the role of pharmacy educators to find more innovative teaching approaches that address SDOH. 5 The preventable nature of health inequities creates an opportunity to integrate public health into pharmacy education. There is a call to action to increase the level of pharmacist involvement in public health efforts. 5 As a result, it is imperative that programs find innovative ways to include these topics in pharmacy curricula. The integration of epigenetics and SDOH gives the student an opportunity to critically recognize how social determinants disproportionately influence health outcomes. 11 Furthermore, it encourages students to identify optimal interventions tailored to specific patient populations through counseling and advocacy ( Figure 1). The theoretical underpinning to this public health educational approach is to explore how social epigenetics might reveal the biochemical mechanisms underlying health disparities. 12 The benefit of this teaching approach is that students are able to provide a mechanistic link between social inequities and epigenetic biochemical processes. It allows students to focus on equitable health for all patients rather than focusing on a one-size-fits-all approach to disease prevention and treatment.
Just as the healthcare landscape is evolving and adapting continually, so must the ways in which we present topics to students. With the increasing amount of information new pharmacy professionals must show proficiency in upon graduation, there must be innovative ways of education that layer various topics to enhance learning. As demonstrated through positive evaluations, this educational activity proved to be an effective way of doing so.

[Table 1, excerpt: "The activity challenged me to understand the role that society plays in health outcomes" 4.58 ± 0.58; "I would recommend this in-class activity to other student pharmacists."]

[Figure 1: comparison of the conventional and the social epigenetics approaches to biochemistry education. Conventional approach, educational outcome: recognize the role of gene expression due to DNA modification rather than DNA mutations in cancer development and progression. Social epigenetics approach, educational outcome: identify health inequities and explicitly acknowledge the impact of social determinants on cancer development and progression; potential outcome: educate students on the multifaceted effects of social determinants of health in an effort to identify optimal interventions tailored to specific patient populations through counseling and advocacy. C = cytosine; Me = DNA methylation.]
Time-efficient simulations of fighter aircraft weapon bay
A cavity flow exhibits aero-acoustic coupling between the separated shear layer and reflecting waves within the walls of the cavity, which leads to the emergence of dominant modes. It is of primary importance that this flow mechanism inside the cavity is understood to provide insights and control the relevant parameters and that it can be properly predicted using state-of-the-art CFD tools. In this study, an open-cavity configuration with doors attached on the sides and a length-to-depth ratio of 5.7 has been studied numerically using the TAU code developed by the German Aerospace Center for transonic flows with three simulation methods: DES with wall functions, and SST-SAS with either resolved wall flow or wall function techniques. The free-stream conditions investigated are Mach number (Ma) 0.8 with Reynolds number (Re) 12 × 10^6. The Rossiter modes occurring in the cavity due to the acoustic feedback mechanism have been numerically computed and validated. The SST-SAS model is around 90% more computationally efficient compared to the hybrid RANS-LES model, providing excellent accuracy in predicting the Rossiter modes. The SST-SAS model with wall functions is 50% more computationally efficient than wall-resolving SAS simulations, showing good behaviour in predicting modal frequencies and shapes, with further scope for improvement in the spectral magnitude levels.
Introduction
Historically, research of cavity flows has been done by aerospace companies, specifically for weapon bays. A cavity flow presents a complex unsteady flow and acoustic processes due to the shedding of separated shear layer from the front edge of the cavity. This causes severe limitations for operating weapon bays and landing gears, etc. In weapon bays, the deployment of stores could be improved by controlling the flow mechanisms existing inside the cavity which requires a fundamental understanding of cavity flow physics. Additionally at present, due to the requirement of stealth characteristics of the aircraft, the investigation of cavity flows has become even more crucial.
In general, there are closed, transitional and open cavity flow types [1]. The categorization of the cavity flow type can be based on several factors such as the length-to-depth ratio (L/D) and Mach number (Ma). In the closed cavity configuration, the shear layer flow separates from the front edge of the cavity, loses its energy, and reattaches to the cavity before separating again. There exist two large-scale recirculations at either corner of the cavity. In the open cavity flow type, there is one large-scale recirculation caused by the shear layer, which carries enough energy to cross the length of the cavity. The shear layer in the open-cavity flow type impinges on the rear wall, which then acts as an acoustic source to initiate sustained flow oscillations inside the cavity. In the transitional cavity, the flow reattachment to the ceiling of the cavity is unstable.
There are many articles in the literature that show the effort to investigate the modal tones produced in the cavity. One explanation that is accepted by many authors is the Rossiter flow oscillation model (Eq. 1). Rossiter [2] postulated a semi-empirical model to estimate the dominant modal frequencies produced in the cavity. The model is based on the observation that the downstream convection of vortices from the shear layer generates aerodynamic disturbances, which are then excited by the pressure waves reflected from the rear wall that are produced by the shedding shear layer. This feedback process continues and leads to a self-sustained oscillation process.

Much of the cavity flow research has been experimental. The study by Rossiter [2] was one of the foremost studies that provided a solid understanding of the physics-based acoustic-flow dynamic interaction. The model by Rossiter (Eq. 1) is still widely used to predict the modes, particularly in subsonic and transonic flow conditions. However, the model has shown some inaccuracies in supersonic flow conditions. Heller et al. [3] improved the Rossiter model by assuming that the speed of sound inside the cavity is equal to that in the stagnating freestream, to extend its validity to supersonic flows as well. Handa et al. [4] have studied the generation and propagation of pressure waves experimentally and showed their periodic nature. The study explains the process by which the pressure waves are generated. It attempts to clarify the relationship between the shear-layer motion, pressure-wave generation, and the pressure oscillation inside the cavity.
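For reference, the original Rossiter relation for the modal frequencies, to which Eq. 1 refers, is usually written in the following standard form (the exact constants used by individual authors may vary slightly):

\[ f_m = \frac{U_\infty}{L}\,\frac{m - \alpha}{M_\infty + \dfrac{1}{\kappa}}, \qquad m = 1, 2, 3, \ldots \]

where m is the mode number, L the cavity length, U_∞ and M_∞ the free-stream velocity and Mach number, α an empirical phase-lag constant (typically around 0.25) and κ the ratio of the disturbance convection speed to the free-stream velocity (typically around 0.57).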
After extensive wind-tunnel studies on the cavity, several attempts have been made to study the cavity flows numerically, mostly on the M219 cavity [5] to describe the flow physics inside the cavity and accordingly determine the resonant modes. Henderson et al. [6] carried out timeaccurate simulations using the RANS k-model and it has been shown that the models are unable to predict the broadband spectra. To determine both the narrowband and broadband spectra accurately, advanced turbulence resolving methods such as DES methods based on Spalart-Allmaras (SA) and k-SST have been used [7].
Many of the numerical studies pertaining to cavity flows were based on the URANS approach. Chang et al. [8] studied 3D incompressible flow past an open cavity using the SA model. Although the predictions of the mean velocity field from the URANS and the scale-resolving simulation were similar, the study found that the URANS predictions show poor agreement with LES and experimental results for the turbulent quantities. ( Woo et al. [9] studied the three-dimensional effects of supersonic cavity flow due to the variation of cavity aspect and width ratios using the RANS k-turbulence model. The compressible NS equations were solved with the fourthorder Runge-Kutta method and FVS method with van Leer's flux limiter. The study concluded the oscillation mode 2 appeared as a dominant oscillation frequency regardless of the aspect ratio of the cavity in the two-dimensional flow and oscillation mode 1 and 2 appeared in three-dimensional cavities of small aspect ratios. With the increase in the aspect or the width ratios, only the modes 2 or 3 appeared as a dominant frequency.
It is understood that due to the nature of the URANS formulation, the method has an inherent inability to detect modes accurately. Therefore, a number of studies have been dedicated to scale-resolving turbulence models such as LES. The study by Larcheveque et al. [10] shows the accuracy of employing LES or DES methods for a 3D cavity case where doors are present and aligned vertically.
DES simulations are still expensive, whereas the scale-adaptive simulation (SAS) approach developed by Menter [11] has been shown to provide results nearly as good as DES or LES. Girimaji et al. [12] evaluated the scale-adaptive simulation of M219 cavity flows for transonic flow conditions. The SAS results showed good agreement with the experimental data for the M219 cavity at a tenth of the time required for Detached-Eddy simulations. As SAS simulations are still quite expensive for use in the industrial design process, a further reduction in the computational time is sought. Therefore, in the pursuit of reducing the computational effort for cavity simulations, the SAS simulations have been investigated using the wall function technique in this study and its merits and demerits have been outlined in terms of acoustic prediction.
In this study, a novel open-cavity configuration with opened doors at the sides [13] has been investigated numerically at transonic flow conditions using two scale-resolving methods, namely a hybrid RANS-LES method with wall functions (DES-WF) and scale-adaptive simulation (SAS). The scale-adaptive simulation was also carried out using the wall function technique (SAS-WF) to investigate the feasibility of simulating the cavity flows. The numerical simulations have been performed using the DLR-TAU computational fluid dynamics (CFD) code [14] under the flow conditions of Ma 0.8 and Re 12 × 10^6. The numerically computed RMS values and wall spectra have been validated against the experimental data, which have been made available for this study by Airbus Defence and Space [13].
Model configuration and mesh generation
The cavity configuration used in this study has a length-to-depth ratio (L/D) of 5.7 and a length-to-width ratio (L/W) of 4.16 (see Fig. 1). The cavity is cut into a flat side along the centre line of the cavity rig at a certain distance from the rig's sharp leading edge. Under the transonic flow condition of Ma = 0.8 with a flow Reynolds number of 12 × 10^6, the cavity is expected to exhibit resonance. The doors are placed on either side of the cavity, with positive Z pointing into the cavity. The probe locations are placed at equidistant positions along the cavity ceiling and are named L1 to L8, as shown in Fig. 1.
The cavity geometry has been spatially discretized with a mesh consisting of unstructured elements with tetrahedral, prism, hexahedral, and pyramid cells. A RANS model was used to estimate the integral length scale, and according to the estimates, cell sizes have been chosen appropriately during the meshing stage. Figure 2 shows the DES-WF mesh that is used for this study. In Fig. 2a, the overview of the mesh distribution is shown. As the motivation of the study is to investigate the flow mechanisms as efficiently as possible in terms of computational time, only the region of interest has been meshed with the highest level of refinement, as shown in Fig. 2. By adapting the mesh proportion inside the cavity, as shown in Fig. 2b, one can save a significant number of mesh nodes. In the cavity geometry, a 50% reduction in the number of prism cells has the potential to reduce the total number of mesh nodes by almost 40%, which suggests that one can save a significant amount of computational time by reducing the number of prism layers while adopting a wall function technique. The model has been meshed in half and mirrored about the symmetry axis to avoid asymmetric grid effects. The DES-WF mesh is composed of 12.5 × 10^6 grid nodes. In the SAS mesh, the cell size in the shear layer and inside the cavity is almost double the cell size of the DES-WF mesh, and the mesh contains around 5 × 10^6 grid nodes. The number of mesh nodes in the SAS-WF mesh is about half of that of the SAS mesh. Moreover, a non-dimensional wall coordinate (y+) of more than 100 has been set along the walls of the cavity for the DES-WF and SAS-WF cases.
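The first-cell height needed to hit a target y+ can be estimated from a flat-plate skin-friction correlation, as in the short sketch below. The free-stream properties and reference length are placeholder values chosen only for illustration, not those of the actual cavity rig or simulations.

```python
# Rough estimate of the first-cell height for a target y+ value using a flat-plate
# skin-friction correlation. All input values are placeholders for illustration.
import math

rho = 1.0            # density [kg/m^3] (placeholder)
U = 270.0            # free-stream velocity [m/s] (placeholder, roughly Ma 0.8 at sea level)
mu = 1.8e-5          # dynamic viscosity [Pa s]
L_ref = 0.8          # reference length [m] (placeholder)
y_plus_target = 100.0

Re = rho * U * L_ref / mu
cf = 0.026 / Re ** (1.0 / 7.0)           # empirical flat-plate skin-friction correlation
tau_w = 0.5 * cf * rho * U ** 2          # wall shear stress
u_tau = math.sqrt(tau_w / rho)           # friction velocity
y1 = y_plus_target * mu / (rho * u_tau)  # first-cell height for the target y+

print(f"Re = {Re:.3e}, u_tau = {u_tau:.2f} m/s, first-cell height = {y1 * 1e3:.3f} mm")
```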
Flow solver and turbulence modelling
The numerical simulations have been carried out in this study using the DLR-TAU code, a finite volume (FV) flow solver based on the compressible Navier-Stokes formulation developed by the German Aerospace Center (DLR) [14]. The RANS approach that is popular in industry for turbulence modelling loses many of the intricate details of the flow field when it is applied to unsteady cavity flow simulations. An alternative is an LES-based model, which resolves the major part of the energy-carrying eddies and models the isotropic sub-grid scales [15]. As the cavity configuration has a boundary layer developed on the cavity rig upstream of the cavity, which then leads to the shear layer formation, an LES model requires an uncompromised resolution of the boundary layer and a substantial computing time for this application. Therefore, the following numerical approaches for turbulence modelling are employed in this study.
Hybrid RANS-LES approach
In the author's previous work [16], the hybrid RANS-LES approach with a wall-resolved technique was investigated and the first promising results for this cavity configuration were published. In the present study, some of the numerical settings used in the previous study [16] have been optimised, and the DES part of this study differs from the author's earlier work by using matrix dissipation and by adopting the wall function approach for the cavity flow. By applying matrix dissipation in this study, the artificial dissipation is reduced to prevent excessive damping of the resolved turbulent structures.
The SA-neg-IDDES model [17] is based on the standard one-equation Spalart-Allmaras (SA) model, which solves a transport equation for the eddy viscosity (Eq. 2) [18] containing a production term P and a destruction term D. This model represents the standard SA model, except that the length scale d in the destruction term is modified. In the SA model, d is the distance to the nearest wall. In the IDDES model [19] that is used in this study, d is replaced with a modified length scale d̃, defined in terms of the grid scale Δ = max(Δx, Δy, Δz) and the shielding function f_d, which is designed to be unity in the LES region and zero elsewhere.
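For orientation, the standard (trip-free) textbook form of the SA transport equation, its production and destruction terms, and a shielded DES-type length-scale substitution can be written as below. This is only the generic form; the exact variant and calibration used in [17, 19] may differ, and the full IDDES length-scale blending is more elaborate than the simple shielded expression shown here.

\[ \frac{D\tilde{\nu}}{Dt} = P - D + \frac{1}{\sigma}\Big[\nabla\cdot\big((\nu+\tilde{\nu})\nabla\tilde{\nu}\big) + c_{b2}\,(\nabla\tilde{\nu})^2\Big], \qquad P = c_{b1}\,\tilde{S}\,\tilde{\nu}, \qquad D = c_{w1}\,f_w\left(\frac{\tilde{\nu}}{d}\right)^2, \]

\[ \tilde{d} = d - f_d\,\max\big(0,\; d - C_{DES}\,\Delta\big), \qquad \Delta = \max(\Delta x, \Delta y, \Delta z). \]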
The SA-neg model is the same as the "standard" version when the turbulence variable ν̃ is greater than or equal to zero. When the kinematic eddy viscosity would become negative, Eq. 2 is modified such that the turbulent eddy viscosity in the momentum and energy equations is set to zero [20].
Scale-adaptive approach
To resolve turbulence only where significant fluctuations exist, the work by Menter et al. [21] suggested a modified turbulence model which adds a source term Q_SAS, based on the local von Karman length scale L_vK, into the dissipation rate equation. This scale-resolving technique has been used in this study with the standard k-ω SST model [22] as the base model in conjunction with the wall function technique. The governing equations of the SST-SAS model differ from those of the k-ω SST model by the additional source term Q_SAS. Unlike LES, the approach also remains well defined if the mesh cells get coarser and do not allow resolving scales well within the inertial range. This makes it attractive in the present application, where the aero-acoustic effects are mostly affected by larger turbulent scales which, in turn, need to be predicted accurately.
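The von Karman length scale entering Q_SAS is, in Menter and Egorov's formulation, built from the ratio of the first to the second velocity derivatives; written here in its one-dimensional boundary-layer form for clarity (the general 3D form uses the magnitudes of the strain rate and of the velocity Laplacian):

\[ L_{vK} = \kappa\,\left|\frac{\partial U/\partial y}{\partial^2 U/\partial y^2}\right|. \]

Q_SAS essentially grows with the ratio of the modelled turbulence length scale to L_vK, which allows the model to reduce the eddy viscosity and let resolved structures develop wherever the flow is sufficiently unsteady.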
Wall functions approach
There are two classical wall boundary conditions, namely low-Re and high-Re type boundary conditions. The low-Re boundary condition imposes no-slip at the wall and requires a finer mesh. The aim of grid-independent wall functions is to provide a boundary condition at solid walls that enables flow solutions independent of the location of the first grid node above the wall. In this study, wall functions based on the universal law of the wall are employed for the DES-WF and SAS-WF simulations, whereas the low-Re boundary condition is used for the SAS simulation. The RANS equations are solved only down to the first grid node above the wall and are matched with an adaptive wall function solution at the first grid node above the wall. The matching condition (Eq. 6) makes sure that the wall-parallel components of the RANS solution and the wall function are equal at the wall distance y of the first grid node; it is solved for the friction velocity u_τ using Newton's method, and the resulting shear stress is then prescribed at the wall node.

In all the simulations, a second-order central scheme and a backward Euler scheme have been used to discretize the spatial flux and temporal terms, respectively. The time step size has been chosen such that the convective Courant-Friedrichs-Lewy (CFL) number is well below 1.0.
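As an illustration of this matching step (a sketch of the general idea only, not the adaptive wall-function formulation implemented in the TAU code), the snippet below solves the logarithmic law of the wall for the friction velocity u_τ with Newton's method, given the tangential velocity at the first grid node. The numerical input values are placeholders.

```python
# Sketch: solve the log-law matching condition  u/u_tau = (1/kappa) ln(y u_tau / nu) + B
# for the friction velocity u_tau with Newton's method. Illustrative only.
import math

def friction_velocity(u_p, y_p, nu, kappa=0.41, B=5.2, tol=1e-10, max_iter=50):
    """Return u_tau such that the log law matches the velocity u_p at wall distance y_p."""
    u_tau = max(1e-6, 0.05 * u_p)                            # initial guess
    for _ in range(max_iter):
        y_plus = y_p * u_tau / nu
        f = u_tau * (math.log(y_plus) / kappa + B) - u_p     # residual of the matching condition
        dfdu = math.log(y_plus) / kappa + B + 1.0 / kappa    # d f / d u_tau
        du = -f / dfdu
        u_tau += du
        if abs(du) < tol * u_tau:
            break
    return u_tau

# Example with placeholder first-node values (not taken from the actual simulation)
u_tau = friction_velocity(u_p=150.0, y_p=2.0e-4, nu=1.5e-5)
tau_w = 1.0 * u_tau ** 2                                     # wall shear stress for unit density
print(f"u_tau = {u_tau:.3f} m/s, tau_w = {tau_w:.2f} Pa")
```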
Results and discussion
In this section, the experimental data will be first analysed and the effect of FFT window length will be determined to support the validation of the numerical simulations. Moreover, this section presents the physics of cavity flows including the validation of DES-WF, SAS, and SAS-WF simulations. The commonalities and differences between the used simulation methodologies will be highlighted and appropriate findings will be outlined.
Acoustic spectral analysis
In this subsection, the Rossiter modes are first estimated using the semi-empirical model; this is then followed by the flow statistics with respect to the prediction of the resonant modes by the different simulation methodologies and the RMS pressure.
Theoretical estimation of Rossiter modes
The pressure signal varying over the time series has been extracted along the cavity ceiling and transformed into frequency space to investigate the modes existing in the signal. Analytically, the frequencies at which the Rossiter modes occur can be computed through Heller's modified Rossiter oscillation model (Eq. 7) [3],

f_m = (U_∞/L) (m − α) / [ M_∞ / √(1 + (γ−1)/2 M_∞²) + 1/κ ],

where m is the mode number, L the cavity length, α the phase-lag constant, and κ the ratio of the disturbance convection speed in the shear layer to the free-stream velocity.
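As a quick cross-check of such tabulated modes, the formula can be evaluated directly. The minimal sketch below does this; the cavity length, free-stream velocity, and the empirical constants α and κ are illustrative assumptions and not the values of the configuration studied here.

```python
import numpy as np

def rossiter_frequencies(modes, u_inf, length, mach, gamma=1.4, alpha=0.25, kappa=0.57):
    """Heller's modified Rossiter model: f_m = (U/L) * (m - alpha) /
    (Ma / sqrt(1 + 0.5*(gamma - 1)*Ma**2) + 1/kappa)."""
    m = np.asarray(modes, dtype=float)
    denom = mach / np.sqrt(1.0 + 0.5 * (gamma - 1.0) * mach ** 2) + 1.0 / kappa
    return (m - alpha) / denom * u_inf / length

# Illustrative values only (not the geometry of the present configuration).
print(rossiter_frequencies([1, 2, 3, 4], u_inf=270.0, length=0.6, mach=0.8))
```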
Effect of signal length on the RMS and FFT values
A total of 20.0 s of pressure measurement data have been made available for the validation of the simulation results. Prior to deciding on the length of the time series to be simulated, awareness of the effect of signal length on the FFT and RMS statistics is sought. To estimate this effect, some fundamental analysis of the raw data has been performed. The 20.0 s of measurement data have been split into two groups: one group containing 160 samples of 0.125 s each and the other group containing 40 samples of 0.5 s each. Figure 3a shows the RMS pressure for both sample groups. In the 0.125 s case, the deviation in the RMS values is around 350 Pa near the front edge of the cavity between x/L = 0 and x/L = 0.2, and its value increases along the cavity length, reaching around 500 Pa near x/L = 0.9. In the 0.5 s case, the deviation in the RMS values is around 100 Pa near x/L = 0.2 and increases to 350 Pa towards x/L = 0.9. Figure 3b shows the FFT results of the two sample groups and displays a variation of around 8-9 dB/Hz in the amplitude levels for the 0.125 s case, whereas the variation is around 4-5 dB/Hz for the 0.5 s case. This underlines the importance of selecting an appropriate signal length for the validation of the simulation data.
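This sensitivity to sample length can be reproduced with a few lines of post-processing; the sketch below uses a synthetic signal and an assumed sampling rate purely for illustration.

```python
import numpy as np

def segment_rms(p, fs, seg_len_s):
    """Split the pressure series p (sampling rate fs) into segments of
    seg_len_s seconds and return the RMS of each segment; the spread across
    segments shows how strongly the statistic depends on signal length."""
    n_seg = int(seg_len_s * fs)
    n_use = (p.size // n_seg) * n_seg
    return np.sqrt(np.mean(p[:n_use].reshape(-1, n_seg) ** 2, axis=1))

fs = 10_000.0                                   # assumed sampling rate [Hz]
t = np.arange(0.0, 20.0, 1.0 / fs)
p = 500.0 * np.sin(2 * np.pi * 300.0 * t) + 100.0 * np.random.randn(t.size)
for seg in (0.125, 0.5):
    rms = segment_rms(p, fs, seg)
    print(f"{seg} s windows: RMS spread = {rms.max() - rms.min():.1f} Pa")
```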
Performance in terms of acoustic prediction
A fast Fourier transform (FFT) has been performed based on Welch's method to decompose the pressure fluctuations into their frequency components. The input for the FFT has been the pressure data collected at over 1000 locations on the mid-plane of the cavity. From all these locations, the amplitudes of the first four resonance modes were identified and interpolated on the mid-plane to visualise the shape of the modes inside the cavity. The results can be seen in Fig. 4, showing the SPL distribution of the Rossiter modes 1, 2, 3, and 4 on the plane y = 0. Rossiter mode 1 has a node in the center of the cavity and anti-nodes at both ends, and its front part is significantly overlaid by the shear layer, which suppresses the mode with its own frequency. The higher order Rossiter modes 2, 3, and 4 correspond to the standing waves resulting from the organised vortical structures between the front and rear wall of the cavity. It is also observed that the lip of the cavity is overlaid by the shear layer in all the modes. Figure 5 shows the FFT plots of the experimental data and of the DES-WF, SAS, and SAS-WF simulations for the probe locations L1 and L8. For the experimental data, the band formed by the group of 0.5 s samples is shown. Due to the expensive nature of the DES-WF simulation, the length of the series simulated in the DES-WF is 0.15 s, whereas the length of the series simulated in both the SAS and SAS-WF simulations is 0.25 s. The time series of the simulations have been processed for the FFT analysis using the Hamming window function, with the maximum offset length of the FFT windows equivalent to the integral time scale computed through the autocorrelation function. The lowest frequency that the simulated data can resolve is kept around 40 Hz for all the simulations.
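A compact sketch of such a spectral analysis with Welch's method and a Hamming window is given below; the sampling rate, window length, and synthetic signal are assumptions for illustration, not the actual probe data.

```python
import numpy as np
from scipy import signal

def pressure_spectrum(p, fs, window_s):
    """Welch power spectral density of a pressure signal with a Hamming
    window, converted to SPL density in dB/Hz re 20 micro-Pa."""
    nperseg = int(window_s * fs)
    f, pxx = signal.welch(p, fs=fs, window="hamming",
                          nperseg=nperseg, noverlap=nperseg // 2)
    return f, 10.0 * np.log10(pxx / (20.0e-6) ** 2)

fs = 10_000.0                                      # assumed sampling rate [Hz]
t = np.arange(0.0, 0.25, 1.0 / fs)                 # 0.25 s, as in the SAS series
p = 800.0 * np.sin(2 * np.pi * 220.0 * t) + 50.0 * np.random.randn(t.size)
f, spl = pressure_spectrum(p, fs, window_s=0.025)  # ~40 Hz frequency resolution
```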
From the experimental data, the dominant modes at the probe location L1 are 1, 2, and 3, whereas the dominant modes at L8 are 2 and 3. At the probe location L1, mode 1 is predicted well by the DES-WF and SAS simulations. Mode 2 is slightly over-predicted by the SAS simulation, whereas mode 3 is slightly under-predicted by the DES-WF simulation. The SAS-WF simulation tends to over-predict the modes, but shows the tendency to capture the frequencies as well as the DES-WF and SAS simulations.
As the pressure fluctuations are higher near the rear wall, it is worthwhile to analyse the performance of the simulations with the FFT of the probe location L8. It is clearly seen that the SPL levels are generally higher at the probe location L8 than at the probe location L1. At the probe location L8, mode 1 is captured quite well by the SAS simulation as compared to the DES-WF and SAS-WF simulations. The modes 2, 3, and 4 are captured adequately well by the DES-WF and SAS simulations. Although the DES-WF results lie slightly outside the experimental range formed by the 0.5 s samples, they show reasonable agreement in capturing the modes considering that the length of the series simulated is 0.15 s. Table 1 shows the frequencies of the modes computed from the modified Rossiter model and the measured data together with the SAS simulation results. In the SAS simulations, the predicted modes fit extremely well with the frequencies of the experimental data and also with the theoretical modes, as shown in Table 1. At the probe location L1, the magnitude of the dominant mode is predicted well, with a slight over-prediction of mode 2. At the probe location L8, all the modes are predicted exceptionally well in terms of relative magnitudes, absolute magnitudes, and frequencies. Considering that severe pressure fluctuations with higher SPL levels exist towards the rear wall, the prediction of modes at the probe location L8 is extremely sensitive, and the fact that the SAS simulation has captured the features of this location shows the reliability of the SAS simulation results.
In the SAS-WF simulation, the prediction of the Rossiter frequencies is still quite good. It is observed that there is an offset of 3-4 dB/Hz when compared with the SAS simulation results. Considering that the SAS-WF simulations are about 50% computationally cheaper than the SAS simulations, the results seem quite promising.
To summarise the results, it is observed that the overall behaviour of all simulations is extremely good in terms of frequency prediction. However, the magnitude levels between the simulations show a noticeable difference. In particular, the SAS simulations fit the magnitude levels as well as the DES-WF simulations. The SAS-WF simulations show some good trends in predicting the modal frequencies and shapes, with scope for improvement in their prediction capability. Figure 6 shows the plot of the root mean square (RMS) of pressure along the centerline of the cavity ceiling compared with the measured data. In the DES-WF simulation, the predicted RMS of pressure fits the experimental data extremely well. In the SAS simulation, the predicted values fit the experimental data within the first 30% of the cavity length, are over-predicted in the middle region, and are captured reasonably well towards the rear portion. The reason for the over-prediction is related to the delayed prediction of the resolved structures in the shear layer (see Fig. 7). The activation of the Q_SAS term has been delayed and thereby the shear layer breakdown prediction shows a different behaviour than in the DES-WF simulation. This delayed prediction of the shear layer has the consequent effect of higher fluctuations over the midsection of the cavity. In the SAS-WF simulation, the RMS profile follows the trend of the DES-WF simulation quite well but over-predicts the values significantly towards the regions of higher pressure RMS. The over-predicting behaviour of SAS-WF is also related to the distribution of the resolved turbulent kinetic energy, as shown in Fig. 7. The shear layer breakup has been considerably delayed compared to both the DES-WF and SAS simulations, and clearly, this has increased the scale of the fluctuations by a significant margin in the second half of the cavity.
Flow field visualisation
In this section, some of the flow features of the cavity, such as the resolution of the turbulent structures, boundary layer profile, and turbulent kinetic energy profile, are investigated and the performance of the simulations in terms of acoustic levels is discussed.
Visualisation of flow structures
To visualise the structures in the cavity configuration, the Q-criterion has been computed and is shown in Fig. 8a. The attached boundary layer upstream of the cavity separates from the front edge and starts to shed vortices of varying scales. The vortical structures combine with other structures during their lifetime as they are convected downstream. After impinging on the rear wall of the cavity, some of the flow structures are ejected and travel downstream, while others travel upstream. Figure 8b shows the highly turbulent behaviour at the downstream corner of the cavity, with the flow redirecting from the rear wall and interacting with the oncoming shear flow components. Figure 8b also shows a characteristic feature of an open-cavity configuration: the shear layer starts developing from the front edge as a narrow region in the transverse direction, grows in the streamwise direction reaching its maximum width near the center of the cavity, and reduces in width as it approaches the rear edge of the cavity. The prediction of the shear layer width by the simulations is more clearly visible in the distribution of the resolved turbulent fluctuations in the streamwise and crosswise directions, u′w′, which is shown in Fig. 9. Furthermore, the distribution is such that the streamwise and crosswise velocity fluctuations are more intense towards the aft wall in the SAS-WF simulation than in the SAS simulation, while having their maximum inside the core of the shear layer. Figure 10 presents the flow-field resolving capability of the DES-WF, SAS, and SAS-WF simulations inside the cavity by showing the vorticity magnitude in the plane y = 0. One can expect the flow-field resolution of the DES-WF simulation to be high, and this is indeed the case, as seen in Fig. 10a, which is used as a reference to investigate the resolving capability of the other turbulence models. Figure 10b shows the corresponding snapshot from the SAS simulation. Figure 10c shows the flow-field snapshot from SAS-WF; the fine-scale structures are clearly less pronounced than in the SAS simulation. The wall functions upstream of the cavity did not produce resolved structures, and this has led to the difference in the resolving capability of this variant.
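For reference, the Q-criterion used for this visualisation can be evaluated from the velocity-gradient tensor as in the short sketch below; the random input array simply stands in for gradients that would normally be extracted from the CFD solution.

```python
import numpy as np

def q_criterion(grad_u):
    """Q-criterion from velocity-gradient tensors grad_u of shape (..., 3, 3):
    Q = 0.5 * (||Omega||^2 - ||S||^2), with S and Omega the symmetric and
    antisymmetric parts of grad_u. Q > 0 marks rotation-dominated regions,
    i.e., the vortical structures rendered as iso-surfaces."""
    S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))
    Omega = 0.5 * (grad_u - np.swapaxes(grad_u, -1, -2))
    return 0.5 * (np.sum(Omega ** 2, axis=(-2, -1)) - np.sum(S ** 2, axis=(-2, -1)))

grad_u = np.random.randn(1000, 3, 3)   # placeholder for interpolated gradients
vortical = q_criterion(grad_u) > 0.0   # cells visualised by a Q iso-surface
```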
L vK prediction between SAS and SAS-WF
To further investigate the difference in the turbulent field resolution between the SAS and SAS-WF simulations, the distribution of the von Karman length scale has been investigated. The only difference between the SAS and SAS-WF meshes is the number of prism layers close to the wall. The SAS mesh has 35 prism layers with a y+ value less than 1.0, whereas the SAS-WF mesh has 10 prism layers with a y+ value greater than 100. Therefore, it is noteworthy to investigate the von Karman length scale L_vK predicted by the SAS and SAS-WF simulations, as shown in Fig. 11, especially close to the walls. The von Karman length scale represents a key element in triggering the model to allow the generation of resolved turbulence in Scale-Adaptive Simulations. As seen in Fig. 11, L_vK is predicted strongly over a larger region in the SAS simulation, whereas in the SAS-WF simulation the region where L_vK is present is more limited. One can evidently observe a difference near the upstream wall of the cavity between the two simulations. The authors believe that the usage of wall functions has caused the SAS model to operate in pure RANS mode near the upstream wall of the cavity, which has led to the difference in the resolving capability of the model inside the cavity between the SAS and SAS-WF simulations. If the model had operated in the resolving mode close to the front edge of the cavity, the SAS-WF could have better predicted the shear layer growth and its breakdown, as observed in the SAS and DES-WF simulations. In Fig. 12, the asymptotic near-wall flow profile at a distance of 0.1L upstream of the cavity is shown. It is noticed that, as a result of the RANS behaviour close to the wall without resolved structures, the thickness of the boundary layer based on the 99% U_∞ measure in the SAS-WF simulation is larger than the thickness predicted by the DES-WF and SAS simulations. The boundary layer developed upstream of the cavity has an important effect on the growth of the shear layer. Most, if not all, of the eddy viscosity contained in the boundary layer is transferred to the shear layer, making it more stable than in the DES-WF and SAS simulations. This thicker shear layer with higher turbulent energy content cannot break down as soon as in the SAS simulation, and the process of shear layer breakdown is thereby delayed, as the shear layer contains most of the energy-carrying eddies and they do not dissipate enough energy. This leads to an over-prediction of the energy levels inside the cavity, as seen in Figs. 5 and 6. Moreover, the predicted shape factor (i.e., the ratio of displacement to momentum thickness) has been determined as 1.24 in the case of the DES-WF simulation at a distance 0.1L upstream of the cavity, at a local Re_x = 2.8 × 10^6. Further, it is observed that, with respect to the DES-WF case, there is a nominal over-prediction of 5-10% in the displacement and momentum thicknesses in the SAS simulation, whereas around 20% over-prediction is found in the case of the SAS-WF simulation, both showing deviations of the shape factor as low as 3% from the DES-WF case. Finally, the 99% thickness for the DES-WF reference case has been found to be 60L_x, which coincides with the SAS prediction. Figure 13 further confirms the presence of more energy inside the cavity by showing the mean turbulent kinetic energy profile for four slices at x/L = 0.19, 0.37, 0.56, and 0.94. It is evident that the turbulent kinetic energy produced by the SAS-WF simulation is higher than in the SAS simulation.
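The boundary-layer integral quantities quoted above (displacement thickness, momentum thickness, and shape factor) can be computed from an extracted near-wall profile as in the sketch below; the incompressible definitions and the power-law test profile are simplifying assumptions made only for illustration.

```python
import numpy as np

def integral_thicknesses(y, u, u_inf):
    """Displacement thickness, momentum thickness, and shape factor H of a
    boundary-layer profile u(y) with edge velocity u_inf (incompressible
    definitions)."""
    r = u / u_inf
    delta_star = np.trapz(1.0 - r, y)      # displacement thickness
    theta = np.trapz(r * (1.0 - r), y)     # momentum thickness
    return delta_star, theta, delta_star / theta

delta = 0.01                                # assumed 99% thickness [m]
y = np.linspace(0.0, delta, 400)
u = 270.0 * (y / delta) ** (1.0 / 7.0)      # 1/7th-power-law test profile
d_star, theta, H = integral_thicknesses(y, u, 270.0)   # H is about 1.29 here
```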
The thicker boundary layer profile in SAS-WF has transferred most of its energy to the shear layer, and therefore, turbulent kinetic energy is maximal at the cavity lip. Further downstream, at locations x∕L = 0.56 and x∕L = 0.94 , more energy is seen transferred inside the cavity in the SAS-WF simulation, which leads to higher pressure fluctuations in SAS-WF as seen in Fig. 6.
Conclusion and outlook
In this study, a novel cavity configuration with sidewise doors has been studied numerically with three simulation methodologies, namely DES with wall functions (DES-WF) and SAS both wall-resolved and with wall functions (SAS and SAS-WF), under the transonic flow conditions of Ma = 0.8 and Re = 12 × 10^6. The correlation of the Rossiter modes with the flow processes has been identified in detail through the FFT of the DES-WF simulation results. It has been shown that all three simulation methodologies can capture the Rossiter frequencies well, with a marginal over-prediction of the spectral magnitudes by the SAS-WF simulation. The reason for the over-prediction behaviour in the SAS-WF simulation has been investigated with the boundary layer profile and the resolved fluctuations inside the cavity. The commonalities and differences between the SAS and SAS-WF simulations were investigated and outlined using the von Karman length scale and the vorticity fields. In terms of computational cost, the DES-WF simulation is estimated to be around 50% cheaper than the wall-integrated DES simulation, the SAS simulation is estimated to be 90% faster than DES simulations, and the SAS-WF simulation is twice as fast as the SAS simulation. As the cheapest of the three simulations carried out in this study, the SAS-WF shows good trends in predicting the modal frequencies and shapes. The reasons for its moderate over-prediction behaviour have been investigated and outlined in this study. To overcome these numerical issues in SAS-WF simulations, future work will address the breakdown phenomena of the shear layer in detail by incorporating a synthetic turbulence forcing term.
Funding Open Access funding enabled and organized by Projekt DEAL. This work has been carried out with the financial support from Airbus Defence and Space (ADS) under the project "Analysis of Unsteady Effects in Fighter Aircraft Aerodynamics", which is greatly acknowledged. The authors would like to thank the German Aerospace Center (DLR) for providing the TAU code and Ennova Technologies, Inc. for the meshing software. The authors would also like to acknowledge the Gauss Centre for Supercomputing for making the required computing hours available to this study.
Data availability Datasets generated during the study can be made available by the corresponding author upon request.
Conflict of interest
The authors have no competing interests to declare that are relevant to the content of this article.
Three-Body Model Analysis of Subbarrier alpha Transfer Reaction
The subbarrier alpha transfer reaction 13C(6Li, d)17O(6.356 MeV, 1/2+) at 3.6 MeV is analyzed with an α + d + 13C three-body model, and the asymptotic normalization coefficient (ANC) for α + 13C → 17O(6.356 MeV, 1/2+), which essentially determines the reaction rate of 13C(α, n)16O, is extracted. Breakup effects of 6Li in the initial channel and those of 17O in the final channel are investigated with the continuum-discretized coupled-channels method (CDCC). The former is found to have a large back-coupling to the elastic channel, while the latter turns out to be very small. The transfer cross section calculated with the Born approximation to the transition operator, including breakup states of 6Li, gives (C^{17O*}_{α13C})² = 1.03 ± 0.29 fm⁻¹. This result is consistent with the value obtained by the previous DWBA calculation.
§1. Introduction
Transfer reactions below Coulomb barrier energies are known to be a powerful technique to determine asymptotic properties of the overlap between the initial and final state wave functions, essentially free from uncertainties associated with optical potentials and the structural complexity of wave functions in the nuclear interior region. 1) Recently, subbarrier α transfer reactions have been used to indirectly measure cross sections of α-induced reactions of astrophysical interest. 2), 3) In Ref. 2), Johnson and collaborators determined the reaction rate of 13C(α, n)16O by measuring the 13C(6Li, d)17O(6.356 MeV, 1/2+) reaction; for simplicity, we henceforth denote the final state of 17O as 17O*. The 13C(α, n)16O reaction is considered to be important as a neutron source for the slow neutron capture process (s-process) taking place in asymptotic giant branch (AGB) stars. 4) In the cross section formula, Eq. (1) of Ref.
2), of the 13C(α, n)16O reaction based on the R-matrix approach, 5) the asymptotic normalization coefficient (ANC) for α + 13C → 17O*, C^{17O*}_{α13C}, is the only missing quantity. Throughout this study we consider the ANC with Coulomb modification, 2) i.e., a value divided by the Gamma function Γ(2 + η), where η is the Sommerfeld parameter for the α-13C system. In Ref. 2), the α transfer reaction 13C(6Li, d)17O* was analyzed with DWBA, disregarding the breakup effects of 6Li and 17O*, and (C^{17O*}_{α13C})² = 0.89 ± 0.23 fm⁻¹ was extracted. The ground state energy of 6Li is, however, just 1.47 MeV below the α + d threshold. Furthermore, the binding energy of 17O*, i.e., 17O(6.356 MeV, 1/2+), with respect to the α + 13C threshold is only 3 keV. Therefore, to extract a reliable value of C^{17O*}_{α13C}, one should investigate how important 6Li and 17O* breakup are in the α transfer reaction.
The purpose of the present Letter is to analyze the 13C(6Li, d)17O* reaction at 3.6 MeV (the incident energy of 6Li) with the three-body (α + d + 13C) model and to determine C^{17O*}_{α13C} accurately. The roles of 6Li breakup in the initial channel and 17O breakup in the final channel are investigated with the continuum-discretized coupled-channels method (CDCC). 6), 7) As shown in §3.2, the former is found to be important as a large back-coupling to the elastic channel, while the latter is confirmed to be much less important. CDCC was proposed and developed by the Kyushu group and has been highly successful in quantitatively reproducing observables of reaction processes in which virtual or real breakup effects of the projectile are significant. 8), 9) CDCC treats continuum states of the projectile nonperturbatively, with reasonable truncation and discretization, and thus can describe the breakup effects with very high accuracy. Note that the theoretical foundation of CDCC was established in Refs. 10)-12). The transition from the 6Li + 13C channel to the d + 17O* channel is described with the Born approximation; the breakup states of 6Li are explicitly taken into account in the calculation of the transfer process. The ANC thus extracted is compared with the result of the previous DWBA analysis.
This paper is constructed as follows. In §2 we formulate the three-body wave functions in the initial and final channels and the transfer cross section of the 13C(6Li, d)17O* reaction; numerical results are presented and discussed in §3, and §4 gives a summary. In the present calculation, we work with the α + d + 13C model shown in Fig. 1. The transition matrix (T matrix) for the transfer reaction 13C(6Li, d)17O* is given by T_fi = S^{1/2}_exp ⟨Ψ^(−)_f | V_tr | Ψ^(+)_i⟩, where Ψ^(+)_i and Ψ^(−)_f are the three-body wave functions of the system in the initial and final channels, respectively, and V_tr is the transition operator of the transfer process. We put a normalization constant S^{1/2}_exp in T_fi, the physics meaning of which is discussed below.
The three-body wave function Ψ^(+)_i in the initial state satisfies the Schrödinger equation (H_i − E) Ψ^(+)_i = 0, where E is the total energy of the system in the center-of-mass (c.m.) frame and r (R) is the coordinate of α (6Li) relative to d (13C). The Hamiltonian H_i is given by H_i = T_R + h_i + V^(N)_dC + V^(N)_αC + V_Coul, where T_R is the kinetic energy operator associated with R and h_i is the internal Hamiltonian of 6Li. We use V^(N)_XY for the nuclear interaction between X and Y; each of X and Y represents a particle, i.e., d, α, or C (13C). Similarly, R_XY denotes the relative coordinate between X and Y. V_Coul is the Coulomb interaction between 6Li and 13C. Note that we neglect the Coulomb breakup of 6Li, which can be justified by the fact that the effective charge of the α + d system for electric dipole transitions is almost zero. Furthermore, as shown in §3.2, it is numerically confirmed that Coulomb breakup processes due to electric quadrupole and higher multipoles are negligibly small.
As the partial wave Ψ_{i;JM} of Ψ^(+)_i, we adopt a CDCC expansion in terms of the internal states of 6Li, in which J and M are the total angular momentum and its z-component, respectively, and ℓ (L) is the orbital angular momentum between α and d (between 6Li and 13C). We disregard the intrinsic spin of each particle for simplicity. The radial part of the 6Li wave function is denoted by φ̂_{j,ℓ}(r)/r, where j is the energy index; j = 0 corresponds to the ground state and j ≠ 0 to discretized continuum states obtained by the momentum-bin discretization. 6) The internal wave function Φ̂_{j,ℓ,m} is constructed from φ̂_{j,ℓ}(r)/r and the corresponding spherical harmonics, and the radial expansion coefficients χ̂_c(R) satisfy the CDCC coupled equations, where µ is the reduced mass of the 6Li-13C system and E_{j,ℓ} = E − ε_{j,ℓ}. For simple notation, we denote the channel indices {j, ℓ, L} as c. The CDCC equation is solved numerically up to R = R_max, and χ̂_c is connected to the usual asymptotic boundary condition, Eq. (2.9), which is written in terms of the incoming and outgoing Coulomb wave functions with the Sommerfeld parameter η_{j,ℓ} and the Whittaker function W_{−η_{j,ℓ},L+1/2}. The subscript 0 on ℓ and c represents the incident channel. With the S-matrix elements Ŝ^J_{cc0} in Eq. (2.9), one may obtain any physics quantity with the standard procedure, except that one needs to smooth the discrete results when breakup observables are investigated.
Since the CDCC wave function Ψ^CDCC_i can be regarded, with very high accuracy, as an exact solution to Eq. (2.2) in the evaluation of T-matrix elements that contain a short-range interaction, one may define V_tr by Eq. (2.10) with any choice of the auxiliary potential V_aux. In Eq. (2.10), V_αd, V_dC, and V_αC contain both nuclear and Coulomb parts. Note that V_aux determines the final state wave function Ψ_f. In the present study, we adopt the choice given in Eq. (2.11); the superscript (C) of V_αd in Eq. (2.11) represents the Coulomb part of the interaction. We then have the final-channel Hamiltonian, in which T_{R_dO} is the kinetic energy operator associated with R_dO and h_f is the internal Hamiltonian of 17O. Note that we here consider a Schrödinger equation for Ψ_f in just the same way as in the initial channel, except that i) we should include the Coulomb breakup of 17O, ii) we have no nuclear part of V_αd, and iii) the bound state of 17O at 6.356 MeV is a p-wave that generates both monopole and quadrupole interactions between d and 17O; the latter also causes a change in the d-17O angular momentum, which is called reorientation. Note that V_dC in Eq. (2.14) contains both nuclear and Coulomb parts, as mentioned above.
It is shown in §3.2 that the 17O breakup channels have very small (∼5%) effects on the d-17O elastic scattering. Furthermore, the quadrupole interaction is found to be negligibly small (see Fig. 2). Then we can approximate the final-channel wave function as Ψ^(−)_f ≈ ϕ_0 ξ^(−)_0(R_dO), Eq. (2.15), where ϕ_0 is the relative wave function between α and 13C in 17O*, and ξ^(−)_0(R_dO) is the distorted wave function obtained by the single-channel calculation, in which both the breakup channels and the aforementioned quadrupole interaction are switched off.
In the calculation of T_fi, we make the zero-range approximation; the strength D_{j,ℓ} of the zero-range α-d interaction is given by the volume integral of V^(N)_αd weighted by the corresponding α-d radial wave function. The finite-range correction to the zero-range calculation of T_fi is made with the standard prescription. 1) One may examine the validity of this approximation by the magnitude of the correction.

§3. Results and discussion
Numerical input
The α-d wave function in Ψ^CDCC_i is constructed following Ref. 13), except that we do not use the orthogonality condition model (OCM) but exclude Pauli-forbidden states by hand. We include the ℓ = 0, 1, and 2 states. For the nuclear part of the α-d interaction for ℓ = 0, the form of Ref. 13) is adopted. We neglect the intrinsic spin S of d, and we have only one resonance state at 3.474 MeV (measured from the ground state energy) with a width of 0.45 MeV. It is found, however, that if we include S and a spin-orbit interaction that reproduces the 1+, 2+, and 3+ resonance states, the resulting value of the ANC shown below changes by only 0.2%. Thus, the separation of the ℓ = 2 resonance state into the 1+, 2+, and 3+ resonance states by the spin-orbit interaction plays no role in the present subbarrier α transfer reaction. For ℓ = 1, we adopt the potential of Ref. 14), which is used also for ℓ > 2 when we check the convergence of the CDCC calculation with respect to ℓ_max (see §3.4). The Coulomb interaction between α and d is evaluated by assuming a uniformly charged sphere with a charge radius R_C of 3.0 fm; see Eq. (3.5) below. We take the maximum value k_max (r_max) of the relative wave number k (coordinate r) between α and d to be 2.0 fm⁻¹ (60 fm); the maximum relative energy ε_max is 62.4 MeV. We use j_max = 100 for each of the ℓ = 0, 1, and 2 states, and the width ∆k of the momentum bins is thus 0.02 fm⁻¹. The number of channels, N_ch, in the CDCC equation (2.7) is 601. When we examine the effects of Coulomb breakup in Fig. 2, we take r_max = 300 fm.
As for the interactions of the α-13C and d-13C systems, we use the parameters shown in Table I. The standard Woods-Saxon form is adopted for the nuclear part, together with the Coulomb potential of a uniformly charged sphere, Eq. (3.5), where Z₁Z₂ is the product of the atomic numbers of the interacting particles. These parameters are used in the calculation of both the initial and final state wave functions. The parameter set for the d-13C system is determined to reproduce the elastic scattering cross section obtained with the parameters in Ref.
2) that contains a spin-orbit part. We determine V₀ for the α-13C system to reproduce ε₀, assuming that the orbital angular momentum is 1 and the number of forbidden states is 2. Note that we use Eq. (3.5) with R_C = 2.94 fm for the 6Li-13C Coulomb interaction unless we include the Coulomb breakup of 6Li. In the calculation of Ψ^CDCC_{i;JM}, we use R_max = 15 fm and J_max = 7. Note that we explicitly include closed channels, in which E_{j,ℓ} < 0, in the CDCC calculations. In the evaluation of T_fi, we set the maximum value of R_dC to be 30 fm; we use the asymptotic form of χ̂^J_c, Eq. (2.9), to obtain Ψ^CDCC_{i;JM} for R > 15 fm. When we include Coulomb breakup, we set R_max to 200 fm.
For the final channel, the relative energy between α and 13C in the 1/2+ state at 6.356 MeV is ε₀ = −3 keV with respect to the α-13C threshold. In the calculation of Ψ^CDCC(−)_f, we include the p-wave bound state and the s-, p-, and d-continua of the α + 13C system up to a relative momentum of 1.2 fm⁻¹ (relative energy of 39.6 MeV), with momentum bins of a common width of 0.06 fm⁻¹. The maximum values of R_αC and R_dO are both set to 100 fm, and we put J_max = 10. We include all closed channels in the CDCC calculations, as in the initial channel. We estimate the error due to this approximation to be 5%, as mentioned above. It should be noted that the breakup cross sections in the initial and final channels are both found to be smaller than the nuclear part of the elastic cross section by about four orders of magnitude.
Breakup effects of 6Li and 17O
The very small breakup effects in the final channel arise because the incoming energy of d is suitably below the Coulomb barrier, and the interaction that causes breakup in Eq. (2.14) is significantly weaker than that in Eq. (2.3); note that V^(N)_αd(r) is defined as V_tr and does not appear in Eq. (2.14).

Transfer cross section and ANC

We show in Fig. 3 the cross section of the transfer reaction 13C(6Li, d)17O* at 3.6 MeV as a function of the outgoing angle θ of d in the c.m. frame. The solid line represents the result with S_exp = 1, and the dashed line shows the result of the χ² fit to the experimental data. 2) The resulting value of S_exp is 0.357. Note that S_exp cannot be regarded as a spectroscopic factor. Indeed, S_exp has a strong dependence on the model wave function of the α-13C system; typically, it varies by a factor of 2 when the geometric parameters of V^(N)_αC are changed by 30%. This clearly shows that it is not feasible to determine S_exp from the present analysis of the experimental data. On the other hand, the ANC C^{17O*}_{α13C}, given by C^{17O*}_{α13C} = S^{1/2}_exp C^sp_{α13C} with the single-particle ANC C^sp_{α13C} of the α-13C wave function, is robust against changes in the potential parameters. This shows that the reaction process considered is peripheral with respect to R_αC, i.e., only the tail of the α-13C wave function contributes to the transition amplitude. Note that C^sp_{α13C} is defined through the asymptotic form of φ̃_0 outside the range R_N of V^(N)_αC, where φ̃_0 is the radial part of ϕ_0, η̃ is the Sommerfeld parameter of the α-13C system, κ_0 = √(−2µ_{α13C}ε_0)/ℏ with µ_{α13C} the reduced mass of α and 13C, and Γ is the Gamma function. The value of (C^{17O*}_{α13C})² extracted by the present calculation is 1.03 fm⁻¹. We then evaluate the uncertainty of (C^{17O*}_{α13C})² associated with the α-13C and d-13C potential parameters shown in Table I by changing each value by 30%. Note that V₀ for α-13C has the constraint that it must reproduce ε₀. The uncertainty is found to be 22%. We also take into account the uncertainty due to the use of Eq. (2.15) (5%) and that coming from the zero-range approximation to V^(N)_αd.

In the left panel of Fig. 4, we show the convergence of the cross section with respect to increasing k_max, where the ℓ = 0, 1, and 2 breakup continua are taken with ∆k = 0.02 fm⁻¹. One can see that the convergence is very slow and is obtained at k_max = 2.0 fm⁻¹. In the usual CDCC calculation, one takes only the open channels, i.e., channels with E_{j,ℓ} > 0. The result thus obtained (the thin solid line) is, however, sizably different from the converged one (the thick dotted line), at backward angles in particular. Thus, the inclusion of the closed breakup channels is important.
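For reference, the peripheral definition of the single-particle ANC and its relation to the extracted ANC can be written compactly. The following LaTeX sketch uses standard notation; the explicit p-wave Whittaker index 3/2 and the symbol b for the unmodified asymptotic coefficient are assumptions of this illustration rather than a quotation of the paper's own equations.

\[
\tilde{\varphi}_0(R) \;\longrightarrow\; b\, W_{-\tilde{\eta},\,3/2}(2\kappa_0 R) \quad (R > R_N), \qquad
C^{\mathrm{sp}}_{\alpha\,{}^{13}\mathrm{C}} \;=\; \frac{b}{\Gamma(2+\tilde{\eta})}, \qquad
C^{{}^{17}\mathrm{O}^*}_{\alpha\,{}^{13}\mathrm{C}} \;=\; S_{\mathrm{exp}}^{1/2}\, C^{\mathrm{sp}}_{\alpha\,{}^{13}\mathrm{C}},
\]

where b is the conventional (Coulomb-unmodified) single-particle ANC and the division by \(\Gamma(2+\tilde{\eta})\) reflects the Coulomb modification described in §1.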
In the right panel of Fig. 4, the dashed line is the result including the ℓ = 0, 1, and 2 breakup continua with ∆k = 0.01 fm⁻¹ and k_max = 2.0 fm⁻¹ (N_ch = 1201), and the dotted line is the result including the ℓ = 0, 1, 2, 3, 4, and 5 breakup continua with ∆k = 0.02 fm⁻¹ and k_max = 2.0 fm⁻¹ (N_ch = 2101). The dashed and dotted lines both agree well with the solid line, which is the same as in Fig. 3. In fact, the resulting values of (C^{17O*}_{α13C})² differ from each other by less than 1%. Thus, the model space used for the solid line of Fig. 3 is sufficient for the present analysis. The transition amplitude can be decomposed into an elastic part T^(el)_fi and a breakup part T^(br)_fi: T^(el)_fi describes the transfer reaction from the elastic channel, i.e., the elastic transfer process, which includes the back-coupling effect of the breakup channels on the elastic channel, while T^(br)_fi describes the transfer reaction from the 6Li breakup channels, i.e., the breakup transfer process. Thus, there are two kinds of breakup effects on the transfer reaction; one is the back-coupling effect in the elastic transfer process and the other is the presence of the breakup transfer process. The dashed line is the result of the elastic transfer transition only. This result agrees with the solid line, indicating that the breakup transfer transition is much smaller than the elastic transfer one. This is consistent with the small breakup cross section of 6Li by 13C mentioned in §3.2. Hence, only the back-coupling effect is important in the present subbarrier transfer reaction. In DWBA, the back-coupling effect is expected to be included by using the 6Li optical potential, which by definition describes the elastic scattering, as the distorting potential. The ANC, (C^{17O*}_{α13C})², extracted in the preceding DWBA calculation 2) is 0.89 ± 0.23 fm⁻¹. This value agrees well with the present result, Eq. (3.8), within the uncertainties.

§4. Summary

In summary, we analyze the 13C(6Li, d)17O(6.356 MeV, 1/2+) reaction at 3.6 MeV with the three-body (α + d + 13C) model. The breakup effects of 6Li and 17O are investigated by CDCC. Those of 6Li are found to be important as a large back-coupling to the elastic channel, while those of 17O turn out to be negligible, with an error of 5%. The transfer cross section is calculated with the Born approximation to the transition interaction, including only the breakup of 6Li. The ANC extracted by the three-body reaction model is (C^{17O*}_{α13C})² = 1.03 ± 0.25 (theor) ± 0.15 (expt). The back-coupling effect of 6Li breakup on the transfer reaction is large, while the breakup transfer transition is negligible compared with the elastic transfer transition. The preceding DWBA calculation implicitly treated the back-coupling effect by using a 6Li optical potential that described the elastic scattering as the distorting potential. The value of (C^{17O*}_{α13C})² obtained in this work is thus consistent with the DWBA result.

One of the authors (K. O.) wishes to thank G. V. Rogachev and E. D. Johnson for valuable discussions and for providing detailed information on their DWBA calculation. The authors are grateful to Y. Iseri for providing a computer code, rana, for the calculation of transfer processes. The computation was carried out using the computer facilities at the Research Institute for Information Technology, Kyushu University.
The relative roles of inheritance and long-term passive margin lithospheric evolution on the modern structure and tectonic activity in the southeastern United States
We perform inversions for the shear-wave velocity structure of the southeastern United States (SEUS) using Rayleigh-wave phase and amplitude data from the broadband stations of the South Eastern Suture of the Appalachian Margin Experiment (SESAME) and the EarthScope Transportable Array (TA). Our tomographic images of shear-wave velocities in the upper mantle beneath the SEUS provide new constraints on the evolution of mantle lithosphere, both from the inheritance of structures from repeated Wilson cycles and from processes that have occurred in a passive margin setting. Our images also allow us to correlate these structures with evidence of Eocene to recent tectonism observed at the surface. We find evidence for both inherited structures and more recently evolved structures, both of which bear some correlation to observations of ongoing tectonism. Our results suggest that lithospheric mantle continues to evolve while in a passive margin setting and that even relatively "stable" continental mantle lithosphere is subject to episodes of delamination, foundering, and erosion due to processes that are still not well understood. Our results provide structural constraints on the types of processes that may be ongoing and on possible explanations for the numerous observations of comparatively recent tectonic activity occurring along this passive margin setting.
INTRODUCTION
The east coast of North America is an archetypical passive margin and, therefore, an obvious locale for the study of continental lithosphere that is neither actively evolving at a plate boundary nor part of a cratonic continental core. The southeastern United States (Fig. 1) in particular is an ideal location to study the temporal evolution of continental lithosphere. It is the product of repeated Wilson cycles, composed of multiple accreted terranes of different ages, subsequently exposed to flood basalt volcanism and failed rifting prior to the opening of the Atlantic Ocean and Gulf of Mexico. Here, we broadly define the southeastern United States as extending south from the latitude of the Mason-Dixon line (northern Maryland, ~40°N) and as far west as western Tennessee and Kentucky (~89°W).
The extent to which the modern lithospheric mantle beneath the southeastern United States (SEUS) is dominated by structures inherited from past episodes of convergence and rifting has long been the subject of research (e.g., Vauchez and Barruol, 1996;Cook and Vasudevan, 2006;Thomas, 2006). Surface-wave tomography is ideally suited for imaging lithospheric structure due to its sensitivity to velocity changes in the crust and uppermost mantle, especially as a function of depth. This allows us to look for evidence of possible inherited structures, such as the edge of the cratonic keel, dipping shear zones, or petrological discontinuities in the mantle preserved from past accretionary events, relict restite produced by the formation of flood basalts, or thinned lithosphere from past rifting events. The location and geometry of these structures can shed light on the details of the tectonic history of the region. Another key question is how these types of inherited structures relate to evidence for postrift instability and ongoing tectonism. Here, we present the results of our inversions of teleseismic Rayleigh-wave phase and amplitude data to image the shear-wave velocity structure of the uppermost mantle beneath the SEUS. We find evidence of both inherited structures and structures that do not obviously correspond to known accretionary, collisional, or rift geometries. These structures suggest that lithospheric evolution does not end with the transition to a passive margin and that all but the oldest continental interiors may not be as stable as previously assumed. and Antarctica during the Grenville Orogeny (1.35-1.1 Ga) (Thomas and Astini, 1996;Loewy et al., 2003;Tohver et al., 2004;Thomas, 2006;Whitmeyer and Karlstrom, 2007;Loewy et al., 2011). Deformation from this collision extended as far westward as the Grenville Front (Fig. 1). Subsequent rifting (760-530 Ma; Thomas, 2006;Whitmeyer and Karlstrom, 2007) resulted in the opening of the Iapetus Ocean. It also produced a number of failed rifts, including the Rome Trough and Reelfoot Rift ( Fig. 1) (Whitmeyer and Karlstrom, 2007).
Subsequent to Iapetan rifting but prior to the formation of Pangea, the southeastern Laurentian margin (current orientation) was subjected to a sequence of accretionary events resulting in the addition of the Inner Piedmont and Carolina terranes (also known as Carolinia) (Hibbard, 2000;Miller et al., 2006;Anderson and Moecher, 2009;Huebner et al., 2017). The exact number of accreted terranes and associated orogenic events is still debated (Hibbard et al., 2002;Hatcher et al., 2007;Hatcher, 2010;Hibbard et al., 2010), as are the locations of some of the terrane boundaries and vergence directions of associated subduction zones Anderson and Moecher, 2009). There is consensus on the location of the exposed suture between the Carolina and Inner Piedmont terranes along the Central Piedmont Suture (CPS) (Fig. 1). However, the subsequent Alleghanian Orogeny is believed to have formed an extensive thrust sheet, 5-10 km thick, across much of the SEUS that translated the uppermost crust tens to hundreds of kilometers inland (e.g., Hatcher, 1972;Cook and Vasudevan, 2006;Hopper et al., 2017), making it difficult to connect terrane boundaries at the surface, to basement terrane boundaries at depth.
The collision of Gondwana and Laurentia and the formation of Pangea during the Alleghanian Orogeny at ca. 350-300 Ma produced the massive Appalachian orogen. The location of the suture is believed to lie offshore or beneath the easternmost continental margin along most of the eastern seaboard (McBride and Nelson, 1988;Keller and Hatcher, 1999;Cook and Vasudevan, 2006), except in the south, where the accreted Gondwanan-affinity Suwannee and related terranes were left behind after rifting. The location of the Suwannee suture has often been placed along the Brunswick Magnetic Anomaly (BMA) (Fig. 1), which crosses southern Georgia and Alabama (e.g., Chowns and Williams, 1983;Daniels et al., 1983;Nelson et al., 1985aNelson et al., , 1985bMcBride and Nelson, 1988;Mueller et al., 1994;Heatherington and Mueller, 1999;Heather ington and Mueller, 2003). However, recent work has revisited another hypothesis (e.g., Higgins and Zietz, 1983): that the Alleghanian margin between Laurentia and Gondwana lies farther north (e.g., Mueller et al., 2014;Boote and Knapp, 2016;Hopper et al., 2017) and that the BMA represents an intra-Gondwanan suture (Boote and Knapp, 2016). The degree and timing of convergence versus strike-slip motion on the transpressional Suwannee Suture Zone (SSZ) is also still debated (e.g., Mueller et al., 2014;Boote and Knapp, 2016;Hopper et al., 2017).
The breakup of Pangea began in the Middle to Late Triassic with the development of extensive rift basins across Georgia (the South Georgia Rift Basin) ( Fig. 1) (e.g., Chowns and Williams, 1983;McBride, 1991) and a series of smaller basins along the east coast (also known as Newark Super Group rifts) (e.g., Reinemund, 1955;Olsen et al., 1991;Schlische, 1993). This failed rifting episode culminated with the extensive flood basalt volcanism of the Central Atlantic Magmatic Province (CAMP) at ca. 190 Ma (e.g., Marzoli et al., 1999;McHone, 2000;Whalen et al., 2015). Rifting resumed in the earliest Jurassic, and the lithosphere of the SEUS transitioned from rift to drift by the mid-Jurassic (e.g., Withjack et al., 1998). Despite this transition to a passive margin setting, the eastern United States has continued to exhibit tectonic activity including deformation, uplift, seismicity, and volcanism.
Blue Ridge Topography
In the southeastern United States, the Blue Ridge Mountains are what remain of the Andean-scale orogen produced during the collision between Gondwana and Laurentia in the Pennsylvanian-Permian Alleghanian Orogeny. The persistence of both the total elevation of the Blue Ridge (which includes Mount Mitchell [2037 m], the highest point in the eastern United States) and the pronounced topography, in particular across the Blue Ridge Escarpment in North Carolina and southern Virginia, is surprising given models that predict the elimination of topographic relief within ~100 m.y. of the end of convergence (e.g., Ahnert, 1970;Tucker and Slingerland, 1994). A number of competing theories exist on the persistence of elevation and steep topography. These range from theories that require no postorogenic uplift and posit persistent topography due to dynamic equilibrium and lithologic variability (Hack, 1980;Baldwin et al., 2003;Matmon et al., 2003) to those that do require recent uplift (e.g., Hack, 1982), which may be due to various epeirogenic processes (e.g., Pazzaglia and Brandon, 1996;Fischer, 2002;Moucha et al., 2008;Flament et al., 2013;Linari et al., 2016) or climatic variability (e.g., Molnar, 2004).
One intriguing line of evidence comes from recent studies that have found migrating river knickpoints and relict topography consistent with hundreds of meters of relative base-level fall in the past 3.5-15 m.y. (Gallen et al., 2013;Miller et al., 2013;Prince and Spotila, 2013). The cause of these knickpoints has been variably attributed to a renewed uplift of the orogen (Gallen et al., 2013;Miller et al., 2013) or stream capture (e.g., Prince et al., 2010) resulting in the retreat of the Appalachian escarpment over time (e.g., Tucker and Slingerland, 1994;Spotila et al., 2004). Such unsteadiness in the postrifting landscape may be episodic, as indicated by thermochronology data that provide evidence for similar rejuvenation of topography in the Late Cretaceous (McKeon et al., 2014).

Cape Fear Arch

Although the Cape Fear Arch has no topographic expression, it was first identified on the basis of river valley morphology in the late nineteenth century (e.g., Kerr, 1875). The arch is typically described either as an upwarping of basement topography as determined by active source seismic profiles and borehole data (e.g., Hersey et al., 1959;Bonini and Woollard, 1960;Soller, 1988), as patterns of exhumed late Cenozoic sedimentary layers that indicate pronounced erosion along the axis of the proposed ridge (e.g., Riggs and Belknap, 1988;Soller, 1988), and/or as the displacement of river channels in both North and South Carolina as the result of Pleistocene to modern uplift (e.g., Soller, 1988;Baldwin et al., 2006;Bartholomew and Rich, 2012). Uplift rates are weakly constrained but are conservatively estimated to have ranged between ~0.006 mm/yr from 2.75 to 1.75 Ma, increasing to ~0.04 mm/yr over the past 100,000 years (Cronin, 1981;Soller, 1988;Gardner, 1989 and references therein).
The axis of the Cape Fear Arch lies along the Cape Fear River, roughly parallel to the South Carolina-North Carolina border. The geometry of this uplift does not correlate with known terrane boundaries, gravity anomalies, or magnetic anomalies. There are few theories to explain the uplift of the Cape Fear Arch. Morgan (1983) attributed the arch to the passing of the Bermuda Hot Spot during the Paleocene, though this would not explain the ongoing deformation observed (Vogt, 1991). Others have postulated the existence of a Cape Fear fault (e.g., Zullo and Harris, 1979;Bartholomew and Rich, 2012), though direct evidence for discrete faulting is largely absent. The Cape Fear Arch also coincides with uplifted regions of the Orangeburg Scarp, a mid-Pliocene shoreline that extends from central North Carolina to northern Florida. Recent work has indicated that some of this uplift might be explained through dynamic topography associated with mantle upwelling (e.g., Rowley et al., 2013;Liu, 2015;Rovere et al., 2015), although this alone cannot explain the relatively short wavelength variations in uplift observed in the Carolinas.
Eocene Volcanism
The formation of the east coast passive margin and the transition from rift to drift was complete by 175 Ma (Withjack et al., 1998). However, two distinct pulses of volcanism have been identified in western Virginia and eastern West Virginia that postdate the removal of the SEUS from proximity to a plate boundary. The first pulse comprises Late Jurassic (ca 145 Ma) alkaline volcanics that have variably been described as adakites (Meyer and van Wijk, 2015) or phonolites (Mazza et al., 2017). The second pulse occurred 125 m.y. after the completion of rifting. These Eocene basanites ( Fig. 1) have modeled melt equilibration conditions of 2.32 ± 0.31 GPa (~77 ± 9 km depth) and 1412 ± 25 °C (Southworth et al., 1993;Mazza et al., 2014Mazza et al., , 2017 and represent the youngest known magmatism in the eastern United States. Theories on the cause of this volcanism include the presence of a basement fracture zone along the 38th parallel (e.g., Fullagar and Bottino, 1969;Dennison and Johnson, 1971), the passing of a hotspot track (Chu et al., 2013), and the delamination of eclogitized lower crust or mantle lithosphere (Mazza et al., 2014(Mazza et al., , 2017.
Significant Earthquakes and Variable Seismicity Patterns
Despite its passive margin setting, the SEUS has been the site of several moderate to large earthquakes over the past two centuries. The two most notable within our study area were the M = 7 1886 Charleston, South Carolina earthquake (e.g., Tarr et al., 1981;Talwani, 1982;Cramer and Boyd, 2014;Chapman et al., 2016) and the Mw = 5.8 2011, Virginia event (e.g., Wolin et al., 2012;McNamara et al., 2014) (Fig. 1). These large earthquakes are associated with localized regions of increased seismic activity. Seismicity in the immediate vicinity of the Charleston, South Carolina, event may represent a lingering aftershock sequence, given the locations and focal mechanisms of these events (Chapman et al., 2016). The Virginia earthquake occurred in an area that had previously been identified as a region of moderately increased seismicity known as the Central Virginia Seismic Zone (CVSZ) (e.g., Bollinger, 1973;Çoruh et al., 1988;Kim and Chapman, 2005). In addition to the seismicity clusters associated with these large events, other distinct regions of increased seismicity exist in the SEUS. The Eastern Tennessee Seismic Zone (ETSZ) comprises a concentrated cluster of seismic activity located parallel to the strike of the Appalachian Mountains in eastern Tennessee and westernmost North Carolina (e.g., Powell et al., 1994;Chapman et al., 1997;Powell and Thomas, 2016). Other more subtle seismic patterns are also visible along the east coast passive margin. In particular, there is a discernable decrease in the amount of seismic activity in the Piedmont and coastal plains of North Carolina and southern Virginia relative to areas to the north and south (Fig. 1). Part of this difference is due to increased seismicity in the region of the 1886 Charleston earthquake, but ongoing seismicity is also present over much of central and western South Carolina, parts of northern Georgia, and southwestern North Carolina. This region of increased seismicity is commonly referred to as the South Carolina Seismic Zone (SCSZ) (e.g., Bollinger, 1973;Tarr et al., 1981;Domoracki et al., 1998;Li et al., 2007) and has been incorporated into the U.S. Geological Survey (USGS) hazard maps (Petersen et al., 2008;Petersen et al., 2012). Finally, there is a notable "earthquake shadow" or gap in observed seismicity relative to surrounding areas (e.g., Bollinger and Gilbert, 1974) near the Eocene volcanics in western Virginia and in the central part of eastern West Virginia, just west of the Central Virginia Seismic Zone. A better understanding of lithospheric-scale structures across the SEUS will help to constrain the potential contributing factors associated with ongoing seismicity in passive margin settings.
DATA
We use data from the South Eastern Suture of the Appalachian Margin Experiment (SESAME) deployed across Georgia and into North Carolina, Tennessee, and Florida between July 2010 and May 2014 (Fig. 2). SESAME comprised three transects-two north-south-trending transects through the eastern and western portions of Georgia and Florida and one NW-SE transect that extended from Tennessee to near Augusta, Georgia. The first seven stations for SESAME were installed in 2010. The western transect and portions of the diagonal transect were installed in May 2011, and the eastern transect was installed in May 2012. In addition to SESAME, we use data from the EarthScope Transportable Array (TA), which moved from west to east across the study area during the time of the SESAME deployment. We include all TA stations between 89°W and 75°W and from 28°N to 40°N. We also include stations from the temporary deployment Pre-Hydrofracking Regional Assessment of Central Carolina Seismicity (PHRACCS) (Wagner, 2012).
We analyze all events of magnitude greater than 6.2 that occurred at depths of less than 50 km at a distance of at least 25° from the center of our study area. The time period of our study extends from May 2011 (after the western transect of SESAME was installed) until June 2015. The last year (after the demobilization of SESAME) is included in order to have sufficient data at the northeasternmost portion of our study area, where the TA was not installed until 2013. In total, we include data from 94 events in our inversions (Fig. 2 inset). Because of the temporary nature of these stations, different stations recorded various subsets of these events. In order to be included in a given inversion, a station must have recorded at least ten events. Most stations recorded >35 events over a wide range of back azimuths (Fig. 2). During the course of this study, we determined that the dense station spacing of the SESAME deployment, particularly over the three-year duration of the western transect, appeared to create suspicious anomalies in phase-velocity inversions (Fig. S1 in the Supplemental Material 1) at periods >65 seconds that closely paralleled the station locations. We therefore err on the side of caution and remove SESAME data from phase-velocity inversions at periods >65 seconds.
Our data processing follows that in Forsyth and Li (2005) and Wagner et al. (2010). Each event is visually inspected at each station over a range of periods from 33 to 143 seconds. In order to inspect at a particular period, the data are first normalized to a standard station response file and then bandpass filtered around the central frequency using a 7-10-mHz-wide filter. We eliminate any data with signal to noise ratios of less than five, and we ensure a clear separation of fundamental modes and overtones before including a given event at a given frequency. Data are cut to include only the fundamental mode for each station for each event and a 50 second taper was applied. We then use a Fourier analysis to determine the amplitude and phase at the desired period for that station and event.
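The per-period measurement step described here (normalize, narrow-band filter, window the fundamental mode, and take the Fourier amplitude and phase) can be sketched roughly as below; the filter design, taper handling, and all numerical values are illustrative assumptions rather than the exact processing chain used.

```python
import numpy as np
from scipy import signal

def phase_and_amplitude(trace, fs, period_s, band_mhz=8.0):
    """Bandpass a seismogram around the target period and return the Fourier
    amplitude and phase at that period; trace is assumed to be already
    normalized to a common instrument response and windowed around the
    fundamental-mode Rayleigh wave."""
    f0 = 1.0 / period_s
    half_bw = 0.5 * band_mhz * 1e-3                       # half bandwidth [Hz]
    sos = signal.butter(4, [f0 - half_bw, f0 + half_bw], btype="bandpass",
                        fs=fs, output="sos")
    filtered = signal.sosfiltfilt(sos, trace)
    spec = np.fft.rfft(filtered * signal.windows.tukey(filtered.size, 0.1))
    freqs = np.fft.rfftfreq(filtered.size, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))
    return np.abs(spec[k]), np.angle(spec[k])

fs = 1.0                                                  # assumed 1 sample/s data
t = np.arange(0.0, 3600.0, 1.0 / fs)
trace = np.sin(2 * np.pi * t / 50.0) * np.exp(-((t - 1800.0) / 400.0) ** 2)
amp, phase = phase_and_amplitude(trace, fs, period_s=50.0)
```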
Phase-Velocity Inversions
We follow the finite-frequency, two-plane-wave-tomography (TPWT) approach of Forsyth and Li (2005) and Yang and Forsyth (2006). This approach addresses scattering of the incoming plane wave due to structures outside of the study area by approximating the observed waveforms as the sum of two plane waves with different back azimuths, amplitudes, and phases. Of particular importance in this study area was the need for an accurate starting velocity model. Typically, starting phase-velocity maps at these periods include a priori information on the crustal thickness in order to have a starting model closer to the final model. In the southeastern United States, particularly along the coastal plain in Georgia and northern Florida, the presence of thick sedimentary basins such as the South Georgia Rift Basin (e.g., Nelson et al., 1985a) produces complications in the determination of crustal thickness using automated P-s receiver functions. We use constraints on crustal thickness from P-s and S-p receiver functions (Parker et al., 2013;Parker et al., 2015;Parker et al., 2016;Hopper et al., 2017) and P-s wavefield migration (Hopper et al., 2016) at SESAME and TA stations to determine an accurate map of crustal thickness (Fig. 3A). Offshore crustal thicknesses were obtained from Crust 1.0 (Laske et al., 2013). In addition, it was necessary to take into account the very thick and slow sedimentary basins that significantly affected the starting phase-velocity models, especially at shorter periods along the Gulf of Mexico and Mississippi Embayment. For this, we use the sedimentary thickness map of Crust 1.0 (Laske et al., 2013) (Fig. 3B). In order to determine the best average velocity for the sedimentary layer to calculate our starting phase velocities for the TPWT, we use the shorter-period Rayleigh waves from existing ambient noise tomography results, which are more sensitive to these shallow crustal structures than our longer-period TPWT results. To do that, we calculate the predicted phase velocities for our starting model at the shorter periods determined by ambient noise tomography (10-25 s) using a range of different velocities for the sedimentary layer (1.6-2.4 km/s in 0.05 km/s increments). The starting shear-wave velocity model for a 40-km-thick crust with no sedimentary layer is shown in Figure 4. This model is then adjusted to the correct crustal thickness and to incorporate sedimentary layers as needed. Examples of this adjustment are also shown in Figure 4. The model layer thicknesses were kept consistent throughout the crust and mantle except adjacent to the base of the crust. At the Moho, the layer boundary closest to the Moho depth was moved to the Moho depth to allow for an abrupt discontinuity. The Vp/Vs ratio is held fixed at 1.726 in the crust and at 1.8 in the mantle above 200 km depth, consistent with the Vp/Vs ratios of IASP91 in the crust and uppermost mantle, respectively (Kennett, 1991). While Rayleigh waves are somewhat sensitive to P-wave velocities at very shallow depths, our focus on mantle structures makes this assumption acceptable for this application. Velocities below 200 km are from IASP91 (Kennett, 1991).
Predicted phase velocities for each sedimentary layer velocity at each point in map view were calculated using the method of Saito (1988). We calculate a root mean square (RMS) misfit between these calculated phase velocities and the phase velocities from USArray ambient noise tomography results (USANT13; Ekström, 2013). This process was repeated for each of the aforementioned sedimentary layer velocities. A sediment velocity of 2.1 km/s produced the lowest RMS misfits within the study area and was therefore used in the construction of the starting phase-velocity models, using the same layer parameterization described above.
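The choice of 2.1 km/s amounts to a one-parameter grid search: for each candidate sediment velocity, compare the predicted phase velocities with the ambient-noise values over the study area and keep the candidate with the lowest RMS misfit. A minimal sketch, assuming the forward-calculated predictions and the USANT13 values have already been gathered into arrays (the names, shapes, and synthetic stand-in numbers are illustrative only):

```python
import numpy as np

def best_sediment_velocity(predicted, observed, sed_vels):
    """Select the sediment velocity whose predictions best fit ambient-noise data.

    predicted : (n_vel, n_points, n_periods) phase velocities predicted for each
                candidate sediment velocity, map point, and short period (10-25 s)
    observed  : (n_points, n_periods) ambient-noise phase velocities (e.g., USANT13)
    sed_vels  : (n_vel,) candidate sediment velocities
    """
    resid = predicted - observed[None, :, :]
    rms = np.sqrt(np.nanmean(resid ** 2, axis=(1, 2)))  # one RMS per candidate
    return sed_vels[np.argmin(rms)], rms

# Candidate velocities of 1.6-2.4 km/s in 0.05 km/s increments, as in the text.
sed_vels = np.arange(1.6, 2.4001, 0.05)
rng = np.random.default_rng(0)
predicted = rng.normal(3.4, 0.1, size=(len(sed_vels), 500, 6))  # synthetic stand-ins
observed = rng.normal(3.4, 0.1, size=(500, 6))
v_best, rms = best_sediment_velocity(predicted, observed, sed_vels)
```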
For the TPWT inversion, we use a grid-node spacing of 0.5° and an a priori estimation of standard deviation of 0.25 km/s. We also invert for the effects of azimuthal anisotropy using 1° grid-node spacing. Outermost grid nodes are underdamped in order to absorb effects of velocity perturbations outside of the study area. Phase velocities are approximated by c(ω, θ) = B0(ω) + B1(ω) cos(2θ) + B2(ω) sin(2θ), where θ is the azimuth of propagation; we omit higher-order 4θ terms (e.g., Smith and Dahlen, 1973). Peak-to-peak anisotropy is calculated as 2(B1^2 + B2^2)^(1/2), and the direction of fastest velocity is calculated as 0.5 arctan(B2/B1). Starting anisotropy at all grid nodes in our preferred model is set to zero. The regularization of anisotropy terms is achieved with damping. In our preferred model, this is set to 0.04. The effects of varying regularization and starting model on anisotropy terms are described in the resolution section below.
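Given the 2θ coefficients at a node, the peak-to-peak anisotropy and fast direction follow directly from the expressions above. The sketch below uses arctan2 rather than a plain arctangent so the quadrant is resolved; the function name, variable names, and example values are illustrative assumptions.

```python
import numpy as np

def anisotropy_from_coefficients(b1, b2, mean_velocity=4.0):
    """Peak-to-peak anisotropy and fast azimuth from the 2-theta coefficients.

    b1, b2        : cos(2*theta) and sin(2*theta) coefficients (km/s)
    mean_velocity : isotropic phase velocity B0 at the node (km/s)
    """
    peak_to_peak = 2.0 * np.hypot(b1, b2)                    # 2*(B1^2 + B2^2)^(1/2)
    fast_azimuth_deg = 0.5 * np.degrees(np.arctan2(b2, b1))  # 0.5*arctan(B2/B1)
    percent = 100.0 * peak_to_peak / mean_velocity
    return peak_to_peak, fast_azimuth_deg, percent

# e.g. B1 = 0.02 km/s, B2 = -0.01 km/s at a node with ~4 km/s phase velocity
p2p, fast_deg, pct = anisotropy_from_coefficients(0.02, -0.01)
```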
Shear-Velocity Inversions
In order to obtain a 3D shear-wave velocity model, we use the aforementioned phase velocities to calculate 1D shear-wave velocity profiles at each grid node in the phase-velocity inversion. In order to account for the effects of laterally variable crustal structure, we include the ambient noise results of Herrmann et al. (2016) for periods between 10-25 seconds. These results include the full EarthScope Transportable Array (TA) data set across our study area (Fig. S2 [footnote 1]). The method for the shear inversion follows that of Weeraratne et al. (2003) and Wagner et al. (2012a), using the forward calculation of Saito (1988). The layer parameterization and starting shear-wave velocities for this step are the same as those used to calculate the starting phase velocities in the previous step (Fig. 4). The regularization for the shear inversion is controlled by the standard deviations of each phase-velocity measurement provided by the TPWT inversions. For shorter periods from the ambient noise results, we use the standard deviation of the shortest TPWT period used (33 seconds) and multiply that value by 1.5 to place more weight on the results of the TPWT we performed. We only invert at those grid nodes defined by the TPWT inversions where at all periods the final standard deviation is <25 m/s and where the ambient noise tomography results of Herrmann et al. (2016) are defined by more than 10 ray paths. This eliminates most offshore grid nodes, as well as some of the coastal nodes where there are few crossing rays. Shear-wave velocity depth maps are smoothed between grid nodes (shown as black squares) using the Generic Mapping Tools triangulate function (Wessel and Smith, 1998). The shear-wave layer parameterization and lateral node locations can be seen in cross sections as square boxes as well.

[Figure: Rayleigh-wave sensitivity to shear-wave velocity as a function of depth (km).]
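The weighting and node-selection rules just described reduce to a per-period standard-deviation assignment and a boolean mask over grid nodes. The sketch below assumes the TPWT standard deviations and ambient-noise ray counts are already collected into arrays; the array layout and names are assumptions made for illustration.

```python
import numpy as np

def ambient_noise_sigma_and_mask(tpwt_std, ant_ray_counts):
    """Standard deviations for ambient-noise periods and a usable-node mask.

    tpwt_std       : (n_nodes, n_tpwt_periods) TPWT standard deviations in km/s,
                     with column 0 corresponding to the shortest period (33 s)
    ant_ray_counts : (n_nodes, n_ant_periods) ray-path counts in the ambient noise maps
    """
    n_ant_periods = ant_ray_counts.shape[1]

    # Ambient-noise periods receive 1.5x the 33-s TPWT standard deviation,
    # placing relatively more weight on the TPWT measurements themselves.
    ant_sigma = 1.5 * np.repeat(tpwt_std[:, :1], n_ant_periods, axis=1)

    # Invert only where every TPWT period has a standard deviation <25 m/s and
    # every ambient-noise period is constrained by more than 10 ray paths.
    mask = (tpwt_std < 0.025).all(axis=1) & (ant_ray_counts > 10).all(axis=1)
    return ant_sigma, mask
```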
RESOLUTION

Effect of Regularization and Parameterization on Phase Velocities
As with any inversion, the choice of regularization plays a significant role in the determination of velocity structures. For the phase-velocity inversions, the most relevant parameters are grid-node spacing and damping. The use of broader grid-node spacing limits the ability to image smaller-scale structures but reduces the need for damping, which can suppress the absolute velocities of areas that differ significantly from the starting model. By improving our starting velocity model with the inclusion of a priori information on sedimentary basins, we reduced the need for damping at a given grid-node spacing.
In order to test the effects of changes in grid-node spacing and damping on the phase-velocity results, we have included the results of test inversions using different grid-node spacing (0.33°) and different damping (0.01 and 0.4). These results are shown in Figures S3-S5 (footnote 1). In the case of the reduced grid-node spacing (Fig. S3), at shorter periods, the results are similar to those of our preferred model. However, at longer periods, it is clear that the results are underdamped for this grid-node spacing, and the results are unrealistically pixelated. At our preferred grid-node spacing, even the underdamped model looks comparatively stable, and very similar to our preferred model, at least over the well-resolved portions of our model. The overdamped model shows that the major features discussed in the text are required by the data even when the regularization is strongly suppressing the observed velocity deviations.
Spatial Resolution of Phase Velocities
Resolution within the study area can be seen in checkerboard tests (Fig. 5), which require that a variety of check sizes are tested in order to demonstrate resolvability (Lévěque et al., 1993). We ran checkerboard tests with alternating positive and negative 5% velocity deviation anomalies that were 2, 3, and 4 grid nodes wide for all periods. Longer periods have more difficulty recovering the smaller anomalies, especially at the edges of the model space. Offshore, and in areas outside of the array of stations used, streaking is evident in all tests. However, anomalies are well recovered both in size and amplitude at all but the longest periods at the smallest anomaly sizes within the central portion of our study area.
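The checkerboard inputs described here are simply alternating ±5% velocity perturbations tiled over the grid nodes, with the check width set to 2, 3, or 4 nodes. A minimal sketch of generating such an input model (grid size and names are illustrative):

```python
import numpy as np

def checkerboard(n_lat, n_lon, block_nodes, amplitude=0.05):
    """Alternating +/- fractional velocity perturbations for a resolution test.

    n_lat, n_lon : number of grid nodes in latitude and longitude
    block_nodes  : width of each check in grid nodes (2, 3, or 4 here)
    amplitude    : fractional velocity deviation (0.05 = 5%)
    """
    i, j = np.meshgrid(np.arange(n_lat), np.arange(n_lon), indexing="ij")
    sign = np.where(((i // block_nodes) + (j // block_nodes)) % 2 == 0, 1.0, -1.0)
    return amplitude * sign

# 5% checks, three grid nodes wide, on a 20 x 30 node grid:
dv_over_v = checkerboard(20, 30, block_nodes=3)
```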
Effect of Starting Model and Regularization on Phase-Velocity Anisotropy
Given that the dominant anisotropy observed in our results is very small, we considered the possibility that our anisotropy parameters were overdamped in the inversion, or that the choice of a zero starting anisotropy biased the solution toward zero-magnitude anisotropy. To test this, we ran our inversions, first with greatly reduced damping on the anisotropy terms, and then again, with our preferred damping and a 5% E-W or N-S starting anisotropy across the entire study area. The results of these tests are shown in Figure 6 and Figures S6 and S7 (footnote 1). These figures show clearly the areas where changes in damping or starting anisotropy affect the results and those areas that are insensitive to these parameters. Of particular importance to our results is that when we include a starting 5% anisotropy, the inversion adjusts both the magnitude and the fast direction to match those of the inversions in which we had no anisotropy in the starting model. This, together with the test in which we reduced the damping parameter, suggests that the small magnitudes we observe in anisotropy are not the result of insensitivity of the inversion to anisotropy but are in fact required by the data.
Vertical Resolution of Shear-Wave Velocity Structures
We calculate the RMS average misfit between the phase velocities used as data in the shear-velocity inversion and the phase velocities predicted for our preferred shear-wave velocity model over all periods at each point, and we plot these misfits in Figure S8 (footnote 1). To assess the ability of our inversion to recover the structures we observe, we perform recovery tests on the simplified uppermost mantle structures seen in two orogen-parallel cross sections. We calculate the predicted phase velocities for the inputted model (Figs. 7A and 7C) and then perform the shear-velocity inversions using the same starting model and same standard deviations assigned to the phase velocities at those grid nodes in our actual preferred inversion. The results of the recovery test are shown in Figures 7B and 7D. These recovery tests show a slight tendency to smear anomalies vertically. The magnitude of the velocity deviations is also not fully recovered. However, the basic shapes of the structures are clearly recognizable, as are the patterns of velocity deviations. We therefore limit our interpretation to those aspects of our results.
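The misfit map referred to above is a per-node RMS over periods between the phase velocities used as data and those predicted by the final shear-wave model; a minimal sketch with assumed array shapes:

```python
import numpy as np

def rms_misfit_map(observed, predicted):
    """Per-node RMS misfit between observed and model-predicted phase velocities.

    observed, predicted : (n_nodes, n_periods) phase velocities in km/s
    Returns an (n_nodes,) array suitable for plotting in map view (cf. Fig. S8).
    """
    return np.sqrt(np.nanmean((observed - predicted) ** 2, axis=1))
```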
Phase Velocities
The results of our phase-velocity inversions can be seen in Figure 8 and Figures S9 and S10 (footnote 1). The phase-velocity deviations in Figure 8 and Figure S9 reflect the difference between the starting phase-velocity maps (which account for crustal thickness variations and sedimentary basins) and the absolute velocities determined by the inversion. These deviations therefore emphasize significant differences in crust and mantle structures not accounted for by the sedimentary basins and crustal thicknesses included in the starting velocity model. Azimuthal phase-velocity anisotropy is also plotted on these figures and is discussed below. In general, shorter periods (<77 s) are dominated by high phase velocities across most of our study area. Phase velocities are on the order of 3%-5% faster than the predicted starting phase velocities, with the highest velocities occurring to the northwest and moderately fast anomalies occurring closer to the coast. There are a number of distinct exceptions. The first is located in northern Virginia and eastern West Virginia. Phase velocities here are on the order of 2%-3% lower than the predicted starting phase velocities, putting this area in distinct contrast with surrounding areas at shorter periods. Another distinct anomaly is visible at 33 and 40 seconds (and to a lesser extent at 45 s), where there is a low-velocity region centered on the Cape Fear River beneath the Cape Fear Arch; both parallel the border between North Carolina and South Carolina near the coast. Low velocities are also observed at most periods in northern Florida and southeastern Georgia. Moderately lower phase velocities are also observed along and east of the Fall Line, and to a lesser extent in northern Georgia. Finally, we observe a distinct region of low velocities in central Kentucky at 33-45 seconds.
Starting with 77 seconds (Fig. 8), phase-velocity maps show broad regions of lower than predicted phase velocities over much of the eastern seaboard, extending west to the Appalachian orogen (at 77 seconds) and by the longest periods, across much of Tennessee, West Virginia, and Kentucky. One notable exception is in Alabama, where slightly higher than predicted velocities persist even to long periods.
Our results are broadly consistent with recently published continental-scale and regional-scale phase-velocity maps (Schmandt et al., 2015;Pollitz and Mooney, 2016;Shen and Ritzwoller, 2016;Zhao et al., 2017), although some of the smaller-scale structures in our phase-velocity maps are blurred or are difficult to see in other studies. For example, while most models show low velocities in the region of northern Virginia, most do not observe a decrease in velocities along the Cape Fear River or east of the Fall Line. This is likely simply due to a difference in resolution and grid-node spacing employed by these larger spatial-scale inversions.
Azimuthal Anisotropy
The results of the anisotropy terms of our phase-velocity inversions are shown in Figure 8 and Figure S9 (footnote 1). We plot only those measurements that are the most robust: those that are insensitive to starting model and regularization as described in the Resolution section (Fig. 6). The direction of the bars at each node shows the fast direction, and the length of each bar indicates the percent anisotropy. Percent anisotropy is also shown in the diamond located at each node, color coded from black (no anisotropy) to red (2% anisotropy). Any node with anisotropy greater than 2% is shown with a white diamond. There are very few white diamonds, indicating that anisotropy recovered by our inversions is first and foremost small. We also note that fast directions seem to follow few consistent patterns. To facilitate comparison with previous results, we have also plotted on Figure 8 and Figure S9 the boundaries of subregions defined by the teleseismic shear-wave splitting studies of Long et al. (2015) and Yang et al. (2017), which are broadly, though not entirely, consistent with one another. We recognize that XKS shear-wave splitting studies are integrated measurements from the core-mantle boundary to the surface, whereas our results are sensitive only to a specific range of depths for any given period. What follows is a description of previous determinations of anisotropy for each region (from west to east) and a comparison of these findings to our results over all periods and to new teleseismic shear-wave splitting measurements from the stations of the SESAME array. While beyond the scope of this paper, future work might include a more detailed analysis of the relative contributions of anisotropy at different depths to the anisotropy observed here and through XKS splitting studies.
Northwest Corner: Regions A3 and B2 (Yang) and Region E (Long)
This region lies north and west of the highest elevations of the modern Appalachian orogen (Fig. 1), in an area that is dominantly west of the Grenville Front (Whitmeyer and Karlstrom, 2007). SKS splitting results indicate fast directions that are close to, if not identical to, absolute plate motion (APM) (Fig. 8; Fig. S9 [footnote 1]; Long et al., 2015; Yang et al., 2017). Uppermost mantle anisotropy determined from Pn arrivals suggests large variability in fast directions immediately below the Moho, ranging from due north-south in central and western Tennessee to NE-SW in southern Indiana (Buehler and Shearer, 2017). Our results are consistent with significant variability in fast directions, both laterally and as a function of period/depth. We do not observe a consistent trend at or close to APM. At shorter periods, we see some indication of north-south fast directions in western Tennessee and perhaps NE-SW fast directions in the northwesternmost corner of our resolved study area, consistent with the results of Buehler and Shearer (2017).
Interior Appalachian Orogen: Regions C3 (Yang) and B (Long)
SKS splitting measurements along the interior of the orogen from previous studies generally indicate fast directions that are parallel to both the strike of the orogen and APM (e.g., Fouch et al., 2000;Wagner et al., 2012b;Long et al., 2015;Yang et al., 2017), largely in agreement with SKS splitting fast directions obtained at SESAME stations in this region . Pn fast directions are also generally oriented NE-SW except in northwestern Virginia and central and eastern West Virginia, where fast directions rotate to a more N-S orientation (Buehler and Shearer, 2017). Our results indicate orogen-parallel fast directions are more common to the southwest and western portions of this area, though this is not consistent across all frequencies. Longer periods generally show stronger orogen-parallel fast directions than shorter periods, though the anisotropy observed at the longest periods also tends to be very small (<1%).
Outer Appalachian Orogen and Coastal Plain: Regions C2 (Yang) and C (Long)
This area comprises the eastern portions of North and South Carolina, as well as the southeastern half of Virginia and portions of central Georgia. Long et al. (2015) and Wagner et al. (2012b) find dominantly null shear-wave splitting results across this region. In contrast, Yang et al. (2017) find in addition to these nulls a significant number of shear-wave splitting measurements that indicate a rotation from the orogen-parallel fast directions to the west to a more NNE-SSW-trending fast direction along the coast, particularly in North Carolina. Fischer et al. (2015) found similar NNE-SSW fast directions from their analysis of TA and permanent stations in the Carolinas. However, in central Georgia, SKS fast directions from the dense stations of the SESAME array manifest a strong back-azimuthal dependence that is consistent with N-S fast directions at shallower depths and APM-parallel fast directions at greater depths. Uppermost mantle anisotropy from Pn analyses suggests dominantly N-S fast directions at the easternmost margin of North Carolina and Virginia but a gradual rotation to NW-SE in central North Carolina, eastern Virginia, and most of South Carolina, reaching W-E azimuths in central Georgia (Buehler and Shearer, 2017). Our observations are dominated by a mix of N-S or NNE-SSW fast directions or near-null anisotropy across most of this area at all periods. In central Georgia, the N-S or NNE-SSW fast directions are consistent with the shallower layer of azimuthal anisotropy indicated by the SESAME SKS splitting, even at periods of 143 s, suggesting that the deeper layer of anisotropy inferred from SESAME data lies at greater depths than can be imaged with surface waves. In the eastern portion of Region C in central and eastern North Carolina, northeastern South Carolina, and southeastern Virginia, our results show variable anisotropy ranging from NNW-SSE to NNE-SSW anisotropy at shorter periods, consistent with the results of Yang et al. (2017) and Fischer et al. (2015). At longer periods in this region, anisotropy values are generally very small and are more consistent with the results of Long et al. (2015).
Suwannee Terrane: Regions C4 (Yang) and D (Long)
This area includes Florida and southernmost Georgia and Alabama. All previous studies indicate dominantly east-west fast directions, rotating somewhat to NE-SW along the Atlantic coast (e.g., Long et al., 2015; Buehler and Shearer, 2017; Yang et al., 2017). Our resolution for seismic anisotropy is limited this far south, but to the extent that we resolve it, our observations are consistent with those of existing SKS shear-wave splitting studies.
Shear-Wave Velocity Structure
Our shear-wave velocity results are shown in map view in Figure 9 and in cross section in Figures 10-12. Given the heterogeneity of the crust across our study area, it was necessary to include the shorter-period, ambient-noise phase-velocity data in our shear-wave velocity inversions. However, the primary focus of this paper is on upper-mantle structure, as constrained by the earthquake-induced, phase-velocity maps that are the product of our research. As such we will focus our discussion on structures observed below the Moho.
High-Velocity Layer in the Uppermost Mantle
A high shear-wave velocity layer (4.6-4.7 km/s) extends across much of our study area between the Moho and 100-200 km depth. This layer is interrupted or altered only in very discrete regions, which are discussed below. The thickness of this layer is variable. To the northwest, velocities greater than 4.6 km/s persist to depths of at least 150 km, but in areas of the southeast, the base of this layer shallows to 75-90 km depth. Over most of the study region, this layer appears to thin uniformly from northwest to southeast ( Fig. 11; cross sections X1-X4), showing only moderate variation parallel to the strike of the orogen (Fig. 10). However, in Alabama, the high-velocity layer is particularly thick (cross section X4), extending to at least 150 km depth. The presence of an uppermost-mantle, high-velocity layer has been seen in previous shear-wave velocity inversions of the SEUS (e.g., Yuan et al., 2014;Schmandt et al., 2015;Biryol et al., 2016;Pollitz and Mooney, 2016;Shen and Ritzwoller, 2016;Burdick et al., 2017;Savage et al., 2017). Previous surface-wave inversions (e.g., Yuan et al., 2014;Schmandt et al., 2015;Pollitz and Mooney, 2016;Shen and Ritzwoller, 2016;Savage et al., 2017) have shown very similar structures to those shown here, with some exceptions as discussed below. Teleseismic body-wave tomography studies (e.g., Biryol et al., 2016;Burdick et al., 2017), while less sensitive to depth variations in the uppermost mantle, do show evidence of high uppermost-mantle velocities with a particularly thick, high-velocity layer across Alabama, central Tennessee, and Kentucky.
Virginia Anomaly
The most prominent anomaly in map view is the "bulls-eye" located in western Virginia and eastern West Virginia. This feature is most strongly observed at only four grid nodes in map view ( Fig. 9 and Fig. S10a [footnote 1]). The size of this anomaly in map view is therefore controlled, to some extent, by the grid-node spacing we employ in our phase-velocity inversion. However, in our test of finer grid-node spacing (Fig. S3a [footnote 1]), the phase-velocity maps do not show a significant change in the size or shape of this anomaly. It is unlikely that the anomaly is much larger than observed in our inversions, because the regularization employed would, if anything, tend to smear out an anomaly of this size; although it is possible that the anomaly is smaller. Our checkerboard tests (Fig. 5) indicate that an anomaly that is 2 × 2 grid nodes wide can be resolved at the shorter periods in question without significant smearing, giving us confidence in our results. In cross sections A, X1, and D1 (Figs. 10-12, respectively), velocities of <4.4 km/s extend from the base of the crust to ~90 km depth, underlain by somewhat less slow velocities persisting to at least 200 km depth. Our recovery tests (Fig. 7) indicate that a similar anomaly would be smeared downward and the velocity deviation reduced by the inversion. This suggests that the actual velocity structure is shallower and/or slower than the anomaly in our preferred model.
Almost all previous tomographic inversions that cover the eastern United States show some evidence of reduced seismic velocities in northern Virginia and easternmost West Virginia (e.g., Schmandt et al., 2015; Biryol et al., 2016; Pollitz and Mooney, 2016; Shen and Ritzwoller, 2016; Buehler and Shearer, 2017; Burdick et al., 2017; Savage et al., 2017). However, the regional model of Pollitz and Mooney (2016) does not show the low-velocity anomaly as shallow as it is recovered here. That study uses a somewhat different methodology than the one presented here. Another possible explanation for the observed difference is the present study's use of a revised crustal thickness map and sediment thicknesses both for the phase-velocity inversion and the shear-wave velocity inversions. The presence of this feature at periods as short as 33 seconds gives us confidence in the shallow depth of our recovered anomaly.
Rift Anomalies
In a few discrete areas, the high-velocity layer that underlies the crust across much of our study area does not extend all of the way to the base of the crust. For example, along the Rome Trough and Reelfoot Rift in central and western Kentucky (cross sections C, D, X2, and X3; Figs. 10 and 11), the high-velocity layer is separated from the Moho by 10-20 km. For the Reelfoot Rift (cross sections D and X3; Figs. 10 and 11), the high-velocity layer appears to be deflected downward while maintaining constant thickness beneath the sub-Moho low-velocity layer. In contrast, the high-velocity layer beneath the low velocities that coincide with the Rome Trough (cross sections C and X2; Figs. 10 and 11) does not appear to be deflected downward at all. The reduced sub-Moho velocities can be seen clearly in map view at 60 km depth in central and western Kentucky (Fig. 9). These shallow low velocities are not seen in earlier continent-scale surface-wave inversions (e.g., Yuan et al., 2014;Schmandt et al., 2015;Shen and Ritzwoller, 2016), but they are observed in regional surface-wave inversions in the eastern United States (e.g., Chen et al., 2016;Pollitz and Mooney, 2016;Savage et al., 2017). However, the shallow low-velocity anomalies observed in this study contrast with the more dramatic and deeper low-velocity structures observed by Chen et al. (2016) beneath the southwestern extension of the Reelfoot Rift, which lies outside our study region. Similarly, the continent-wide Pn tomographic inversion of Buehler and Shearer (2017) shows reduced uppermost-mantle velocities along the Reelfoot Rift, but these do not extend as far to the east as those presented here.
Cape Fear Arch Anomaly
The moderately low-velocity anomaly at 60 and 75 km depths along the North Carolina-South Carolina border can be seen in map view and in cross sections A (Fig. 10), X2 (Fig. 11), and D2 (Fig. 12). Here, shallow low velocities appear above the high-velocity layer. The high-velocity layer is not thinned to accommodate this low-velocity feature but appears deflected downward around the subcrustal low velocities by 25-50 km, resulting in a greater maximum depth of this high-velocity layer beneath this low-velocity anomaly. This apparent downward deflection of the high-velocity layer in this area is significantly greater than that observed beneath the rifts to the northwest. To the southwest (cross section D2) and south (cross section A) of the arch, the downward-deflected, high-velocity layer appears to shallow and possibly thin as it approaches the South Georgia Rift Basin. Northeast of the arch (cross section D2), the high-velocity anomaly is located immediately below the crust but appears thicker than the high-velocity anomaly to the southwest of the arch. To the north and northwest of the arch (cross sections A and X2), the high-velocity layer is directly sub-Moho and transitions abruptly to its offset position beneath the arch to the south and southeast of the Fall Line. To our knowledge, this anomaly has not previously been discussed in the literature, although the regional surface-wave model of Pollitz and Mooney (2016) does show some indication of a downward deflection of the high-velocity layer beneath the Triassic basins and coastal plain.
Northern Georgia Anomaly
Another significant disruption of the high-velocity layer is observed across northern Georgia and western South Carolina (Fig. 9; 60-105 km depth). Here, the 4.6-4.8 km/s high-velocity layer is broken up by a region where velocity deviations are close to zero, and absolute shear-wave velocities are ~4.5 km/s. In some locations, this average velocity region extends as far north as central Tennessee. While not as pronounced as the low-velocity anomaly in northern Virginia, this comprises a significant disruption of the otherwise fairly uniform >4.6 km/s high shear-wave velocity layer observed in the uppermost mantle across most of our study area. This is perhaps the most subtle of the anomalies presented here, but its proximity to the increased station density of the SESAME deployment, along with our resolution tests, gives us confidence that this feature is well resolved. A similar feature is recovered by Shen and Ritzwoller (2016) at 70 km depth, and to a lesser extent by Savage et al. (2017) at 50 km depth. The location of this lower-velocity region also coincides with a reduction in Moho reflectivity found by Hopper et al. (2016) beneath the Blue Ridge Mountains and with decreased Pn velocities found by MacDougall et al. (2015) across the northernmost stations of the SESAME array.
DISCUSSION
One of the important questions this study seeks to address is the extent to which the modern mantle lithosphere in the SEUS is composed of inherited structures from earlier plate boundary processes and to what extent the existing mantle lithosphere reflects processes that have occurred since the region resumed a passive margin setting. We can also address the question of how these structures, whether inherited from plate boundaries or evolved in a passive margin setting, correlate with ongoing tectonism observed at the surface, such as seismicity and deformation. It is beyond the scope of this study to explore fully any possible causality between our seismic observations and the geological observations to which they correlate. We propose some possible explanations, and we hope this study will spur further investigations into the evolution of mantle lithosphere in passive margin settings.
A key facet of the shear-velocity structure is the marked variation in velocity in the depth range typically associated with the mantle lithosphere (Moho to 150 km). Multiple possible origins for these anomalies exist, including: (1) variations in lithospheric thickness; and (2) variations in the properties of intact mantle lithosphere, for example due to metasomatism and/or infiltration by partial melt. Depending on their location, such variations may be related to the transition from Proterozoic craton to Phanerozoic lithosphere, Phanerozoic tectonism (orogenesis and rifting), or more recent alteration of the lithosphere (e.g., delamination, other forms of thermal or mechanical lithospheric loss, or metasomatism and/or melt percolation).
In this section, we evaluate the plausibility of these alternative scenarios, drawing on complementary geological, geochemical, and geophysical constraints as available.
Evidence for Structures Inherited from Past Plate Boundary Processes
We find strong evidence for inherited structures in our shear-wave velocity model in a number of different regions. Figure 13 compares our shear-wave velocity model at 75 and 150 km depth to gravity anomalies, magnetic anomalies, and geologic observables at the surface. At 150 km depth, we see a strong correlation between the location of the Grenville Front (defined as the western margin of Grenville deformation) and the boundary between high velocities to the northwest and lower velocities to the southeast. We interpret these high velocities, along with the high-velocity layer that is dominant across much of our study area, as evidence of mantle lithosphere. Areas northwest of the Grenville Front represent those regions with the thickest mantle lithosphere, consistent with a continental core unaffected by plate boundary processes for >1 Ga. The only disruption to this region of thick mantle lithosphere is in those areas affected by the failed rifting of Rodinia-the Reelfoot Rift and Rome Trough. In these areas, we see evidence of structural changes to the mantle lithosphere in the form of reduced velocities in the uppermost mantle and, in some cases, a downward deflection of the lower boundary of the mantle lithosphere. These structures are in contrast to the larger velocity reductions observed in the southwestern Reelfoot Rift, where alteration of the deep lithosphere appears to have been more pronounced (Chen et al., 2016).
We do not, however, see consistent evidence for a change in mantle lithospheric structure across the New York-Alabama (NY-AL) Magnetic Lineament, the Central Piedmont Shear Zone, the Suwannee Suture Zone, or the Brunswick Magnetic Anomaly. These proposed terrane boundaries might be expected to show evidence of different mantle lithosphere corresponding to the differing provenances of the accreted terranes. While some cross sections do show changes near these boundaries, these changes are not consistent and do not appear to follow the terrane boundaries in map view. In the case of the NY-AL Magnetic Lineament and the Central Piedmont Shear Zone, this is consistent with earlier studies that propose that Grenville basement rocks underlie most of the SEUS due to the formation of an Alleghanian thrust sheet (e.g., Cook and Vasudevan, 2006;Duff and Kellogg, 2017).
There may be some evidence of alteration of mantle lithosphere associated with the location of the Suwannee Suture Zone (e.g., Mueller et al., 2014;Boote and Knapp, 2016;Hopper et al., 2017). To the north-northwest of the Suwannee Suture Zone in northern Georgia and South Carolina, we observe average shear-wave velocities (~4.5 km/s) in lieu of the high-velocity layer (>4.6 km/s) that is dominant across most of our study area. One possibility is that the average velocities observed north of the suture represent the alteration of mantle lithospheric material as the result of the collision between Laurentia and Gondwana and the accretion of the Suwannee terrane. Seismic velocities could have been reduced by the hydration of the mantle lithosphere above a northwestward-oriented subducting slab (present orientation) (e.g., Whalen et al., 2015). However, this subduction polarity is opposite to the dip of the suture in the crust (e.g., Hatcher, 1972;Cook and Vasudevan, 2006;Hopper et al., 2017). Moreover, subsequent CAMP flood basalts produced extensive diking across the southeastern half of this average-velocity region (e.g., Hames et al., 2000 and references therein). Such extensive volcanism would most likely have dehydrated the mantle lithosphere, resulting in faster seismic velocities across those portions of this feature, which we do not observe. Another possible explanation is that the mantle lithosphere has been gradually removed by a lithospheric foundering event in this zone. Biryol et al. (2016) found evidence for a high-velocity anomaly dipping to the east in the upper mantle beneath our study area. The shallowest portion of their anomaly extends from Alabama to Kentucky, consistent with deeper high velocities observed in our model, both west of the Grenville Front and across much of Alabama. Biryol et al. (2016) argue that this feature is most plausibly explained by downwelling mantle lithosphere drawn from lithosphere farther to the east, which allows the dip to be explained by the westward absolute plate motion of the overriding plate. However, the exact source of this removed mantle lithosphere was difficult for Biryol et al. (2016) to discern due to the reduced resolution of regional teleseismic body-wave tomographic images above 100 km depth. We propose that one possible source location for the foundered lithospheric material may be from this area of average seismic velocities across northern Georgia and South Carolina. However, the absence of significantly slower seismic velocities suggests that the removed material was not simply replaced by hot asthenospheric mantle, as is observed elsewhere. More work is needed to understand better the genesis of this velocity change in the uppermost mantle and its relationship to other upper-mantle seismic velocity structures and/or tectonic history.
Our results are generally consistent with the presence of Grenville lithosphere beneath most of the SEUS north of the Suwannee Suture Zone. This mantle lithosphere is, however, not homogeneous. We see a general thinning of the mantle lithosphere toward the Atlantic coast, consistent with lithospheric thinning due to rifting and continental breakup. There is perhaps some correlation between the location of the South Georgia Rift Basin (SGR) and a sub-Moho low-velocity region similar to what is observed at the Reelfoot Rift and Rome Trough (cross sections B-D; Fig. 10), but this is not consistent (cross sections X4 and D2; Figs. 11 and 12, respectively). However, a number of structures (discussed below) cannot easily be associated directly with inherited Grenville lithosphere or subsequent rifting and likely reflect structures that evolved in the mantle lithosphere while located in a passive margin setting.
Evidence for Structures Associated with Observed Passive Margin Tectonism
In several regions, our observed mantle lithospheric structures correlate closely with passive margin tectonic activity observed at the surface. The clearest evidence of this is the very localized low-velocity anomaly that directly underlies the Eocene volcanism in western Virginia and eastern West Virginia. It is unlikely that such a small-scale structure, unassociated with any other known geologic boundary or feature, would have always had a near absence of mantle lithosphere while surrounded by otherwise relatively uniform and unremarkable mantle lithosphere, leading us to believe this material was lost. Our results do not indicate how this material was lost. However, we would argue there is a strong suggestion that the material was lost during the Eocene. In addition to the strong spatial correlation in map view, thermobarometric modeling indicates that the overlying Eocene basanites last equilibrated at pressures consistent with depths only ~30 km below the Moho (Mazza et al., 2014). This is consistent with our observation that high-velocity mantle lithosphere appears largely absent below the crust in this region. We note that the size of the anomaly is on the order of 100 × 150 km or less in map view. The high-velocity layer surrounding this low-velocity anomaly suggests that the thickness of the mantle lithosphere removed is on the order of 50-70 km. This adjacent high-velocity layer is also indistinguishable in velocity and thickness from the high-velocity layer seen across large regions of our study area, suggesting that the mantle lithosphere surrounding this feature was not thinned or altered by whatever process created this low-velocity anomaly. The exception is due east of the anomaly, where velocities are less fast than those observed elsewhere (4.5-4.6 km/s). These velocities are consistent with other velocities observed east of the Fall Line and may therefore be due to lithospheric thinning along the continental margin rather than to processes attributable to the formation of this low-velocity anomaly. While seismic tomography cannot determine how this low-velocity anomaly formed, possible explanations include thermomechanical erosion of the continental lithosphere (e.g., Ranalli et al., 2007; Foley, 2008 and references therein), possibly assisted by decompression melting due to upwelling induced by variations in lithospheric thickness (e.g., Till et al., 2010), or a very localized delamination of mantle lithosphere due to the removal of eclogitized lower crust as proposed by Mazza et al. (2014, 2017). A further question remains about how such a low velocity can be sustained over a period of 48 m.y. There is no evidence that water was introduced to form these melts, making hydration an unlikely explanation for these low velocities. More work is needed to constrain how such an abrupt lithospheric structure could form and how it might persist for a period of ~48 m.y.

Another example of a correlation between observed mantle lithospheric structure and postrift tectonism lies along the Cape Fear Arch. The location of exhumed Upper Cretaceous sediments adjacent to Upper Oligocene sediments along the coast corresponds closely to an apparent delamination of the mantle lithosphere in our tomographic images. The high-velocity layer is offset from the Moho by up to 50 km and is replaced by moderately lower velocities. The location of this anomaly also corresponds to the local maximum uplift of the Orangeburg Scarp located along the Fall Line (e.g., Rovere et al., 2015).
The Orangeburg Scarp, as a Middle Pliocene shoreline, was presumably developed at uniform elevation at sea level; thus, differential uplift along this scarp provides constraints on deformation across the Cape Fear Arch during the past ~3 m.y.
A number of recent studies have attempted to model this deformation with dynamic topography caused by a combination of glacial isostatic adjustment and large-scale mantle flow patterns identified by tomographic imaging of the upper and lower mantle beneath the United States (e.g., Moucha et al., 2008;Spasojević et al., 2008;Rowley et al., 2013;Liu, 2014). Results across these models differ, but none are able to replicate the short wavelength of the observed uplift across the Cape Fear Arch and Orangeburg Scarp. Recently, Moucha and Ruetenik (2017) added flexure due to sediment loading to their dynamic topography models and were able to successfully model the uplift along the Orangeburg Scarp. It is difficult to know without further modeling studies what the predicted effect of the downward deflection and apparent delamination of the mantle lithosphere beneath the Cape Fear Arch might be on surface elevations. It is also impossible to know the temporal evolution of this structure from the seismic tomography alone. More work is needed to incorporate the effects of these localized lithospheric structures into our understanding of the development of this uplift.
Blue Ridge Escarpment
The steepness of the topography observed across the southern Appalachians, particularly across the Blue Ridge Escarpment, has been explained by a number of different mechanisms ranging from rift flank retreat (e.g., Tucker and Slingerland, 1994;Spotila et al., 2004), to flexural bending associated with continental erosion and sediment deposition (e.g., Pazzaglia and Gardner, 1994;Liu, 2014), to isostatic response to the delamination of the lower crust and mantle lithosphere (e.g., Wagner et al., 2012c). However, we do not find any remarkable structures beneath the high topography of the southern Appalachians. Indeed, the steepest portions of the orogen along the Blue Ridge Escarpment are uniformly underlain by the high-velocity layer we are interpreting as mantle lithosphere (cross sections B and X2, Figs. 10 and 11, respectively).
The only exception is in northwestern Georgia, where the average velocities observed north of the Suwannee Suture Zone extend beneath portions of the southernmost Appalachian Mountains (34.5°N-35°N, cross section C; Fig. 11; 800-650 km, cross section D1; Fig. 12). We do not, however, observe a change in elevation associated with the northern edge of this anomaly.
These observations suggest that any observed uplift or disequilibrium landscapes in the southern Appalachians are not caused by isostatic responses to changes in mantle lithospheric structure. This suggests that if indeed there is an uplift and/or rejuvenation of the southern Appalachians, this is more likely due to dynamic topography effects (e.g., Liu, 2014Liu, , 2015. However, it is also possible that observations of disequilibrium landscapes are due to stream capture and base-level change rather than a net uplift of this ancient orogen (e.g., Prince et al., 2010).
Seismicity and Mantle Lithospheric Structure
The ongoing seismicity in the southeastern United States, as with intraplate seismicity in general, is not well understood. A variety of contributing factors to the development of intraplate seismicity have been proposed, including: inherited zones of weakness, possibly associated with oceanic fracture zones (e.g., Sykes, 1978); thin or weak mantle lithosphere or abrupt changes in lithospheric thickness or strength (Liu and Zoback, 1997;Assumpção et al., 2004;Mooney et al., 2012); the presence of terrane boundaries or sutures (Babuška et al., 2007); the presence of increased fluid pore pressure either from fluctuations in meteoric water or from increased mantle CO 2 emissions (e.g., Brauer et al., 2003;Costain, 2008); intersecting fault zones (Talwani, 1988); the presence of failed rifts, especially from the most recent episode of orogenesis (e.g., Chapman and Beale, 2010;Bartholomew and van Arsdale, 2012); and faults favorably aligned with the regional stress field (Zoback, 1992;Bartholomew and van Arsdale, 2012).
While this study does not have the ability to test all of these hypotheses, it does allow us to look for correlations between the locations of increased seismicity and mantle lithospheric structure. The largest cluster of seismicity within our study area is the Eastern Tennessee Seismic Zone (ETSZ). This northeast-trending band of seismicity is located just northwest of the highest topography of the southern Appalachians. Our results indicate that for the most part, this cluster is located above the high-velocity layer we are interpreting as mantle lithosphere, just to the northeast of where this high-velocity layer ends and is replaced by the average velocity structure in northern Georgia described earlier (cross sections C, X3, and D1; Figs. 10-12, respectively, and in map view; Fig. 13A). This observation is consistent with Mooney et al. (2012), who observed an increase in intraplate seismicity along lateral gradients in lithospheric thickness globally. However, in this case, the seismicity would be almost exclusively located within the region of thicker and presumably stronger mantle lithosphere.
A similar pattern is observed with the Central Virginia Seismic Zone (CVSZ). This cluster, associated with the Mw = 5.8 2011 Mineral, Virginia earthquake, is located just east of the low-velocity anomaly that underlies the Eocene volcanoes described earlier. That low-velocity anomaly is, in turn, the location of the seismic shadow first described by Bollinger and Gilbert (1974). We also do not observe any significant seismicity along the Cape Fear Arch anomaly. While a full assessment of the implications of our model on the causes of intraplate seismicity is beyond the scope of this paper, we hope that these first-order observations might encourage future work on this important topic.
CONCLUSIONS
(1) We see evidence of tectonic inheritance in the greater thickness of the mantle lithosphere to the west of the Grenville Front compared to that of the mantle lithosphere east of the Grenville Front and in the overall continuity of the mantle lithosphere beneath our study area, consistent with previous work suggesting the underthrusting of Grenville basement beneath most of the Piedmont and coastal plains (e.g., Cook and Vasudevan, 2006;Duff and Kellogg, 2017). We see some evidence of structures that may be due to episodes of failed rifting, both in the early Paleozoic and in the Mesozoic.
(2) We also see a number of structures that appear strongly correlated with surface observations of Eocene to recent tectonism. These include evidence for the wholesale removal of a small patch of mantle lithosphere beneath the Eocene volcanics of western Virginia and eastern West Virginia and the apparent delamination of the mantle lithosphere from beneath the Cape Fear Arch in North and South Carolina.

(3) We do not see a strong correlation between the high elevations of the Appalachian Mountains and mantle lithospheric structure. This is in contrast to earlier work using a very limited number of stations that suggested the uplift of the Blue Ridge Escarpment may be due to an earlier episode of mantle lithospheric delamination (Wagner et al., 2012c).

(4) We also do not see a consistent correlation between patterns of seismicity and mantle lithospheric structures. While not all regions of increased seismicity are located above homogeneous mantle lithosphere, it is difficult to draw any direct links between the observed structures and earthquake hazards. More work is needed to understand how mantle lithospheric structures might affect seismic activity in a passive margin setting.

ACKNOWLEDGMENTS

The authors are extremely grateful to Don Forsyth, not only for his inversion codes, but also for his significant help with the inversion of this data set. The SESAME deployment was made possible with the help of countless students, postdocs, technicians, and landowners, without whom we would not have been able to succeed. Instruments were provided by the Incorporated Research Institutions for Seismology (IRIS) Program for Array Seismic Studies of the Continental Lithosphere (PASSCAL). This paper benefitted from discussions made possible by the EarthScope Program Synthesis Workshops held at James Madison University on 18-20 November 2016 and at Brown University on 27-29 March 2017. We would also like to thank Paul Mueller and an anonymous reviewer for their thorough reviews of our manuscript.
A Study on the Status Quo of Chinese College Students' Intercultural Communication Competence
Intercultural encounters have become a fact of daily life in many Chinese cities and towns, yet English learning alone is far from enough for cultivating talents with an international vision, and research on ICC has moved onto the agenda. This paper investigates the ICC status quo of Chinese college students. The questionnaire is designed based on Byram's YOGA and covers four aspects: intercultural knowledge, intercultural skills, intercultural attitude, and critical intercultural awareness. Data from the teacher-questionnaire are used to identify which of the four aspects matters most to ICC for further research on its influencing factors, and data from the student-questionnaire are used to examine presumed related factors of ICC, such as major, gender, cultural curriculums, and experience abroad, in order to describe the ICC status quo of Chinese college students. Ten students (half science and half literary arts) are interviewed to find out how the results come into being. Through the analysis of the teacher-questionnaire data, the author finds that intercultural attitude is of the most importance to ICC, possibly because the acquisition of intercultural awareness, intercultural skills, and intercultural knowledge is determined by intercultural attitude.
Introduction
In China, English has become a compulsory course from primary school on and carries equal weight in the College Entrance Examination. Non-English major students in
Intercultural Communication Competence (ICC)
As for the definition of ICC, different scholars voice different ideas. The simplest and most general one, given by Spitzberg (Samovar & Porter, 2004), is that ICC is "behavior that is appropriate and effective in a given context". It is simple but not easy to grasp. Kim's (Samovar & Porter, 2004) definition is much more detailed: ICC is "the overall internal capability of an individual to manage key challenging features of intercultural communication: namely, cultural differences and unfamiliarity, inter-group posture, and the accompanying experience of stress." Many other scholars have also defined it, such as Chen & Starosta (1997), who noted that ICC was "the ability to effectively and appropriately execute communication behaviors to elicit a desired response in a specific environment", and Byram, who viewed ICC as the ability to interact effectively with people of cultures other than one's own.
According to Hymes (1971), communicative competence comprises four degrees: 1) possibility (knowledge of and ability to use the generative base of language); 2) feasibility (knowledge of whether and to what extent something is possible, and the ability to be practical or feasible); 3) appropriateness (knowledge of language behaviors and its contextual features and the ability to use language appropriately); 4) performance (knowledge of whether and to what extent action is taken with language and the ability to use language to take such action).
Scholars like Gudykunst (1991) did further research on the components of ICC and proposed that ICC should involve affective or relational competence apart from cognitive and behavioral competence. Kohls & Brussow (1995) (Corbett, 2003). Kim (1991) regards ICC as internal to a person: it "should be located within a person as his or her overall capacity or capability to facilitate the communication process between people from differing cultural backgrounds and contribute to successful interaction outcomes".
Byram's Model for Intercultural Communication Competence (ICC)
None of the previous studies has conducted an in-depth investigation of the ICC status quo of Chinese college students by utilizing Byram's Model. According to Byram (1997), learners should master cultural knowledge, acquire communicative skills, develop a positive attitude towards foreign cultures, and cultivate critical cultural awareness in order to communicate interculturally. Therefore, ICC can be evaluated and cultivated in the four aspects of intercultural knowledge, intercultural skills, intercultural attitude, and critical intercultural awareness. Intercultural knowledge relates to two aspects: one is the knowledge about social groups and cultures in one's own country as well as in the target countries or areas, and the other is the knowledge about the intercultural interaction process, both of which are important and are also prerequisites for successful intercultural communication.
Intercultural skills can also be divided into two categories. The first is the "ability to interpret a document or event from another culture, to explain it and relate it to documents from one's own". The second is the "ability to acquire new knowledge of a culture and cultural practices and the ability to operate knowledge, attitudes and skills under the constraints of real-time communication and interaction". In his opinion, knowing another culture is important, but acquiring the skills of analysis and interpretation is more important.
Intercultural attitudes refer to the attitude of openness to otherness and curiosity about other cultures, which are crucial to ICC. People need to be willing and active to appreciate and accept other cultures and, at the same time, they need to have a right understanding of their own culture, that is to say, to keep their own cultural identity. Neither complete rejection nor total acceptance is right.
This study investigates the ICC status quo of Chinese college students through the student-questionnaire, probes into the related factors of ICC by connecting the student- and teacher-questionnaires, and analyzes how the results come into being based on the face-to-face interview.
Research Questions
The research questions are therefore the following four: 1) What is the status quo of Chinese university students' ICC?
2) Among the four aspects (intercultural knowledge, intercultural attitude, intercultural skills, and intercultural awareness), which is most related to ICC?
3) Are major, cultural class, gender, and experience abroad related to ICC? If yes, how? 4) What are the possible reasons for the results?
Research Subjects and Procedures
Both the teacher-questionnaire and the student-questionnaire are based on Byram's YOGA. The questionnaire is designed from four aspects (intercultural awareness, intercultural attitude, intercultural skills, and intercultural knowledge), with each aspect including 10 detailed items to help examine people's intercultural communication competence. The face-to-face interview mainly centers on four points: 1) their own understanding of ICC; 2) whether and how their experiences affect their ICC; 3) their opinion on whether major, cultural class, gender, and experience abroad are related to ICC; 4) their ideas on the possible reasons for the results.
The questionnaires are mostly done during the break time and about a third of the student-questionnaires are done by senior students in their dormitory.
The face-to-face interview is done in different places including the classroom, the dormitory, the canteen and the playground.
Results from Student-Questionnaire
Through descriptive analysis of the data collected from students, the author obtains the ICC status quo of Chinese college students.
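As an illustration of the descriptive analysis behind Tables 1-4, the sketch below computes per-item minimum, maximum, and mean scores for one block of questionnaire items; the item names, sample responses, and 0-5 scoring are hypothetical stand-ins rather than the actual survey data.

```python
import pandas as pd

# Hypothetical responses: rows are students, columns are the 10 items of one
# ICC aspect (e.g., intercultural awareness), each scored on a 0-5 scale.
responses = pd.DataFrame(
    {f"awareness_item_{i}": [3, 4, 2, 5, 3, 4, 0, 3, 4, 3] for i in range(1, 11)}
)

# Item-level descriptives of the kind summarized in Tables 1-4.
summary = responses.agg(["min", "max", "mean"]).T
print(summary.round(2))
```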
It can be concluded from Table 1 that students' intercultural awareness varies greatly, because each item receives the minimum score of 0 and the maximum score of 5. However, the mean values reflecting most students' intercultural awareness fall on 3 to 4, which means they can only recognize cultural differences but cannot keep those differences in mind while communicating with people from a different background.

It can be concluded from Table 2 that most students' intercultural attitude is not negative, because most items receive the minimum score of 1 and all receive the maximum score of 5. At the same time, seven of the ten mean values reflecting most students' intercultural attitude fall on 3.5 to 4, two are 4.11 and 4.10, and only one falls below 3.5.

Results from Teacher-Questionnaire

It can be concluded from Table 5 that most teachers do not think intercultural awareness very important to ICC, because most of the mean values reflecting the significance of intercultural awareness to ICC fall on 3 to 3.5, and only one is 4.1250. It can be concluded from Table 6 that most teachers attach great significance to intercultural attitude, because most of the mean values reflecting its significance to ICC fall on 3.5 to 4.5, and only two are 3.1875 and 3.2500. Moreover, the result that no item receives a minimum score of 0, while three items have a minimum of 2 and two have a minimum of 3, also reflects the importance of intercultural attitude to ICC. It can be concluded from Table 7 that most teachers think the significance of intercultural skills to ICC is relatively great, because most of the mean values fall on 3.3 to 4.0, and only one is 2.9688. It can be concluded from Table 8 that most teachers think the significance of intercultural knowledge to ICC is not great, because most of the mean values fall on 2.5 to 3.5.

Therefore, it can be inferred from the descriptive analysis of teachers' evaluations that intercultural attitude is of the most importance to ICC, so the analysis of the possible factors of students' ICC concentrates on the aspect of intercultural attitude.
Conclusion
Through the data analysis and the face-to-face interviews, the status quo of Chinese college students' ICC can be summarized as follows: Chinese college students are active in intercultural communication, but they often fail because of their limited intercultural knowledge or skills, or they simply avoid such contacts because of previous failures or a lack of confidence in their spoken English. Seven to ten years of English learning do not make them competent in intercultural communication, and ICC should therefore be stressed in teaching if they are to become talents with an international vision for the 21st century.
Through the analysis of the teacher-questionnaire data, the author finds that intercultural attitude is of the greatest importance to ICC. This result possibly arises because the acquisition of intercultural awareness, intercultural skills, and intercultural knowledge is determined by intercultural attitude. Those who have the opportunity to participate in an exchange program at a foreign university usually stay abroad for less than two months and do not have time for in-depth communication or cultural exchange. We suggest that university administrations extend the length of such programs.
Table 1. ICC status quo of Chinese college students reflected from the aspect of intercultural awareness.
Table 2. ICC status quo of Chinese college students reflected from the aspect of intercultural attitude.
Table 3. ICC status quo of Chinese college students reflected from the aspect of intercultural skills.
Table 4. ICC status quo of Chinese college students reflected from the aspect of intercultural knowledge.
Table 5. The significance of intercultural awareness to ICC.
Table 6. The significance of intercultural attitude to ICC.
Table 7. The significance of intercultural skills to ICC.
Table 8. The significance of intercultural knowledge to ICC.
Light-triggered and phosphorylation-dependent 14-3-3 association with NON-PHOTOTROPIC HYPOCOTYL 3 is required for hypocotyl phototropism
NON-PHOTOTROPIC HYPOCOTYL 3 (NPH3) is a key component of the auxin-dependent plant phototropic growth response. We report that NPH3 directly binds polyacidic phospholipids, required for plasma membrane association in darkness. We further demonstrate that blue light induces an immediate phosphorylation of a C-terminal 14-3-3 binding motif in NPH3. Subsequent association of 14-3-3 proteins is causal for the light-induced release of NPH3 from the membrane and accompanied by NPH3 dephosphorylation. In the cytosol, NPH3 dynamically transitions into membraneless condensate-like structures. The dephosphorylated state of the 14-3-3 binding site and NPH3 membrane recruitment are recoverable in darkness. NPH3 variants that constitutively localize either to the membrane or to condensates are non-functional, revealing a fundamental role of the 14-3-3 mediated dynamic change in NPH3 localization for auxin-dependent phototropism. This regulatory mechanism might be of general nature, given that several members of the NPH3-like family interact with 14-3-3 via a C-terminal motif.
D evelopmental plasticity of plants is impressively demonstrated by the phototropic response, through which plants align their growth with incoming blue light (BL) 1 . Shoots typically grow towards the light by generating a lateral gradient of the growth-promoting phytohormone auxin. Here, the hormone concentration is higher on the shaded side as compared with the lit side, resulting in differential growth. It is well established that the phototropins phot1 and phot2 function as primary photoreceptors controlling phototropism in Arabidopsis 2-4 . Phototropins are plasma membrane (PM)-associated, light-activated protein kinases and, indeed, BL-induced autophosphorylation turned out to be a primary and essential step for the asymmetric growth response 5 . In this context, members of the 14-3-3 family were identified as phot1 interactors in Arabidopsis. Eukaryotic 14-3-3 proteins are known to interact with a multitude of polypeptides in a phosphorylation-dependent manner, thereby regulating distinct cellular processes 6 . Plant 14-3-3s are crucial components regulating auxin transport-related development and polarity of PIN-FORMED (PIN) auxin efflux carriers 7 . As yet, however, a functional role of phot1/14-3-3 association could not be proven 5,8 . Furthermore, evidence for trans-phosphorylation activity of phototropins is surprisingly limited. Besides BLUE LIGHT SIGNALING 1 9 and CONVERGENCE OF BLUE LIGHT AND CO 2 1 10 , both of which contribute to regulation of BLinduced stomatal opening, ATP-BINDING CASSETTE B19 11 and PHYTOCHROME KINASE SUBSTRATE 4 12,13 have been shown to be direct substrate targets of phot1. The last two are indirectly or directly involved in regulating phototropism.
The polar localization of PIN proteins within the PM made them likely candidates promoting formation of the auxin gradient that precedes phototropic growth 14 . Indeed, a mutant lacking the three major PINs expressed in aerial plant parts (PIN3, PIN4, and PIN7) is severely compromised in phototropism 15 . Unilateral illumination polarizes PIN3 specifically to the inner lateral side of hypocotyl endodermis cells, aligning PIN3 polarity with the light direction and presumably redirecting auxin flow towards the shaded side 16 . Moreover, the activity of PINs is positively regulated by two protein kinase families from the AGCVIII class, namely PINOID and D6 PROTEIN KINASES 17 . Although phototropins belong to the same kinase class, direct PIN phosphorylation could not be demonstrated 16 . Taken together, signaling events that couple photoreceptor activation to changes in PIN polarization and consequently auxin relocation remain mainly elusive.
In this regard, the PM-associated NON-PHOTOTROPIC HYPOCOTYL 3 (NPH3) might represent a promising component of early phototropic signaling events. It acts downstream of the photoreceptors and appears to be instrumental for auxin redistribution 3,4,18,19 . NPH3 possesses-in addition to the central NPH3 domain-two putative protein-protein interaction domains, a C-terminal coiled-coil (CC) domain, and an N-terminal bric-a-brac, tramtrack and broad complex (BTB) domain 1,20 (Supplementary Fig. 1). Indeed, NPH3 physically interacts not only with the photoreceptor phot1 but also with further early signaling elements, such as ROOT PHOTO-TROPISM 2 (RPT2) 21 -another member of the plant-specific NPH3/RPT2-like (NRL) family-and defined members of the PHYTOCHROME KINASE SUBSTRATE (PKS) family 22,23 . Interestingly, NPH3 exists in a phosphorylated form in darkgrown seedlings and becomes rapidly dephosphorylated upon phot1 activation 24,25 . Later on, the alteration in phosphorylation status was shown to correlate closely with light-driven changes in the subcellular localization of NPH3, which detaches from the PM upon irradiation, forming aggregated particles in the cytosol 26 . As found for light-triggered dephosphorylation 24 , formation of the NPH3 particles is reversible upon darkness or prolonged irradiation 26 . One factor required for the recovery of phosphorylated NPH3 at the PM over periods of prolonged irradiation is its interaction partner RPT2 26 . Altogether, this has led to the current model that the phosphorylation status of NPH3 determines its subcellular localization and function: phosphorylation of NPH3 promotes its action in mediating phototropic signaling from the PM, whereas NPH3 dephosphorylation reduces it by internalizing NPH3 into aggregates 4,18,26,27 . As yet, however, the functional significance of NPH3 (de)phosphorylation remains poorly understood 25,28 .
Here we identified members of the 14-3-3 family as novel interactors and major regulators of NPH3. Our analyses revealed that BL induces phosphorylation of the third last NPH3 residue (S744), which in turn enables 14-3-3 association. Complex formation interferes with the ability of NPH3 to bind to polyacidic phospholipids, resulting in its displacement from the PM. Accumulation of NPH3 in the cytosol causes formation of membraneless condensates. Intriguingly, both PM association and 14-3-3-triggered PM dissociation are required for NPH3 function. Taking the reversibility of the light-induced processes into account, the phototropin-triggered and 14-3-3-mediated dynamic change in the subcellular localization of NPH3 seems to be crucial for its proper function in the phototropic response.
Results
PM association of NPH3 is phospholipid-dependent and requires its C-terminal domain. Although NPH3 is hydrophilic in nature, green fluorescent protein (GFP) tagged NPH3 (GFP:NPH3) (35S or native promoter) localized to the cell periphery in the leaf epidermis of transiently transformed and darkadapted Nicotiana benthamiana ( Fig. 1a and Supplementary Fig. 2c), suggesting PM association as described previously 1,26,27 . As yet, the molecular mechanism of NPH3 membrane recruitment in darkness remains elusive. MACCHI-BOU 4 (MAB4)/ENHANCER OF PINOID (ENP), another member of the NRL family, was recently shown to associate with the PM in a PIN-dependent manner 29 . Besides protein-protein interactions, hydrophobic and protein-lipid interactions can cause membrane anchoring of proteins. Several members of the AGCVIII kinase class-although not phot1-contain a basic and hydrophobic (BH) motif in the middle domain of the kinase. This polybasic motif interacts directly with phospholipids and is required for PM binding 30 . When we applied the BH score prediction 31 to NPH3, two putative BH motifs were identified in its C-terminal domain ( Supplementary Fig. 2a). To examine the importance of electronegativity for NPH3 PM association in the dark, we made use of a genetic system that depletes the polyacidic phosphoinositide (PI) phosphatidylinositol-4phosphate (PI4P) at the PM via lipid anchoring (myristoylation and palmitoylation (MAP)) of the catalytic domain of the yeast SAC1 PI4P phosphatase 32,33 . Transient co-expression of GFP:NPH3 together with MAP:mCherry:SAC1, but not the catalytically inactive version MAP:mCherry:SAC1 DEAD , displaced NPH3 from the PM into discrete cytosolic bodies in darkness (Fig. 1a), reminiscent of the aggregated particles that have been observed upon BL treatment 26,27 . The strong and unique electrostatic signature of the plant PM is powered by the additive effect of PI4P and the phospholipids phosphatidic acid (PA) and phosphatidylserine (PS) [33][34][35][36] . In lipid overlay assays, hemagglutinin-tagged NPH3 (HA:NPH3) bound to several phospholipids characterized by polyacidic headgroups, namely PA as well as the PIs PI3P, PI4P, PI5P, PI(3,4)P 2 , PI(3,5)P 2 , PI(4,5)P 2 , and PI(3,4,5)P 3 (Fig. 1b). HA:NPH3 did neither bind to phospholipids with monoacidic headgroups, such as phosphatidylinositol or PS, nor to phospholipids with neutral headgroups, namely phosphatidylcholine (PC) and phosphatidylethanolamine (PE). Deletion of the C-terminal 51 residues of NPH3 (HA:NPH3ΔC51, still comprising the CC domain, Supplementary Fig. 1) abolished lipid binding, whereas the bacterially expressed C-terminal 51 residues of NPH3 (tagged with glutathione S-transferase (GST), GST:NPH3-C51) turned out to be sufficient to bind to polyacidic phospholipids (Fig. 1b). Moreover, GST:NPH3-C51 bound to large unilamellar liposomes containing the polyacidic phospholipids PI4P or PA, but not to liposomes composed of only neutral phospholipids such as PC and PE (Fig. 1c). Apparently, the C-terminal 51 residues of NPH3 enable electrostatic association with membrane bilayers irrespective of posttranslational protein modifications or association with other proteins. As expected, transient expression of red fluorescent protein (RFP) or GFP-tagged NPH3ΔC51 in N. benthamiana (35S or native promoter) revealed loss of PM recruitment in the dark, as evident by the presence of discrete bodies in the cytosol ( Fig. 1d and Supplementary Fig. 2c). 
This resembles the scenario observed upon co-expression of NPH3 and SAC1 (Fig. 1a), as well as upon transient expression of NPH3ΔC65:GFP in guard cells of Vicia faba 37 . By contrast, deletion of the N-terminal domain (35S::RFP:NPH3ΔN54 or NPH3::GFP:NPH3ΔN54, still comprising the BTB domain, Supplementary Fig. 1) did not affect PM association of NPH3 in darkness ( Fig. 1d and Supplementary Fig. 2c).
An amphipathic helix is essential for phospholipid binding and PM association of NPH3 in vivo. As already mentioned, two polybasic motifs with a BH score above the critical threshold value of 0.6 (window size 11 as recommended for the detection of motifs closer to the termini 31 ) were identified in the C-terminal domain of NPH3: (i) a R-rich motif (R736-R742) close to the C-terminal tail and (ii) a K-rich motif further upstream (W700-M713) ( Fig. 2a and Supplementary Fig. 2a). The latter is predicted to form an amphipathic helix, organized with clearly distinct positively charged and hydrophobic faces. The hydrophobic moment-a measure of the amphiphilicity-was calculated to be 0.58 ( Supplementary Fig. 2b), similar to the PM anchor of Remorin 38 . In order to test the requirement of the two motifs for membrane association, NPH3 mutant variants were generated in both HA:NPH3 and GST:NPH3-C51. Within the R-rich motif, all five basic amino acids were replaced by alanine (NPH3-5KR/A). Furthermore, both hydrophobicity and positive charge of the amphipathic helix were decreased by exchange of four hydrophobic residues (NPH3-4WLM/A) and of four lysine residues (NPH3-4K/A), respectively ( Fig. 2a and Supplementary Fig. 2b). The ability of any of the three NPH3 replacement variants to bind polyacidic phospholipids in vitro was significantly impaired (Fig. 2b, c). Nonetheless, the RFP:NPH3-5KR/A mutant remained PM-associated in the dark when transiently expressed in N. benthamiana (Fig. 2d). To verify that the terminal R-rich motif is dispensable for PM recruitment in vivo, NPH3 was truncated by the C-terminal 28 residues (35S::RFP:NPH3ΔC28 or NPH3::GFP:NPH3ΔC28). Indeed, PM anchoring was unaffected ( Fig. 2d and Supplementary Fig. 2d). By contrast, modification of either the amphiphilicity (35S::RFP:NPH3-4K/A or NPH3::GFP:NPH3-4K/A) or the hydrophobicity (35S::RFP:NPH3-4WLM/A or NPH3::GFP:NPH3-4WLM/ A) of the amphipathic helix gave rise to cytosolic particle-like structures in darkness ( Fig. 2d and Supplementary Fig. 2d). Although these particles differ in shape and size, strict colocalization of the respective NPH3 variants was observed upon co-expression ( Supplementary Fig. 3). Taken together, these experiments revealed the necessity of the amphipathic helix for PM anchoring in vivo and indicate hydrophobic interactions to also contribute to PM association of NPH3. Thus, one attractive hypothesis is that the positively charged residues interact electrostatically with polyacidic phospholipids of the PM followed by partial membrane penetration. By this means, interactions with both the polar headgroups and the hydrocarbon region of the bilayer would be established in darkness, causing anchor properties of NPH3 similar to intrinsic proteins. 14-3-3 Proteins interact with NPH3 via a C-terminal binding motif in a BL-dependent manner. A yeast two-hybrid screen performed in our lab (see ref. 39 ) identified NPH3 as a putative interactor of several Arabidopsis 14-3-3 isoforms, among those representatives of both phylogenetic 14-3-3 groups, the nonepsilon group (isoform omega, Fig. 3a) and the epsilon group (isoform epsilon, Supplementary Fig. 4a) 7 . In contrast to phot1 8 , 14-3-3 isoform specificity was thus not observed for binding to NPH3. Complex formation of NPH3 and 14-3-3 omega was confirmed in planta by co-immunoprecipitation (CoIP) of fluorophore-tagged proteins transiently co-expressed in N. benthamiana leaves (Fig. 3b). 
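The hydrophobic moment reported above for the K-rich stretch (0.58) follows the standard Eisenberg convention: each residue's hydrophobicity is treated as a vector rotated by 100° per residue around an ideal alpha-helix, and the per-residue magnitude of the vector sum is taken. A minimal sketch of that calculation is shown below; the 14-residue sequence is a hypothetical stand-in (the text only gives the W700-M713 boundaries), and the scale values are from the Eisenberg consensus scale.

```python
import math

# Subset of the Eisenberg consensus hydrophobicity scale.
EISENBERG = {"W": 0.81, "L": 1.06, "M": 0.64, "K": -1.50, "A": 0.62,
             "S": -0.18, "G": 0.48, "V": 1.08}

def hydrophobic_moment(sequence, delta_deg=100.0):
    """Per-residue hydrophobic moment of an ideal alpha-helix (Eisenberg)."""
    sin_sum = sum(EISENBERG[aa] * math.sin(math.radians(i * delta_deg))
                  for i, aa in enumerate(sequence))
    cos_sum = sum(EISENBERG[aa] * math.cos(math.radians(i * delta_deg))
                  for i, aa in enumerate(sequence))
    return math.hypot(sin_sum, cos_sum) / len(sequence)

# Hypothetical 14-residue amphipathic stretch (not the actual NPH3 sequence).
print(round(hydrophobic_moment("WLKKSLAKLGVKMM"), 2))
```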
To elucidate the impact of light on 14-3-3/NPH3 complex assembly, transgenic Arabidopsis lines expressing 14-3-3 epsilon:GFP under control of the native promoter 7 and, as control, UBQ10::GFP were employed. Threeday-old etiolated seedlings were either maintained in complete darkness or irradiated with BL (1 μmol m −2 s −1 ) for 30 min. Potential targets of 14-3-3 epsilon:GFP were identified by stringent CoIP experiments coupled with mass spectrometry (MS)based protein identification. As expected, several known 14-3-3 clients 7 were detected by MS and, remarkably, NPH3 emerged as a major 14-3-3 interactor (Supplementary Table 1). Binding capability of characterized 14-3-3 targets, such as the H + -ATPase (AHA1 and AHA2) and cytosolic invertase 1, was not modified by BL treatment. By contrast, NPH3 turned out to be a BLdependent 14-3-3 interactor in planta ( Fig. 3c and Supplementary Table 1). CoIP of fluorophore-tagged proteins transiently co-expressed in N. benthamiana leaves confirmed that physical association of NPH3 and 14-3-3 omega is not detectable in darkness, whereas BL irradiation triggers complex formation (Fig. 3d). Assuming 14-3-3 association to depend on phosphorylation of the target protein, this observation is in apparent contrast to the light-induced dephosphorylation of NPH3 24 . The specific phosphorylatable 14-3-3-binding sequences of numerous target proteins are mostly flexible and disordered 40 . As both the N-and C-terminal domain of NPH3 are predicted to be intrinsically disordered ( Supplementary Fig. 1 41 ), the corresponding truncated versions were analyzed by yeast two-hybrid assays. While NPH3ΔN54 was capable of 14-3-3 binding, deletion of the C-terminal 51 residues (NPH3ΔC51) abolished 14-3-3 association, suggesting that the 14-3-3-binding site-in addition to the membrane targeting motif-localizes downstream of the CC domain ( Fig. 3a and Supplementary Fig. 4a). We therefore exchanged amino acid residues, phosphorylation of which has recently been demonstrated in planta (S722, S723, S744, and S746 42,43 ), for a non-phosphorylatable alanine. Strikingly, 14-3-3 binding was not affected in all but one NPH3 mutant: replacement of S744-the third last residue of NPH3-prevented 14-3-3 association both in yeast ( Fig. 3a and Supplementary Fig. 4a) and in planta (Fig. 3b), suggesting a phosphorylationdependent C-terminal 14-3-3-binding motif (pS/pTX 1-2 -COOH) 44 in NPH3. Phosphomimic variants (NPH3-S744D/ S744E), however, do not allow for 14-3-3 binding ( Fig. 3a and Supplementary Fig. 4a), consistent with the general finding that aspartate and glutamate do not provide good phosphomimetic residues with respect to 14-3-3 binding 45 . Considering that constitutive 14-3-3 complex formation of other plant targets characterized by a C-terminal binding site, such as the H + -ATPase or the transcription factor FD, has been observed in yeast 46,47 , light-independent NPH3/14-3-3 interaction in yeast ( Fig. 3a and Supplementary Fig. 4a) might arise from a promiscuous kinase with a certain preference for terminal motifs.
Fig. 3 legend (continued): The crude extract was immunoprecipitated using GFP beads. Input and immunoprecipitate (IP) were separated on 11% SDS-PAGE gels, followed by immunoblotting with anti-GFP and anti-RFP antibodies, respectively. c Arabidopsis 14-3-3 epsilon interactors were identified by mass spectrometry analysis of anti-GFP immunoprecipitations from etiolated seedlings expressing 14-3-3 epsilon:GFP and either maintained in darkness or irradiated with blue light (BL) (1 μmol m−2 s−1) for 30 min (two biological replicates). Expression was driven by the native promoter. Protein intensities of 14-3-3 client proteins were normalized to relative abundance of the bait protein (Supplementary Table 1). Fold changes in relative abundance (mean ± SD, logarithmic scale) of blue light treatment vs. darkness are given. AHA1, AHA2, Arabidopsis H+-ATPase; CINV1, cytosolic invertase 1; EIN2, ethylene insensitive 2; PhyA, phytochrome A; SPS1, sucrose phosphate synthase 1. d In vivo interaction of 14-3-3 omega:mEGFP and mCherry:NPH3 in transiently transformed N. benthamiana leaves. Expression was driven by the 35S promoter. Dark-adapted tobacco plants were either kept in darkness (D) or treated with BL (10 μmol m−2 s−1) for 40 min. The crude extract was immunoprecipitated using GFP beads. Input, flowthrough (FT) and IP were separated on 11% SDS-PAGE gels, followed by immunoblotting with anti-GFP and anti-RFP antibodies, respectively. Experiments in a, b, and d were performed at least three times with similar results.

14-3-3 Association is required for NPH3 function and its BL-induced PM dissociation. To address the functional significance of 14-3-3 association in vivo, GFP-tagged NPH3 variants were expressed in a T-DNA-induced loss-of-function allele of NPH3, nph3-7 48. GFP:NPH3 was fully functional in restoring the severe impairment of hypocotyl phototropism in nph3-7, regardless of whether expression was driven by the native or the 35S promoter (Fig. 4a and Supplementary Fig. 4b), thus confirming previous data 26,27. By contrast, phototropic hypocotyl bending was still significantly reduced when NPH3 incapable of 14-3-3 association (GFP:NPH3-S744A) was expressed (Fig. 4a and Supplementary Fig. 4b), suggesting that BL-induced interaction with 14-3-3 is required for proper NPH3 function. Both GFP:NPH3 and GFP:NPH3-S744A localized to the cell periphery in the hypocotyl of etiolated transgenic seedlings (Fig. 4b and Supplementary Fig. 4c). Within minutes, however, the BL laser used to excite GFP (488 nm, which activates phototropins) induced detachment of GFP:NPH3 from the PM into discrete bodies/particle-like structures in the cytoplasm (Supplementary Movie 1). This BL-induced shift in subcellular localization is mediated by phot1 activity 26 and, again, could be observed independent of whether expression of GFP:NPH3 was under control of the endogenous (Supplementary Fig. 4c 27) or the 35S promoter (Fig. 4b 26). By contrast, GFP:NPH3-S744A remained mainly PM-associated upon irradiation (Fig. 4b, Supplementary Fig. 4c, Supplementary Fig. 4e, and Supplementary Movie 2). Mutation of the 14-3-3-binding site thus does not affect PM association of NPH3 in darkness but prevents BL-triggered PM dissociation, suggesting that light-induced binding of 14-3-3 proteins to the third last, presumably phosphorylated residue S744 is required to internalize NPH3 from the PM into cytosolic particles.
Nonetheless, the suspected phosphorylation of S744 might per se decrease the interaction of NPH3 with polyacidic phospholipids, hence triggering the PM dissociation. Yet, the appropriate phosphomimic version of NPH3 (S744D) was neither impaired in phospholipid interaction in vitro (GST:NPH3-C51-S744D, Fig. 4c) nor in PM recruitment in vivo (RFP:NPH3-S744D, Fig. 4d). Altogether, the C-terminal domain serves a dual function in determining the subcellular localization of NPH3, as it comprises both the amphipathic helix required for phospholipid-dependent PM association in darkness and the 14-3-3-binding motif mediating BL-triggered PM dissociation. We confirmed our findings in transiently transformed N. benthamiana leaves (Fig. 4e, Supplementary Fig. 4d, and Supplementary Movies 3 and 4). Here, primarily RFP-tagged proteins were employed, as excitation of RFP (558 nm), unlike GFP (488 nm), does not activate phototropins. This enabled us to conditionally activate phot1 by means of the GFP laser (488 nm). It became evident that RFP:NPH3, instead of being directly internalized into discrete bodies, initially detaches from the PM.

NPH3 forms membraneless condensates in the cytosol. BL-induced PM dissociation and particle assembly of RFP:NPH3 in the cytosol seem to be separate and consecutive processes (Supplementary Movie 3). As yet, the identity of these particles has not been determined. RFP:NPH3ΔC51 is devoid of the amphipathic helix and localized to cytosolic particles in darkness (Fig. 1d). Subcellular fractionation clearly illustrated that the lack of the C-terminal region shifts NPH3 from a membrane-associated state to the soluble fraction (Fig. 5a). This reveals a non-membrane-attached state of NPH3 in discrete bodies, as has been suggested for NPH3 aggregates generated upon BL irradiation 26. Apparently, the mechanisms of NPH3 targeting towards and away from the PM are distinct from vesicle-mediated transport of transmembrane proteins. This is in line with the observation that NPH3 is insensitive to an inhibitor of endosomal trafficking 26. Considering the lack of the 14-3-3-binding motif in NPH3ΔC51, 14-3-3 association seems dispensable for NPH3 body formation in the cytosol. To confirm this assumption, we examined NPH3 variants incapable of 14-3-3 binding, namely (i) RFP:NPH3-4K/A-S744A and (ii) GFP:NPH3-S744A, the latter upon co-expression with MAP:mCherry:SAC1. Indeed, prevention of 14-3-3 association did not affect assembly of RFP:NPH3-4K/A-S744A particles in darkness (Fig. 5b; RFP:NPH3-4K/A is shown in Fig. 2d). Similar to GFP:NPH3 (Fig. 1a), GFP:NPH3-S744A localized to cytosolic particles in the dark upon co-expression of SAC1 but not SAC1 DEAD (Fig. 5c). Generation of NPH3 particles is hence feasible in the absence of 14-3-3s and might be due to intrinsic properties of NPH3 when exceeding a critical concentration in the cytosol. Taking constitutive PM association of NPH3-S744A in the absence of SAC1 into account, 14-3-3 association seems to be crucial for initial PM detachment, while formation of discrete bodies in the cytosol occurs as an autonomous process. The dynamic generation and morphology of NPH3 bodies (Supplementary Movie 3) is reminiscent of membraneless biomolecular condensates, which are micrometer-scale compartments in cells lacking surrounding membranes. An important organizing principle is liquid-liquid phase separation driven by multivalent macromolecular interactions, either mediated by modular interaction domains or by disordered regions 49.
NPH3 is characterized by both intrinsically disordered regions and interaction domains such as the BTB and the CC domain ( Supplementary Fig. 1). We performed single-cell time-lapse imaging of RFP:NPH3 body formation to investigate whether NPH3 undergoes transition from a solute to a condensed state in N. benthamiana. Indeed, formation of particle-like structures in the cytosol is initiated after~4 min and the fluorescence intensity per body gradually increased over time as a result of the growth in size (Fig. 5d, e). In contrast to the signal intensity, the number of bodies reached a maximum after~10-15 min and afterwards started to decrease as a result of body fusion (Fig. 5d, f). Worth mentioning, these features are characteristic criteria of biomolecular condensates 49,50 .
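The quantification behind Fig. 5d-f (number of bodies and fluorescence per body over time) amounts to segmenting bright structures in each frame and summarizing them. The following outline, using scikit-image with a plain Otsu threshold and a hypothetical file name, is only meant to illustrate the principle; it is not the segmentation pipeline used in the study.

```python
import numpy as np
from skimage import io, filters, measure

def count_bodies(frame, min_area=5):
    """Segment bright particle-like structures in one fluorescence frame."""
    labels = measure.label(frame > filters.threshold_otsu(frame))
    regions = [r for r in measure.regionprops(labels, intensity_image=frame)
               if r.area >= min_area]
    integrated = [r.mean_intensity * r.area for r in regions]
    return len(regions), (float(np.mean(integrated)) if integrated else 0.0)

# Hypothetical time-lapse stack (frames x height x width).
stack = io.imread("rfp_nph3_timelapse.tif")
for t, frame in enumerate(stack):
    n, intensity = count_bodies(frame)
    print(f"t = {t}: {n} bodies, mean integrated intensity {intensity:.1f}")
```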
Fig. 4 14-3-3 Binding is required for proper NPH3 function in phototropic hypocotyl bending and its light-triggered detachment from the plasma membrane. a Quantification of hypocotyl phototropism (mean ± SD) in etiolated Arabidopsis nph3-7 seedlings expressing either GFP:NPH3 or GFP:NPH3-S744A. Expression was driven by the 35S promoter. Seedlings were exposed for 24 h to unilateral blue light (BL) (1 μmol m−2 s−1) (n ≥ 30 seedlings per experiment, one representative experiment of three replicates is presented). One-way ANOVA with Tukey's post hoc test is shown; different letters mark statistically significant differences (P < 0.05), same letters mark statistically nonsignificant differences. Center line: median; bounds of box: 25th and 75th percentiles; whiskers: 1.5 × IQR (IQR: the interquartile range between the 25th and the 75th percentile). Exact P-values for all experiments are provided in the source data file. b Representative confocal microscopy images of hypocotyl cells from 3-day-old etiolated transgenic Arabidopsis nph3-7 seedlings shown in a. Seedlings were either kept in darkness (D) or treated with BL (~6 min GFP laser). Scale bars, 25 μm. c Lipid overlay assay performed with purified GST, GST:NPH3-C51, and GST:NPH3-C51-S744D. Immunodetection was conducted by using the anti-GST antibody.

Phosphorylation of the 14-3-3-binding site in NPH3 is light-dependent and reversible. In dark-grown seedlings, NPH3 exists as a phosphorylated protein irrespective of phot1 activity 24. Light-induced dephosphorylation of NPH3 is almost a dogma in the literature. It has been recognized as a slight shift in electrophoretic mobility of NPH3 upon SDS-polyacrylamide gel electrophoresis (PAGE) 24 and requires, in accordance with the light-induced formation of particle-like structures in the cytosol 26, the photoreceptor phot1. In the following, (de)phosphorylation of NPH3, represented by a modification of its electrophoretic mobility, will be referred to as "general" (de)phosphorylation of NPH3. Nonetheless, the data presented so far suggest that light-triggered and presumably S744 phosphorylation-dependent 14-3-3 association contributes to NPH3 function, an obvious antagonism to the "dogma of dephosphorylation". A phosphosite-specific peptide antibody (α-pS744) was therefore established (antigen: 734PPRKPRRWRN-S(P)-IS746) and an antibody against the unmodified peptide (α-NPH3) served as control. Examination of GFP:NPH3 in either N. benthamiana leaves or transgenic Arabidopsis lines revealed the typical enhanced electrophoretic mobility upon BL excitation (Fig. 6), indicative of a "general" dephosphorylation 24-26. Intriguingly, the α-pS744 antibody recognized GFP:NPH3, but not GFP:NPH3-S744A, exclusively upon BL irradiation (Fig. 6). BL thus triggers two different posttranslational modifications of NPH3: (i) the phosphorylation of the 14-3-3-binding site (S744) and (ii) a "general" dephosphorylation. Yet, neither of the modifications could be observed for GFP:NPH3-S744A (Fig. 6a). To uncover light-induced 14-3-3 association at the molecular level, an immunoprecipitation of either GFP:NPH3 or GFP:NPH3-S744A was conducted and combined with 14-3-3 far-western analysis. Phosphorylation of S744 indeed enabled binding of purified recombinant 14-3-3 proteins to GFP:NPH3 but not GFP:NPH3-S744A upon SDS-PAGE (Fig. 6a, b).
Prolonged irradiation or transfer of BL-irradiated seedlings to darkness is known to confer PM re-association of NPH3 26, correlating with a reduced electrophoretic mobility, indicative of a "general" re-phosphorylation 24,26. Remarkably, we observed simultaneous dephosphorylation of S744 (Fig. 6b, c), effectively preventing binding of 14-3-3 to NPH3 (Fig. 6b). Taken together, the dark/light-dependent phosphorylation status of S744 determines 14-3-3 association with NPH3. In addition, the phosphorylation status of the 14-3-3-binding site and of NPH3 "in general" is modulated by the light regime in an opposite manner, giving rise to a coinciding, but inverse pattern. Time-course analyses, however, proved S744 phosphorylation of NPH3 to precede "general" dephosphorylation upon BL treatment (Fig. 6c). "General" dephosphorylation of NPH3 has been assumed to determine PM release of NPH3 coupled to particle assembly in the cytosol 4,18,26,27. Our data now clearly indicate S744 phosphorylation-dependent 14-3-3 association to be the cause of PM dissociation, but not of condensate assembly in the cytosol. "General" dephosphorylation might thus be coupled to PM dissociation and/or condensate formation. We examined the "general" phosphorylation status of both GFP:NPH3 and GFP:NPH3-S744A when co-expressed with SAC1. Despite the fact that both NPH3 variants constitutively localized to cytosolic condensates (Figs. 1a and 5c), GFP:NPH3 was phosphorylated in darkness and shifted to the dephosphorylated status upon BL treatment, whereas GFP:NPH3-S744A exhibited a permanently phosphorylated state (Fig. 6d). "General" dephosphorylation of NPH3 is thus not coupled to PM dissociation. Moreover, it is neither a prerequisite nor a consequence of condensate assembly; rather, it requires prior light-triggered S744 phosphorylation and potentially 14-3-3 association (Fig. 6a, d). Taken together, we suggest that BL-induced phosphorylation of S744 provokes (i) 14-3-3 association, which releases NPH3 from the PM into the cytosol, and (ii) "general" dephosphorylation of NPH3.

Cycling of NPH3 might be key to function. The light-triggered and reversible shift in subcellular localization of NPH3 has led to the hypothesis that PM localization of NPH3 promotes its action in mediating phototropic signaling. In turn, NPH3 present in soluble condensates is considered to be inactive 18,26,27. The functional relevance of the transient changes in subcellular NPH3 localization is, however, still not known. To assess the functionality of NPH3 variants constitutively localizing to condensates, GFP:NPH3-4K/A (RFP-tagged version shown in Fig. 2d) as well as GFP:NPH3ΔC51 (RFP-tagged version shown in Fig. 1d) were expressed in the loss-of-function Arabidopsis mutant nph3-7. Worth mentioning, the electrophoretic mobility of GFP:NPH3-4K/A corresponded to the dephosphorylated version of NPH3 and was not modified by light treatment (Fig. 7c), suggesting that "general" phosphorylation of NPH3 might take place at the PM. In line with the hypothesis mentioned above, NPH3 mutants constitutively present in condensates did not restore hypocotyl phototropism (Fig. 7a, b and Supplementary Movies 5 and 6). Contrary to the hypothesis, however, GFP:NPH3-S744A, despite exhibiting constitutive PM localization (Fig. 4b), is also largely incapable of mediating phototropic hypocotyl bending in nph3-7 (Fig. 4a). To verify the significantly impaired activity of permanently PM-attached NPH3, we examined GFP:NPH3ΔC28 in addition.
Comparable to the results obtained in N. benthamiana (RFP-tagged version shown in Fig. 2d), GFP:NPH3ΔC28 remained PM-associated upon activation of phot1 in stable transgenic Arabidopsis lines (Fig. 7b and Supplementary Movie 7), and its electrophoretic mobility was not modified by BL treatment (Fig. 7c). Noteworthy, both NPH3-S744A and NPH3ΔC28 still interacted with phot1 (Fig. 7d), indicating that complex formation at the PM is not compromised. Nevertheless, permanent attachment of NPH3 to the PM turned out to be insufficient for triggering the phototropic response in nph3-7 (Fig. 7a). Taken together, neither NPH3 mutants permanently detached from the PM nor NPH3 versions permanently attached to the PM seem to be fully functional (Fig. 7a, e). So what is the underlying mechanism of NPH3 function? We examined GFP:NPH3ΔN54 (RFP-tagged version shown in Fig. 1d), which associated with the PM in etiolated seedlings (Fig. 7b). Upon irradiation, it (i) became phosphorylated at S744 (Fig. 7c), (ii) exhibited an increased electrophoretic mobility, indicative of a "general" dephosphorylation (Fig. 7c), and (iii) detached from the PM, followed by condensate formation in the cytosol (Fig. 7b and Supplementary Movie 9). Furthermore, all these processes were reverted when seedlings were re-transferred to darkness (Fig. 7b, c). Intriguingly, expression of GFP:NPH3ΔN54 completely restored phototropic hypocotyl bending in nph3-7 (Fig. 7a), as did GFP:NPH3 (Fig. 4a). Thus, 14-3-3-mediated cycling of NPH3 between the PM and the cytosol might be of utmost importance for functionality (Fig. 7e).
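The statistics used for the bending assays (one-way ANOVA followed by Tukey's post hoc test on at least 30 seedlings per line; see the Fig. 4a and Fig. 7a legends) can be reproduced with standard tooling. The sketch below uses invented angle distributions for three genotypes purely to illustrate the procedure; the numbers are not measurements from this work.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical hypocotyl bending angles (degrees), n = 30 seedlings per genotype.
angles = {
    "nph3-7": rng.normal(10, 5, 30),
    "GFP:NPH3": rng.normal(60, 8, 30),
    "GFP:NPH3-S744A": rng.normal(25, 7, 30),
}

# One-way ANOVA across genotypes.
f_stat, p_value = stats.f_oneway(*angles.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

# Tukey's HSD post hoc test; groups not sharing a letter differ at alpha = 0.05.
values = np.concatenate(list(angles.values()))
groups = np.repeat(list(angles.keys()), [len(v) for v in angles.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```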
Discussion
Our data provide novel insight into the molecular mechanisms defining NPH3 function in BL-induced phototropic hypocotyl bending. We applied a combination of genetic, biochemical, physiological, and live-cell imaging approaches to uncover the impact of 14-3-3 proteins on NPH3, in particular its BL-triggered, S744 phosphorylation-dependent and functionally essential release from the PM. Association of NPH3 with the PM is known since decades, but how it is recruited to this compartment is unknown. We demonstrated that NPH3 attaches to the PM in a phospholipid-dependent manner in darkness (Fig. 1a). The electrostatic interaction with polyacidic phospholipids (Fig. 1b, is mediated by four basic residues of an amphipathic helix, the hydrophobic face of which further contributes to PM association (Fig. 2d). We therefore suggest the amphipathic helix to be embedded in the PM inner leaflet with its hydrophobic interface inserted in the hydrophobic core of the bilayer, while the positively charged interface is arranged on the PM surface, interacting with the lipid polar heads. The molecular mechanism underlying PM association of NPH3 is thus different from the NRL protein MAB4/ENP, which is recruited to the PM by interaction with PIN proteins 29 . The amphipathic helix of NPH3 (amino acids 700-713) localizes downstream of the CC domain of NPH3 in its C-terminal region, which also encompasses the 14-3-3-binding site (S744) (Fig. 2a). We discovered that BL induces two distinct posttranslational modifications in NPH3 (Fig. 6): (i) the immediate phosphorylation of S744, which in turn enables association of 14-3-3 proteins with NPH3, followed by (ii) the well-described dephosphorylation, represented by an enhanced electrophoretic mobility of NPH3 ("general" dephosphorylation) [24][25][26] . The-as yet unrecognized-BL-induced NPH3 phosphorylation event linked to 14-3-3 association is of utmost importance, as it is essential for (i) the BL-triggered internalization of NPH3 from the PM (Fig. 4b) and (ii) the function of NPH3 in phototropic hypocotyl bending (Fig. 4a). However, expression of NPH3-S744A, which is incapable of 14-3-3 interaction, partially restored the severe impairment of hypocotyl phototropism in nph3-7 (Fig. 4a). Residual functionality might be due to co-action of this constitutively PM-associated NPH3 mutant with certain members of the NRL protein family. Indeed, RPT2 is required for hypocotyl phototropism at light intensities utilized in our assays 26 and its expression is induced and stabilized by BL treatment 51 . The closest homolog of NPH3, DEFECTIVELY ORGANIZED TRI-BUTARIES 3 (DOT3) 18 is, as yet, functionally uncharacterized. Worth mentioning, RPT2, DOT3, and also MAB4/ENP are capable of interacting with 14-3-3 isoforms representing the two phylogenetic 14-3-3 groups (Supplementary Fig. 5). In each case, exchange of the third last residue (serine) abolished 14-3-3 association in yeast ( Supplementary Fig. 5), suggesting that phosphorylation-dependent 14-3-3 binding is not limited to NPH3 but rather represents a more widespread mechanism of NRL regulation. However, residual activity of NPH3-S744A in phototropic hypocotyl bending might alternatively be caused by its permanent association with the PM per se. Light treatment could induce a reorganization of NPH3-S744A within/along the PM, which might allow for phototropic responsiveness to a certain level. Addressing these alternatives represents a formidable challenge for future research.
NPH3 has been described to re-localize directly from the PM into discrete bodies in the cytosol upon light treatment 26,27 . It became, however, evident that it initially detaches from the PM into the cytosol (Supplementary Movie 3). Here, NPH3 undergoes a dynamic transition from a dilute to a condensed state, resulting in the formation of membraneless biomolecular compartments (Fig. 5a, d). Biomolecular condensates are emerging as an important concept in signaling 52 . Their formation can be driven by multivalent interactions with other macromolecules, by intrinsically disordered regions within a single molecule, or both 49,53 . Interestingly, 14-3-3 proteins are dispensable for condensate assembly in the cytosol, as demonstrated by 14-3-3 binding-deficient NPH3 variants (Fig. 5b, c). Further studies will reveal whether condensate formation of the PM-detached NPH3 is essential for its action.
Fig. 7 Functional relevance of the subcellular localization of NPH3. a Quantification of hypocotyl phototropism (mean ± SD) in etiolated Arabidopsis nph3-7 seedlings expressing GFP:NPH3 variants (35S promoter) and exposed to unilateral blue light (BL) (1 μmol m−2 s−1, 24 h) (n ≥ 30 seedlings per experiment, three replicates). One-way ANOVA with Tukey's post hoc test is shown; different letters mark statistically significant differences (P < 0.05). Center line: median; bounds of box: 25th and 75th percentiles; whiskers: 1.5 × IQR (IQR: the interquartile range between the 25th and the 75th percentile). Exact P-values for all experiments are provided in the source data file. b Representative confocal microscopy images of hypocotyl cells from seedlings shown in a and either maintained in darkness (D), treated with BL (~6 min GFP laser), or re-transferred to D (1 h) after 30 min BL (1 μmol m−2 s−1) (R-D). Scale bars, 25 μm. c Immunoblot analysis of total protein extracts (7.5% SDS-PAGE) from seedlings essentially treated as shown in b. BL treatment: 1 μmol m−2 s−1, 30 min. Dashed line: expected shift in molecular mass; closed/open arrowheads: positions of "generally" phosphorylated/dephosphorylated NPH3 proteins, respectively. d In vivo interaction of RFP:NPH3 variants and phot1:GFP in transiently transformed (35S promoter) and dark-adapted N. benthamiana leaves. Microsomal proteins were immunoprecipitated using GFP beads. Immunoblot analysis of flowthrough and immunoprecipitate (IP) (11% SDS-PAGE) is shown. All experiments were performed at least three times with similar results. e Model depicting the light regime-triggered changes in the phosphorylation status, subcellular localization, and phototropic responsiveness of NPH3. BL-induced and phosphorylation-dependent (S744, blue) 14-3-3 association releases NPH3 from the PM into the cytosol, followed by condensate formation. Residues phosphorylated in darkness (yellow) and dephosphorylated upon BL cause an electrophoretic mobility shift ("general" (de)phosphorylation). Re-transfer to darkness reverts all BL-triggered processes, finally resulting in PM re-association (middle panel). NPH3 variants either constitutively attached to (red flash, left panel) or constitutively detached from (red arrowhead, right panel) the PM are non-functional. Cycling of NPH3 between the PM and the cytosol is suggested to be essential for proper function.

As described above, the light-triggered modifications of the phosphorylation pattern of NPH3 are highly complex. Our observations disproved the view that BL-triggered "general" dephosphorylation events determine PM dissociation of NPH3 18,26,27. First of all, dephosphorylation of NPH3, i.e., a decrease in negative charge, is entirely inappropriate to interfere with membrane association relying on electrostatic interactions with polyacidic phospholipids. Furthermore, investigation of the seven NPH3 phosphorylation sites that were recently identified in etiolated Arabidopsis seedlings revealed that the phosphorylation status of these NPH3 residues was neither required for PM association in darkness nor for BL-induced release of NPH3 into the cytosol 28. By contrast, single-site mutation of the 14-3-3-binding site in NPH3 (S744A) abolished PM dissociation upon BL treatment (Fig. 4b, e), indicating light-induced and phosphorylation-dependent 14-3-3 association to mediate PM release of NPH3. Given that the amphipathic helix localizes ~30-45 residues upstream of the 14-3-3-binding site (Fig. 2a), 14-3-3 binding to NPH3 is expected to induce a substantial conformational change that liberates the amphipathic helix from the PM. The molecular mechanism of NPH3 internalization is hence different from the likewise PM-associated photoreceptor phot1, trafficking of which occurs via vesicles through the endosomal recycling pathway 54. Now, what about the BL-triggered "general" dephosphorylation of NPH3? Based on our findings, this posttranslational modification temporally succeeded light-induced S744 phosphorylation (Fig. 6c). Furthermore, "general" dephosphorylation was coupled to BL-triggered S744 phosphorylation, irrespective of the subcellular localization of NPH3 (Fig. 6a, d). We therefore suspect phosphorylation-dependent 14-3-3
binding to be required for BL-induced "general" dephosphorylation of NPH3 as well-a hypothesis that will be examined by future research.
Re-transfer of BL-irradiated seedlings to darkness triggers (i) dephosphorylation of S744 linked to 14-3-3 dissociation. 14-3-3 release is expected to result in a (re)exposure of the amphipathic helix, which subsequently enables (ii) re-association with the PM and presumably (iii) re-phosphorylation of NPH3, represented by a reduced electrophoretic mobility ("general" re-phosphorylation) (Fig. 6b, c). Intriguingly, neither NPH3 variants that constitutively localize to the PM nor mutant versions constitutively detached from the PM are capable of restoring the severe defect in hypocotyl phototropism in nph3-7. Complementation of the nph3-7 phenotype could exclusively be observed upon expression of NPH3 variants that exhibit a light regime-driven dynamic change in subcellular localization (Fig. 7a-c). In summary, we propose a model where S744 phosphorylation-dependent and 14-3-3-driven cycling of NPH3 between the PM and the cytosol critically determine NPH3 function in mediating phototropic signaling in Arabidopsis (Fig. 7e).
In the past, it has been hypothesized that the light-induced internalization of phot1-first described in 2002 55 -may be coupled to light-triggered re-localization of auxin transporters. Functionality of phot1, however, was unaffected when internalization of the photoreceptor was effectively prevented by PM tethering via lipid anchoring 56 . Altogether, the change in subcellular localization does not seem to be essential for signaling of phot1, but of its downstream signaling component NPH3 (Fig. 7e). Light-induced and 14-3-3-mediated detachment of NPH3 from the PM might hence account for BL-driven changes in PIN polarity required for hypocotyl phototropism. Plant 14-3-3 proteins have been shown to contribute to the subcellular polar localization of PIN auxin efflux carrier and, consequently, auxin transport-dependent growth 7 . NRL proteins in turn act as signal transducers in processes involving auxin (re)distribution in response to developmental or environmental signals 18 , hence providing a likely link between 14-3-3 and PIN polarity. One subfamily of the NRL protein family consists of MAB4/ ENP-like (MEL) polypeptides, playing a critical role in auxin-regulated organogenesis in Arabidopsis 57-59 . MEL proteins exhibited a polar localization at the cell periphery, which was almost identical to that of PIN proteins 60,61 and were recently shown to maintain PIN polarity by limiting lateral diffusion 29 . Thus, one attractive hypothesis is that certain NRL proteins contribute either to the maintenance or to a dynamic change of the subcellular polarity of PIN auxin carriers, thereby regulating auxin (re)distribution. Given that several NRL proteins are able to interact with 14-3-3 via a C-terminal binding motif ( Supplementary Fig. 5), phosphorylation-dependent 14-3-3 association might constitute a crucial mechanism of regulation for NRL proteins and consequently polarity of PIN proteins.
Methods
Plant materials, transformation, and growth conditions. Arabidopsis thaliana (ecotype Columbia-0 (Col-0)) expressing 14-3-3 epsilon:GFP under control of the native promoter has been described recently 7 . Seeds of A. thaliana nph3-7 (SALK_110039, Col-0 background) were obtained from the Nottingham Arabidopsis Stock Centre. T-DNA insertion was confirmed by genomic PCR analysis and homozygous lines were identified. Stable transformation of nph3-7 followed standard procedures.
Seeds were surface sterilized and planted on solid half-strength Murashige and Skoog (MS) medium (pH 5.8). Following stratification in the dark for 48-72 h at 4°C, seeds were exposed to fluorescent white light for 4 h. Seedlings were then grown in darkness for 68 h at 20°C. Subsequently, the etiolated seedlings were either kept in darkness or irradiated with BL (overhead BL (1 μmol m −2 s −1 ) for up to 40 min or, alternatively, treatment with the GFP laser (488 nm) for up to 11 min during confocal observation of hypocotyl cells, as specified in the figure legends).
Independent experiments were carried out at least in triplicates. Representative images are presented.
Agrobacterium-mediated transient transformation of 3-4 weeks old N. benthamiana plants was performed as described 62 . Agrobacterium tumefaciens strain GV3101, transformed with the binary vector of interest, was resuspended in infiltration solution (10 mM MES pH 5.6, 10 mM MgCl 2 , 150 µM acetosyringone) at an OD 600 of 0.1-0.2 and infiltrated into the abaxial epidermis of N. benthamiana leaves. For co-transformation, a 1 : 1 mixture was used. Freshly transformed tobacco plants were kept under constant light for 24 h, subsequently transferred to darkness for 17 h (dark adaptation), and finally either kept in darkness or irradiated (overhead BL (10 μmol m −2 s −1 ) for up to 40 min or, alternatively, treatment with the GFP laser (488 nm) for up to 11 min during confocal inspection of abaxial leaf epidermis cells, as specified in the figure legends). Independent experiments were carried out at least in triplicates. Representative images are presented.
Cloning procedures. A 2.1 kb NPH3 promoter fragment was PCR-amplified from Col-0 genomic DNA and the cDNA of NPH3 was amplified from Col-0 cDNA. The respective primers were characterized by BsaI restriction sites allowing for the usage of the Golden Gate-based modular assembly of synthetic genes for transgene expression in plants 63 . Following A-tailing, the individual PCR products were directly ligated into the pGEM-T Easy vector (Promega, Madison, USA), yielding level I vectors LI A-B pNPH3 and LI C-D NPH3, respectively. Golden Gate level II assembly was performed by BsaI cut ligation and by using the modules LI A-B pNPH3, LI B-C GFP 63 or LI B-C mCherry 63 , LI C-D NPH3, LI dy D-E 63 , LI E-F nos-T 63 , and LI F-G Hygro 63 . All plasmids were diluted to a final concentration of 100 ng/μl. In a 15 μl reaction, 1 μl of each plasmid was incubated with 0.5 μl of BsaI (Thermo Scientific, Waltham, USA), 0.75 μl T4 ligase (Thermo Scientific), and 1.5 μl ligase buffer. Reactions were incubated in a thermocycler for 25 cycles, cycling between 37°C for 2 min and 16°C for 5 min, followed by 37°C for 5 min, 50°C for 5 min, and 80°C for 5 min. Finally, 0.5 μl T4 ligase was added. Following an incubation at 37°C for 1 h, a 3 μl aliquot was transformed into Escherichia coli TOP10.
Cloning of N-terminally fluorophore-tagged NPH3 variants (GFP and/or RFP) into the destination vectors pB7WGR2 and/or pH7WGF2 65 for stable or transient overexpression followed standard GATEWAY™ procedures. Transgenic plants were selected based on the hygromycin resistance conferred by pH7WGF2 and homozygous lines were established. The 35S-driven PHOT1:GFP 54 and the 35S::MAP:mCherry:SAC1/SAC1 DEAD transformation vectors 33 have been described before, respectively.
Site-directed mutagenesis was performed by PCR. PCR products and products of mutagenesis were verified by sequencing.
A complete list of oligonucleotides used for PCR is provided in Supplementary Table 2.
Expression and purification of proteins. For bacterial expression of the Arabidopsis 14-3-3 isoform omega as RGS(His) 6 -tagged protein, the corresponding cDNA was amplified by PCR and cloned into the expression vector pQE-30 (Qiagen, Hilden, Germany). Transformed E. coli M15 was grown in liquid lysogeny broth (LB) medium containing ampicillin (100 μg/ml) and kanamycin (25 μg/ml) at 37°C until an OD 600 of 0.6. Protein expression was induced by adding isopropyl β-d-1-thiogalactopyranoside (IPTG) to a final concentration of 0.5 mM. Following overnight growth at 16°C, bacteria were collected by centrifugation. The pellet was frozen in liquid nitrogen. Following thawing on ice, the cells were resuspended in lysis buffer (50 mM NaH 2 PO 4 , 300 mM NaCl, 10 mM imidazole pH 8.0 using NaOH) containing lysozyme (2 mg/ml). Cells were lysed by sonication. Purification under native conditions was done by using the cleared M15 lysate and Ni 2+ -NTA agarose (Qiagen) according to the manufacturer's protocol.
For bacterial expression of the Arabidopsis NPH3 C-terminal 51 residues fused to glutathione S-transferase (GST), the corresponding cDNA fragment was amplified by PCR and cloned into the GST expression vector pGEX-4T-1 (Cytiva, Marlborough, USA). Transformed E. coli BL21(DE3) was grown in 2 × yeast extract and tryptone (YT) medium containing ampicillin (100 μg/ml) at 37°C until an OD 600 of 0.6. Protein expression was induced by adding IPTG to a final concentration of 0.1 mM. Following overnight growth at 20°C, bacteria were collected by centrifugation. The pellet was frozen in liquid nitrogen. Following thawing on ice, the cells were resuspended in bacterial protein extraction reagent B-PER (5 ml/g fresh weight) (Thermo Scientific). GST fusion proteins were purified from the cleared bacterial lysate using GSH-Sepharose 4 Fast Flow equilibrated with phosphate-buffered saline (PBS) according to the manufacturer's protocol (Cytiva). Elution of bound proteins was achieved by adding 10 mM reduced glutathione in 50 mM Tris pH 8.0. Free GST protein was expressed and purified to serve as a negative control.
Cell-free protein expression. Reactions were performed using the TNT® T7 Quick Coupled Transcription/Translation System (Promega) with 1 μg of vector (NPH3 or variants in pGADT7), 40 μl TNT® Quick Master Mix, and 1 μl 1 mM methionine in a total volume of 50 μl. Protein expression was carried out at 30°C for 90 min. Immunodetection was performed by using an anti-hemagglutinin (HA) antibody (HA-tag encoded by pGADT7).
Phospholipid-binding assays. For lipid binding assays, either NPH3 variants expressed in a cell-free system or purified recombinant GST fusion proteins were applied. Lipid overlay assays using phosphorylated derivatives of phosphatidylinositol (PIP) strips were performed following the manufacturer's instructions (Echelon, Salt Lake City, USA). In brief, membranes were blocked overnight at 4°C in blocking buffer (4% fatty acid-free bovine serum albumin in PBS-T (0.1% Tween-20 in PBS)). Purified proteins (0.1 μg/ml blocking buffer) or 10-50 μl of the cell-free expression reaction (volume adjusted according to prior immunodetection of individual reactions) were incubated with PIP-strip membranes for 1 h at room temperature and washed three times for 10 min with PBS-T. Subsequently, bound proteins were visualized by immunodetection of either GST (GST fusion proteins) or the HA-tag (cell-free expression).
Liposome-binding assays were conducted essentially as described by ref. 66 with slight modifications. All lipids were obtained from Avanti Polar Lipids (Birmingham, USA). Liposomes were prepared from 400 nmol of total lipids at the molar ratios: PC : PE, 1 : 1; PC : PE : PI4P, 2 : 2 : 1; PC : PE : PA, 2 : 2 : 1 by using a mini-extruder (Avanti Polar Lipids) at room temperature. Following centrifugation at 50,000 × g for 15 min at 22°, the liposome pellets were resuspended in 25 μl binding buffer (150 mM KCl, 25 mM Tris-HCl pH 7.5, 1 mM DTT, 0.5 mM EDTA, supplemented with Complete Protease Inhibitor Mixture (Roche)). Purified GST-NPH3-C51 variants in binding buffer were centrifuged at 50,000 × g to get rid of any possible precipitates. Subsequent to an incubation of liposomes and proteins (500 ng in 25 μl binding buffer) on an orbital shaker platform for 45 min, the samples were centrifuged at 16,000 × g for 30 min at room temperature. The liposome pellet was washed twice with binding buffer. Liposome-bound GST-NPH3-C51 variants were detected by immunoblotting with anti-GST antibodies.
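The lipid amounts implied by the stated molar ratios follow directly from the 400 nmol total; the short helper below makes the arithmetic explicit (a trivial illustration, not part of the original protocol).

```python
def lipid_amounts(total_nmol, ratio):
    """Split a total lipid amount (nmol) according to a molar ratio."""
    parts = sum(ratio.values())
    return {lipid: total_nmol * share / parts for lipid, share in ratio.items()}

# PC:PE:PI4P = 2:2:1 with 400 nmol total lipid -> 160, 160 and 80 nmol.
print(lipid_amounts(400, {"PC": 2, "PE": 2, "PI4P": 1}))
```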
Y2H, SDS-PAGE, and western blotting. For yeast two-hybrid analyses, the individual constructs were cloned into the vectors pGADT7 and pGBKT7 (Takara Bio, Kusatsu, Japan), and co-transformed into the yeast strain PJ69-4A. Activity of the ADE2 reporter was analyzed by growth of co-transformed yeast on synthetic dropout (SD) medium lacking adenine.
SDS-PAGE, western blotting, and immunodetection followed standard procedures. Total proteins were extracted from 3-day-old etiolated Arabidopsis seedlings (50 seedlings) or transiently transformed N. benthamiana leaves (2 leaf discs) by directly grinding in 100 μl 2 × SDS sample buffer under red safe light illumination. Chemiluminescence detection was performed with an Amersham Image Quant800 (Cytiva) system.
In addition, the following antibodies were used in this study: anti-GST (1 : 2000), …
CoIP and MS analysis. Arabidopsis seeds expressing 14-3-3 epsilon-GFP (endogenous promoter) and, as control, GFP (UBQ10 promoter) were sown on half-strength MS plates and grown in the dark for 3 days. Subsequently, the etiolated seedlings were either kept in darkness or treated with overhead BL (1 μmol m−2 s−1) for 30 min. Three grams of plant tissue were used under red safe light illumination for immunoprecipitation essentially as described in ref. 67 with slight modifications. The seedlings were ground thoroughly in liquid nitrogen and suspended in lysis buffer (50 mM Tris pH 7.5, 150 mM NaCl, supplemented with Complete Protease Inhibitor Mixture (Roche) and PhosSTOP phosphatase inhibitor cocktail (Roche)) containing 1% Triton X-100. After 30 min incubation on ice, cell debris-removed supernatants were incubated with 50 μl GFP-Trap beads (ChromoTek) for 3 h in the cold room with mild rotation. The beads were washed three times with lysis buffer containing 0.1% Triton X-100, followed by washing with lysis buffer. The final precipitate in Laemmli buffer was analyzed by MS at the Proteome Center Tübingen, University of Tübingen. Following tryptic in-gel digestion, liquid chromatography-MS/MS analysis was performed on a Proxeon Easy-nLC coupled to a Q Exactive HF mass spectrometer (method: 60 min, Top7, higher-energy C-trap dissociation (HCD)). Processing of the data was conducted using MaxQuant software (version 1.5.2.8). The spectra were searched against an A. thaliana UniProt database. Raw data processing was done with a 1% false discovery rate setting. Two individual biological replicates were performed, and proteins identified in only one of the two experiments were omitted from the list of 14-3-3 epsilon-GFP interaction partners. Protein signal intensities of well-known 14-3-3 client proteins (Fig. 3c) were normalized to the abundance of the bait protein. Fold changes in relative abundance of BL treatment vs. darkness (BL vs. D) were calculated (Supplementary Table 1).
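As a rough illustration of the normalization step described above (the protein name and intensity values below are invented placeholders, not data from this study), each client protein's signal can be divided by the bait signal in the same sample before taking the BL vs. darkness ratio:

```python
# Hypothetical sketch of the bait-normalized fold-change calculation;
# all intensities and the "client X" label are placeholders.

def normalized_abundance(intensities, bait="14-3-3 epsilon-GFP"):
    """Express every protein's intensity relative to the bait in that sample."""
    return {prot: val / intensities[bait] for prot, val in intensities.items()}

def fold_change_bl_vs_dark(bl, dark):
    """Ratio of bait-normalized abundances, blue light vs. darkness."""
    bl_n, dark_n = normalized_abundance(bl), normalized_abundance(dark)
    return {p: bl_n[p] / dark_n[p] for p in bl_n if p in dark_n}

dark = {"14-3-3 epsilon-GFP": 1.0e9, "client X": 2.0e7}   # made-up intensities
bl   = {"14-3-3 epsilon-GFP": 1.2e9, "client X": 6.0e7}
print(fold_change_bl_vs_dark(bl, dark)["client X"])        # -> 2.5-fold after BL
```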
Arabidopsis nph3-7 ectopically expressing GFP:NPH3 and N. benthamiana leaves transiently overexpressing fluorophore-tagged proteins were immunoprecipitated under red safe light illumination according to ref. 68. Growth and light irradiation of the plants are specified in the figure legends. Immunoprecipitations were performed with 50 mg of tissue. Ground material was resuspended in solubilization buffer (25 mM Tris pH 8.0, 150 mM NaCl, 1% NP40, 0.5% sodium deoxycholate, supplemented with Complete Protease Inhibitor Mixture (Roche) and Phosphatase Inhibitor Mix 1 (Serva)). After 1 h incubation in the cold room with mild rotation, cell debris-removed supernatants were incubated with 20 μl GFP-Trap beads (ChromoTek) for 1 h in the cold room with overhead rotation. The beads were washed twice with solubilization buffer, followed by two washing steps with 25 mM Tris pH 8.0, 150 mM NaCl. Protein blots were probed directly with an appropriate antibody or, alternatively, used for 14-3-3 far-western analysis.
In vivo interaction of phot1:GFP and N-terminally RFP-tagged NPH3 variants was tested by using solubilized microsomal proteins obtained from dark-adapted N. benthamiana plants ectopically co-expressing the proteins of interest. Solubilization was achieved by adding 0.5% Triton X-100 to resuspended microsomal proteins followed by centrifugation at 50,000 × g for 30 min at 4°C. The supernatant was added to GFP-Trap Beads (ChromoTek) and incubated at 4°C for 1 h. Precipitated beads were washed six times with 50 mM HEPES pH 7.8, 150 mM NaCl, 0.2% Triton X-100. Finally, proteins were eluted by SDS sample buffer and separated by SDS-PAGE.
Hypocotyl phototropism analysis. A. thaliana seedlings were grown in the dark on vertically oriented half-strength MS plates for 48 h. Etiolated seedlings were then transferred to a light-emitting diode (LED) chamber and illuminated with unilateral BL (1 μmol m−2 s−1) for 24 h. Plates were scanned and the inner hypocotyl angle was measured for each seedling using ImageJ 69. The curvature angle was calculated as the difference between 180° and the measured value. For each transgenic line, three biological replicates (n ≥ 30 seedlings per experiment) were performed, alongside the appropriate controls (Col-0, nph3-7) (see Source Data file, phototropism sheets I to VI).
Confocal microscopy. Live-cell imaging was performed using the Leica TCS SP8 (upright) confocal laser scanning microscope. Imaging was done by using a ×63/1.20 water-immersion objective. For excitation and emission of fluorophores, the following laser settings were used: GFP, excitation 488 nm, emission 505-530 nm; RFP, excitation 558 nm, emission 600-630 nm. All confocal laser scanning fluorescence microscopy (CLSM) images in a single experiment were captured with the same settings using the Leica Confocal Software. All experiments were repeated at least three times. Images were processed using LAS X light (version 3.3.0.16799).
Single-cell time-lapse imaging was carried out on live leaf tissue samples from N. benthamiana transiently expressing RFP:NPH3. PM detachment was induced by means of the GFP laser (488 nm) and image acquisition (RFP-laser) was done for the duration of 32 min by scanning 30 consecutive planes along the Z axis covering the entire thickness of an epidermal cell. Z-projection was done for each 3.5 min interval. Five biological replicates were performed.
Data analysis. The angles of phototropic curvature of all analyzed Arabidopsis genotypes (≥ 30 seedlings per genotype) were measured using ImageJ software and analyzed using GraphPad Prism software. Statistical significance of the data was assessed using one-way analysis of variance (ANOVA) followed by post hoc Tukey's multiple comparison test (P < 0.05). Error bars represent standard deviation. The results of all three biological replicates and details of the one-way ANOVA are provided in the Source Data file, phototropism sheets I to VI. For IP-MS data, Student's t-tests were performed using MS-Excel. For all image quantifications related to single-cell time-lapse imaging, randomly sampled unsaturated confocal images (1024 × 1024 pixels, 246 × 246 μm) were used with an image analysis protocol implemented in the ImageJ software 69 as described 70. A random image was selected from the data set and parameters such as local threshold, background noise, object size, and shape were determined. The obtained parameters were used for image analysis of the whole data set following exactly the published step-by-step protocol 70. Unless otherwise stated, graphs present data from a single experiment.
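For readers who prefer an open-source route, a minimal sketch of the curvature-angle statistics is given below. It assumes the workflow described above (curvature = 180° minus the ImageJ inner angle, one-way ANOVA followed by Tukey's post hoc test); the genotype "line-1" and all angle values are placeholders, not the study's data.

```python
# Illustrative sketch (not the authors' GraphPad analysis) of the phototropism statistics.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
inner_angles = {                       # ImageJ inner-angle measurements (deg), invented
    "Col-0":  rng.normal(120, 10, 30),
    "nph3-7": rng.normal(178, 2, 30),
    "line-1": rng.normal(130, 12, 30),
}
# curvature angle = 180 deg minus the measured inner hypocotyl angle
curvature = {g: 180.0 - a for g, a in inner_angles.items()}

f, p = stats.f_oneway(*curvature.values())            # one-way ANOVA across genotypes
print(f"ANOVA: F = {f:.2f}, p = {p:.3g}")

values = np.concatenate(list(curvature.values()))
groups = np.repeat(list(curvature.keys()), [len(v) for v in curvature.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # post hoc Tukey comparisons
```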
|
2021-04-16T13:24:48.694Z
|
2021-04-11T00:00:00.000
|
{
"year": 2021,
"sha1": "9b7d3929aef7703ed123824e37dbd166c1b9d043",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-021-26332-6.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "418fe0a14f7c00f4af7f98f80458664d2c65e59e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology",
"Chemistry"
]
}
|
246478257
|
pes2o/s2orc
|
v3-fos-license
|
Unraveling the Dichotomy of Enigmatic Serine Protease HtrA2
Mitochondrial high-temperature requirement protease A2 (HtrA2) is an integral member of the HtrA family of serine proteases that are evolutionarily conserved from prokaryotes to humans. Involvement in manifold intricate cellular networks and diverse pathophysiological functions make HtrA2 the most enigmatic moonlighting protease amongst the human HtrAs. Despite perpetuating the oligomeric architecture and overall structural fold of its homologs that comprises serine protease and regulatory PDZ domains, subtle conformational alterations and dynamic enzymatic regulation through the distinct allosteric mode of action lead to its functional diversity. This mitochondrial protease upon maturation, exposes its one-of-a-kind N-terminal tetrapeptide (AVPS) motif that binds and subsequently cleaves Inhibitor of Apoptosis Proteins (IAPs) thus promoting cell death, and posing as an important molecule for therapeutic intervention. Interestingly, unlike its other human counterparts, HtrA2 has also been implicated in maintaining the mitochondrial integrity through a bi-functional chaperone-protease activity, the on-off switch of which is yet to be identified. Furthermore, its ability to activate a wide repertoire of substrates through both its N- and C-terminal regions presumably has calibrated its association with several cellular pathways and hence diseases including neurodegenerative disorders and cancer. Therefore, the exclusive structural attributes of HtrA2 that involve multimodal activation, intermolecular PDZ-protease crosstalk, and an allosterically-modulated trimeric active-site ensemble have enabled the protease to evolve across species and partake functions that are fine-tuned for maintaining cellular homeostasis and mitochondrial proteome quality control in humans. These unique features along with its multitasking potential make HtrA2 a promising therapeutic target both in cancer and neurodegeneration.
signal sequences, insulin-like growth factor-binding domains, and serine protease inhibitor domains (human HtrA1, HtrA3, and HtrA4), implicating intra- and inter-species functional divergence. Furthermore, their catalytic activity, which can be allosterically tuned through an intricate rheostatic on/off switch, as well as the modulatory protein-protein interaction domain(s), aka PDZ, has garnered much attention for their immense translational possibilities.
Interestingly, unlike eukaryotes and bacteria, archaeal genomes are devoid of HtrA homologs (Koonin and Aravind, 2002). Although all sequenced Nematoda genomes, including the model organism Caenorhabditis elegans, lack HtrA-like genes, they do encode PDZ-containing proteins (Koonin and Aravind, 2002), thus underscoring the functional relevance of this regulatory domain in various cellular pathways. While bacterial HtrAs have been demonstrated to be involved in protein quality control processes such as protein folding, stress response, and degradation of misfolded cell envelope proteins, this function is manifested in their mammalian counterparts through the elimination of misfolded proteins including growth factors, and the regulation of cell proliferation, migration and apoptosis (Grau et al., 2005; Hou et al., 2005; Kapri-Pardes et al., 2007; Moisoi et al., 2009).
Among the four human HtrAs (HtrA1-4) that have been identified to date, HtrA2 has been most widely studied due to its enigmatic structural characteristics and profound functional relevance. While HtrA2 is found in the mitochondrial intermembrane space (IMS), its paralogs HtrA1, 3, and 4 are mostly found in the secretory pathway. Despite a similar overall structural signature and conserved protease and PDZ domain architecture, these enzymes show significant divergence in their N-terminal regions that might be essential for catering to their distinct functional properties. For example, the N-terminal regions of HtrA1, 3, and 4 include secretory signals, along with insulin-like growth factor binding motifs and Kazal-type S protease inhibitor domains, while HtrA2 contains a mitochondrial localization signal (Figures 1A,B).
HtrA2, with a pyramid-shaped trimeric ensemble, is unique among its peers, being the only known mitochondrial protease with a PDZ domain that identifies exposed hydrophobic regions of misfolded proteins (Li et al., 2002; Clausen et al., 2011; Singh et al., 2011). Furthermore, upon triggering of the apoptotic signal, mature HtrA2 is released from the mitochondrial IMS into the cytosol at the expense of its first 133 amino acid residues (Figure 1B). This series of events exposes an N-terminal tetrapeptide motif (AVPS) that binds to the Inhibitor of Apoptosis Proteins (IAPs) and abates their inhibition of caspases, thus promoting apoptosis. Furthermore, HtrA2 is known to participate in apoptosis through both caspase-dependent and caspase-independent pathways, the latter through its serine protease activity (Hegde et al., 2002; Martins et al., 2002; Verhagen et al., 2002). Apart from its prominent role as a proapoptotic molecule, its involvement in neurodegenerative disorders has also been established through a missense mutation (Ser276Cys) in transgenic mice that exhibited motor neuron degeneration 2 (mnd2), implicating a Parkinsonian phenotype in humans (Jones et al., 2003). Further functional and clinical studies established HtrA2's involvement in several neurodegenerative disorders (Inagaki et al., 2008; Kang et al., 2013; Wagh and Bose, 2018; Bose et al., 2021).
STRUCTURAL FEATURES OF HtrA2
Several efforts over the past decade have been made to capture the structural complexity of this proapoptotic enzyme from various perspectives. Shi and co-workers first provided snapshots of the inactive (S306A), substrate-unbound form of mature HtrA2 in three-dimensional space (Li et al., 2002). The structural data showcased a trimeric pyramidal architecture with the short N-terminal regions upholding the oligomeric ensemble through van der Waals interactions, while three PDZ domains at the base encapsulated the active sites of the protease domains. The protease domain, which embeds a hydrophobic active-site pocket with the catalytic triad (Ser306, His198, and Asp228), forms a compact structural fold comprising seven α-helices and 19 β-strands. Surrounded by several regulatory and specificity loops, this domain is positioned deep within the oligomer, 25 Å above the base of the pyramid (Figure 1C), suggesting the requirement of substantial conformational changes for substrate binding and subsequent cleavage. The core of the pyramid is flanked by the regulatory PDZ domains that recognize and bind to the C-terminal region of their interacting partners. This is achieved through the canonical PDZ binding groove (YIGV) that is integrated into the PDZ-protease domain interface. The structural study also demonstrated that several non-covalent interactions in the substrate-unbound state keep the protease domain in its 'closed' conformation, through inhibitory interference from the surrounding PDZ domains.
Although this structure provided an excellent overview of HtrA2's architecture, the substrate-unbound form of the protease failed to explain the underlying dynamics of its mode of activation. Most importantly, the model's inability to explain the necessity of a trimeric structure for its enzymatic functions, as well as the mode of its distal allosteric regulation, impelled scientists to unravel the minutiae of its interactions from more physiological as well as quantitative perspectives.
ACTIVE SITE CONFORMATION AND MULTIPLE ACTIVATION MECHANISMS OF HtrA2
The pre-defined conserved domains of HtrA2, along with its regulatory (L1, L3, and LD) and specificity (L3, which accommodates the specificity pocket) loops, contribute to the activation mechanism of HtrA2 through multiple regulatory nodes (Figure 1C). Since these dynamic loops were mostly unresolved in the crystal structure, several efforts were made to investigate the multimodal allosteric regulation of the protease as well as to understand the intricacies of HtrA2-mediated substrate cleavage (Martins et al., 2003; Jarzab et al., 2016). Because the allosteric binding partners are also predominantly its substrates (such as IAPs, GRIM-19, and DUSP-9), the stepwise concerted allosteric mechanism, either individually or in collaboration with different activation pathways, could not be unequivocally determined using discrete peptide libraries. To circumvent this problem, Bose and co-workers utilized enzymology and biophysical approaches to understand the intricate coordination between the protease domain and other regions of the protein using full-length binding partners and/or substrates. Using β-casein, the generic substrate of serine proteases, Chaganti et al. revisited the pre-existing model of HtrA2 activation and propounded a new hypothesis that relies on inter-molecular protease-PDZ crosstalk for initial substrate binding at the PDZ domain and its subsequent cleavage (Chaganti et al., 2013). This study identified an interaction between the PDZ domain of one monomer and the serine protease domain of an adjacent one, which led to rearrangement of H65 of the catalytic triad so as to form a proper oxyanion hole. This series of inter-molecular making and breaking of bonds unequivocally demonstrated the requirement of the trimeric architecture for allosteric propagation and activation by capturing the dynamics of the PDZ- and temperature-mediated activation process. Singh et al. built upon the previous studies on N-terminal-mediated activation of HtrA2 (Verhagen et al., 2002) and described the global conformational plasticity and subtle conformational reorientations in the loop regions surrounding the active site involved in this process. Interestingly, using quantitative enzyme kinetics studies, they further demonstrated that N-terminal-mediated activation might also be regulated by PDZ-bound allosteric modulators, and vice versa, to bring the protease to its most catalytically competent state (Singh et al., 2011; Singh et al., 2014).
Although these studies provided a holistic understanding of HtrA2's mode of activation through three distinct yet non-exclusive modes, they did not provide the stoichiometric contribution of the PDZ-protease communication in a step-by-step manner. Using molecular dynamics, protein engineering, structural and chemical biology approaches, two different groups (Toyama et al., 2021) distinctly established the trans-mediated PDZ-protease collaboration that espouses a unique reciprocative mechanism in which the distal PDZ reorients the active site of the adjacent monomer and attunes it for catalysis through a precise synergistic relay of information. This multitiered regulation of HtrA2 activation might be critical toward prevention of untimely proteolysis as well as accurately controlling its involvement in different pathophysiological pathways such as apoptosis, protein quality control, cancer, arthritis, and neurodegeneration, where it cleaves a wide spectrum of substrates in different subcellular locations. This is substantiated by the identification and characterization of protein-protein interactions involving HtrA2 and its substrates such as the Inhibitor of Apoptosis Proteins (IAPs), hematopoietic cell-specific protein-1 (HS1)-associated protein X-1 (Hax-1), dual-specificity phosphatase-9 (DUSP-9), gene associated with retinoid-interferon-induced mortality-19 (GRIM-19) and phosphoprotein enriched in astrocytes-15 (Pea-15) (Chaganti et al., 2019; Acharya et al., 2020; Kummari et al., 2021), which, unlike for other HtrAs, are interestingly not restricted to the C-terminal PDZ domains. The holistic enumeration of HtrA2's activation network has been vividly illustrated in Figure 2 and the mechanism is elaborated in the figure legend. This chain of ground-breaking revelations on the reciprocity of its structural dynamism and its multifarious physiological as well as disease-associated functions, as discussed below, has opened up avenues to regulate HtrA2 functions at various checkpoints toward devising customized therapeutic strategies.
Figure 1 legend (continued): Upon apoptotic trigger, the mitochondrial localization signal (133 residues) from the N-terminus gets cleaved, exposing a tetrapeptide IAP-binding motif (IBM/AVPS) and concomitantly releasing the protease into the cytosol. Subsequent substrate binding at the N- and/or C-termini leads to allosteric protease activation and substrate cleavage as described in the text. (C) The three-dimensional trimeric model adopted from the crystal structure (PDB ID: 1LCY) of HtrA2 highlighting the hidden catalytic triad (rainbow spheres) 25 Å above the base of the pyramid (left side), while a single monomer has been zoomed into for describing the loops (yellow) and domains (N-terminal region: light purple, SPD: pink, PDZ: orange); the catalytic site is shown in the inset (right side). L1, L2, L3 and LD are loops; SPD is the serine protease domain and Linker represents the flexible region between the SPD and PDZ domains.
IS HtrA2 A CHAPERONE?
The neurodegenerative phenotype of mice lacking HtrA2 or harboring the enzymatically inactive mnd2 mutant (S276C) implies that HtrA2 protease activity protects neuronal mitochondria (Jones et al., 2003; Martins et al., 2004). It was earlier speculated that HtrA2 monitors and regulates protein folding in the mitochondria in the way DegP does in the bacterial periplasm. Further studies demonstrated that the unfolded protein response (UPR) induced by tunicamycin or heat shock, as well as the etoposide-activated p53 stress pathway, upregulated expression of the HtrA2 protease (Jin et al., 2003). Like DegP, HtrA2 is also activated by elevated temperatures (Martins et al., 2003). Moreover, both HtrA2 and DegP prefer aliphatic Val or Ile in the P1 position for substrate recognition and cleavage (Kolmar et al., 1996). Despite these similarities, HtrA2 shares strikingly greater structural and functional traits with DegS, which argues against a DegP-like chaperoning function and hints at a closer bearing to DegS. In particular, HtrA2 is protease-active at room temperature (Savopoulos et al., 2000), while DegP is activated only at elevated temperatures (Spiess et al., 1999). In addition, while DegP, with two PDZ domains, folds into a higher-order hexagonal cage (Krojer et al., 2002), the trimeric HtrA2 and DegS (sans the additional PDZ and the necessary longer LA loop) are unable to prevent the entry of correctly folded proteins into the proteolytic sites (Kim et al., 2003; Kim and Kim, 2005; Clausen et al., 2011), thus creating certain equivocacies toward defining its role as a chaperone. Interestingly, the identification of presenilin and amyloid precursor protein as natural substrates of HtrA2 (Gupta et al., 2004) necessitates further studies to resolve the ambiguities surrounding HtrA2's role in unfolded protein aggregation and quality control.
Figure 2 legend (continued): This conformation might be important in some scenarios for preparing the basal protease to readily bind substrates of distinct cellular pathways either/both at the N-terminal and/or the PDZ domains (shown by arrows). (C) Represents N-terminal-mediated allosteric activation of HtrA2, where the basal state (I) binds to the N-terminal binding partners (such as IAPs) of HtrA2, leading to favorable conformational alterations in the distal protease and PDZ domains and thus to an active state (the EN' state, or N-terminal-mediated activated state) as shown in step (V). This conformational state can further be modulated through temperature and/or the PDZ domains, leading to the most active protease (Eact** state), as shown in (VI). This unique model shows intricate crosstalk among distinct activation networks that might or might not be mutually exclusive depending upon specific cellular requirements.
ROLE OF HtrA2 IN APOPTOSIS
HtrA2 was first identified as an IAP-binding protein (Hegde et al., 2002). Its functional similarity with second mitochondria-derived activator of caspase (Smac)/direct IAP binding protein with low pI (DIABLO) established its role as a proapoptotic molecule (Martins et al., 2002; Suzuki et al., 2004). HtrA2, which resides in the mitochondrial IMS, is released into the cytosol after removal of its 133-residue mitochondrial localization signal. This exposes an N-terminal IAP-binding motif (IBM) comprising the tetrapeptide 'AVPS' that is recognized as a binding site for IAPs. Unlike Smac, HtrA2 also cleaves IAPs and hence irrevocably relieves their inhibition of caspases (caspases-3, -7, and -9), thus promoting apoptosis (Yang et al., 2003). Conservation of the IBM motif is found across species: its Drosophila ortholog, with two IBM motifs, attracts DIAP1, enabling its removal by the serine protease activity (Challa et al., 2007). Likewise, the rhesus monkey and rodent orthologs of the protease have maintained the IBM motif, suggesting evolutionary diversification of HtrA2 functions in higher organisms (Vande Walle et al., 2008). Interestingly, the two IAP-related proteins in C. elegans do not appear to be involved in apoptosis regulation (Fraser et al., 1999; Speliotes et al., 2000), suggesting that IAP proteins and the appearance of IAP antagonists like HtrA2 and Drosophila Reaper, Hid, and Grim are recent additions to the apoptotic molecular repertoire. Although human HtrA2 and its evolutionary paralogs bind and degrade many IAP family members, XIAP is found to be the most effective amongst them as it engages a second interaction surface that permits strong caspase inhibition (Eckelman et al., 2006). However, to inhibit caspase activation, cIAP1, cIAP2, and XIAP target bound caspases for ubiquitin-mediated proteasomal degradation (Vaux and Silke, 2005), thus necessitating HtrA2 to cleave all of them. Apart from N-terminal-mediated apoptosis, HtrA2 binds important molecules of the apoptotic pathway through its regulatory C-terminal PDZ domain. The binding of substrates to the hydrophobic YIGV groove allosterically activates the protease for substrate binding and subsequent catalysis. Furthermore, binding to mitochondrial substrates at the early apoptotic stage, such as GRIM-19 and Hax-1, might be important toward attuning the mature protease for its proapoptotic functions before it enters the cytoplasm (Cilenti et al., 2004; Ma et al., 2007; Chaganti et al., 2019; Kummari et al., 2021), where it binds several antiapoptotic proteins including IAPs and the death effector domain (DED)-containing Pea-15 (Trencia et al., 2004). HtrA2 is also capable of inducing caspase-independent apoptosis via its serine protease activity by cleaving several critical cellular molecules such as cytoskeletal proteins (actin, α-/β-tubulin, and vimentin) that are important for upholding cellular integrity (Vande Walle et al., 2007). KIAA1967 and KIAA0251 are two newly identified proteins of the apoptotic pathway that have been found to be substrates of HtrA2 (Vande Walle et al., 2007). A caspase-generated cleavage fragment of KIAA1967 was demonstrated to cause mitochondrial clustering and matrix condensation in apoptotic HeLa cells (Sundararajan et al., 2005), whereas KIAA0251 interacts with the endoplasmic reticulum (ER) membrane protein Bap29, a component known to be required for caspase-8 activation in the ER (Breckenridge et al., 2002).
Taken together, the substrates found and verified for HtrA2, reveal that this protease is involved in the apoptotic process at the cytoskeleton, translation initiation complex, and organelle dismantling levels.
Multiple modes of activation and a variety of substrates in different subcellular locations make HtrA2 omnipresent in the apoptotic pathway. Furthermore, positive allosteric modulation through the distal N-/C-termini and heat, as well as negative regulation of its proapoptotic functions through phosphorylation at Ser212 (Yang et al., 2016), reinstate its enigmatic role in the cell death network. However, definitive in vivo models of HtrA2's contribution to the apoptotic pathway are still lacking, which might be due to the limited number of natural substrates identified to date as well as redundancy in its functions in the cell; this requires further investigation.
HtrA2 IN NEURODEGENERATIVE DISORDERS AND CANCER
The first report of HtrA2's involvement in neurodegeneration came with the identification of its interaction with the Alzheimer's disease-associated protein presenilin-1. This was later substantiated by a homozygous loss-of-function mutation (S276C) identified as motor neurodegeneration 2 (mnd2) in mice (Jones et al., 2003), which was further bolstered by the development of homozygous HTRA2 knock-out mice exhibiting a Parkinsonian phenotype (Martins et al., 2004), thus assigning the HTRA2 gene the PARK13 (Parkinson's disease 13) locus (Strauss et al., 2005; Abou-Sleiman et al., 2006). These critical inputs led to the initiation of several clinical studies involving PD cohorts from various populations across the globe to identify the involvement of HTRA2 and its mutations in PD progression and pathogenesis. However, the data obtained were quite contrasting. For example, a Germany-based clinical study that demonstrated heterozygous G399S and A141S mutations (Strauss et al., 2005; Bogaerts et al., 2008) was later impugned by another study from North America (Simon-Sanchez and Singleton, 2008). However, in vivo studies in transgenic mice harboring the G399S mutation (Casadei et al., 2016) and several other independent clinical investigations of non-overlapping rare HTRA2 mutations in Asian and European populations re-established the correlation between the HTRA2 gene and PD risk (Bogaerts et al., 2008; Lin et al., 2011; Wang et al., 2011). Furthermore, to delve into the loss of enzymatic activity of the S276C mutation in human HtrA2 and correlate it with PD, if any, X-ray crystallographic studies of the mutant were performed to understand the structural correlates of this functional repercussion (Wagh and Bose, 2018). The study provided a structural snapshot of the mutant at atomic resolution, where the inactivity was found to be conferred by loss of a water-mediated H-bond between residues S276 and I270 on the regulatory L2 and LD loops, respectively; however, no clinical study could identify the S276C mutation in any PD patient. Recently, another patient-derived study in the Indian population identified a rare, likely pathogenic mutation (T242M), which is critical for altering mitochondrial homeostasis due to loss of GSK-3β-mediated phosphorylation on HtrA2, leading to uncontrolled cell death with a PD phenotype. Moreover, another contemporary study demonstrates a connection between neuronal death and selective downregulation of HtrA2, revealing its link with Huntington's disease (Inagaki et al., 2008).
Despite these crucial discoveries, several contradictory reports challenge the establishment of HtrA2's role in neurodegeneration. This apparent anomaly in these studies might be due to a lack of focus on close interconnections among several parameters that include alterations in HTRA2, mitochondrial functional aberrations, and neurodegeneration. Therefore, future research endeavors encompassing both genetic and epigenetic interactions underlying the complex pathophysiological network of neurodegenerative disorders might provide a more comprehensive picture of HTRA2's association with these diseases.
While the involvement of HtrA1 in cancer is quite prevalent, there have been only a few direct reports of HtrA2's association with oncogenesis. HtrA2 has been found to be widely expressed in several cancer cell lines where over-expression triggered cell death (Suzuki et al., 2001; Martins et al., 2002). Biopsy sample analyses of specific cancers exhibited altered expression of HtrA2, suggesting its role in those cancers. For example, the level of the protease was found to be substantially lower in endometrial and ovarian cancer tissues (Narkiewicz et al., 2008; Narkiewicz et al., 2009). On the other hand, higher HtrA2 expression in prostate tumors implicated its association with the differentiation of prostate cancer cells (Hu et al., 2006). Furthermore, elevated levels of HtrA2 in gastric cancers link it with this malignancy. However, although the contribution of HtrA2 toward cancer development or regression remains to be conclusively elucidated, future studies using multidisciplinary approaches for delineating the HtrA2-associated extensive apoptotic network and identifying its effect on tumorigenesis might shed more light on this pathophysiological collaboration.
CONCLUDING REMARKS AND FUTURE PERSPECTIVE
Recent progress in the structural and functional characterization of HtrA2 has greatly enhanced our understanding of this fascinating protein. Association of this protease with critical cellular functions such as apoptosis, protein quality control, cell growth, and unfolded protein response implicate it in several diseases including neurodegeneration, arthritis, and cancer. Unfortunately, the complexity of its oligomeric structural constitution and mechanism of activation makes it one of the most complex molecules in the HtrA family of proteases. However, recent advancements in deciphering the multi-layered allosteric modulation of HtrA2 from both structural and functional perspectives provide important cues toward targeting its different functions with specific modulators having desired characteristics.
AUTHOR CONTRIBUTIONS
KB, AC and RB conceived and designed the contents of the review. Manuscript preparation was done by AC, RB and KB. All authors read and approved the manuscript.
|
2022-02-03T14:28:22.466Z
|
2022-02-03T00:00:00.000
|
{
"year": 2022,
"sha1": "13b1cc745640df75c74e791d1d8eeacb258a886c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmolb.2022.824846/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "13b1cc745640df75c74e791d1d8eeacb258a886c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
17520679
|
pes2o/s2orc
|
v3-fos-license
|
Quality of helping behaviours of members of the public towards a person with a mental illness: a descriptive analysis of data from an Australian national survey
Background: Courses such as Mental Health First Aid equip members of the public to perform appropriate helping behaviours towards people experiencing a mental illness or mental health crisis. However, studies investigating the general public's knowledge and skills in relation to assisting a person with a mental illness are rare. This study assesses the quality of mental health first aid responses by members of the Australian public using data from a national survey. Methods: Participants in a national survey of mental health literacy were assigned one of six vignettes (depression, depression with suicidal thoughts, early schizophrenia, chronic schizophrenia, social phobia or post-traumatic stress disorder) and asked an open-ended question about how they would help the character in the vignette. The 6,019 respondents were also asked if and how they had helped a person in real life with a similar problem. Responses to these questions were scored using a system based on an action plan developed from expert consensus guidelines on mental health first aid. Results: The quality of responses overall was poor, with participants scoring an average of 2 out of 12. The most commonly reported actions for both questions were listening to the person, providing support and information and encouraging them to seek appropriate professional help. Actions such as assessing and assisting with crisis were rarely mentioned, even for the depression with suicidal thoughts vignette. Conclusions: The quality of the Australian public's mental health first aid knowledge and skills requires substantial improvement. Particular attention should be given to helping people recognise that anxiety disorders such as social phobia require professional help and to improving responses to a suicidal person.
Introduction
People with a mental illness are encouraged to seek professional help as soon as possible to improve their long-term outcomes. However, only a minority of people meeting the criteria for mental illness utilise mental health services in any given 12-month period [1]. Evidence indicates that people are very likely to endorse informal sources of help, particularly speaking with close family and friends about their problem, as helpful, sometimes rating them more positively than trained health professionals [2,3]. Studies also suggest that the social networks of people with a mental illness can facilitate help seeking [4] and recovery, through practising helping behaviours [5,6]. Helping behaviours are actions performed by people within the social network of an individual experiencing a mental health problem to provide support, facilitate treatment seeking and manage symptoms. These can be first aid responses, when the potential helper initially becomes aware that the person requires help, or they can be part of an ongoing process of support and care that is located within a broader social context [7]. Given the crucial role that social networks can play in assisting someone with a mental illness, it is important that members of the public understand how to provide effective help in these situations; however, educational courses providing practical advice on this topic are rare.
One exception is the Mental Health First Aid (MHFA) course, which educates people on how to provide help to someone developing a mental health problem or experiencing a mental health crisis until the crisis resolves or appropriate professional help is received [8]. The course teaches participants to address several types of mental illness and mental health crises using the ALGEE action plan [8], a mnemonic representing all the activities a first aider uses when helping. ALGEE stands for: Approach the person, assess and assist with any crisis; Listen non-judgementally; Give support and information; Encourage appropriate professional help; and Encourage other supports.
The MHFA course has been extensively evaluated, with the evidence suggesting that participants demonstrate improved knowledge of mental illnesses and mental health first aid skills, more positive attitudes towards appropriate psychological and pharmacological treatments, more confidence in providing support to a person experiencing a mental illness and reduced stigma towards people with mental illness, with these effects lasting up to 6 months after the completion of training [9][10][11][12]. Studies also indicate that people utilise the skills they are taught in the course in real life [13]. This suggests that MHFA training has practical value in promoting effective helping behaviours, benefiting individuals and communities.
Formal investigations of the general population which focus on their current knowledge about appropriate helping behaviours in situations involving mental illness are rare. It is important to understand what capabilities people already have in this area so that researchers and educators have an accurate basis from which to further the public's skills and knowledge and to track improvements or decrements in helping behaviour over time.
One Australian study has investigated helping behaviour towards a hypothetical person with a mental illness in adults. Jorm and colleagues [14] conducted a mental health literacy survey of Australian adults in 2003-2004, assigning participants one of four vignettes and asking an open-ended question about what they would do to help the person in the vignette if they knew and cared about them. Responses were coded into six categories. The study found that encouraging professional help and listening to/supporting the person were the most common answers across all vignettes. Provision of appropriate assistance was more likely from females, people who correctly identified the disorder in the vignette and people with less stigmatising attitudes. This research suggests that while there is some understanding of how to assist someone with a mental illness in the Australian population, further education is necessary.
A similar study of Australian youth has been undertaken more recently using a different method to assess the quality of their first aid responses. Yap and Jorm explored the links between young people's intentions to help someone experiencing a mental illness and their ensuing first aid actions [15]. Two thousand and five 12- to 25-year-old people were randomly presented with one of four vignettes and asked how they would help the person described. In a follow-up interview 2 years later, participants answered the same question about the same vignette, as well as questions about whether any of their family or friends had experienced a problem like that of the individual in the vignette and what the respondent did to help them. While the previous survey of adults used descriptive methods of analysis, the 526 responses from the youth survey were analysed using a system scoring responses according to their quality based on the MHFA ALGEE action plan. The study noted that responses were generally poorly articulated and not specific enough to warrant high scores using this system. Nevertheless, it was found that first aid intentions predicted behaviour at follow-up, with the exception of encouraging appropriate professional help. This relationship also held for behaviours that were deemed unhelpful by health professionals but which were advocated by young people, such as drinking alcohol to relax or forget the problem [16]. This is the first study to explore the prediction of first aid behaviours from intentions and suggests that using the quality scoring system is a useful and comprehensive way of evaluating levels of mental health first aid skills in the general population.
The current study uses a sample of adults from the 2011 National Survey of Mental Health Literacy and Stigma to quantify Australian adults' competency in providing assistance to both a hypothetical and a real person experiencing a mental illness. It aims to provide a more comprehensive assessment of the public's current knowledge standardised against best practice using the same quality scoring system as Yap and Jorm [15]. The analysis includes a correlation between participants' intentions to provide assistance to the hypothetical person and the helping behaviours performed towards a real person to assess whether knowledge about help giving relates to helping behaviour in real life.
Methods
A computer-assisted telephone interview (CATI) was conducted with a community sample of 6,019 Australians aged 15 and over by the research company The Social Research Centre. The sample was contacted by random digit dialling of landline and mobile phone numbers between January and April 2011. Using this approach enabled a more representative data sample to be obtained, as landline-only sampling is likely to undersample young people [17]. The interview lasted approximately 20 min and no remuneration was provided to participants.
Survey interview
The survey was based on a vignette of a person with a mental illness. After providing demographic information, participants were randomly allocated one of six vignettes: depression, depression with suicidal thoughts, early schizophrenia, chronic schizophrenia, social phobia or post-traumatic stress disorder (PTSD), and randomly presented with either a male ('John') or female ('Jenny') protagonist. Each vignette conformed to DSM-IV [18] and ICD-10 [19] diagnostic criteria. The male versions of the six vignettes are reproduced in Table 1.
Survey respondents were next asked to identify what, if anything, they believed to be wrong with John/Jenny, with unprompted responses recorded. This was followed by questions regarding how John/Jenny could best be helped and the likely helpfulness or harmfulness of several interventions (e.g. health professionals, friends and family, self-help strategies, medications and therapies), including an open-ended response to the question: 'Imagine John/Jenny is someone you have known for a long time and care about. You want to help him/her. What would you do?'
Table 1 Vignettes presented to participants (male versions)
Depression with suicidal thoughts
John is 30 years old. He has been feeling unusually sad and miserable for the last few weeks. Even though he is tired all the time, he has trouble sleeping nearly every night. John doesn't feel like eating and has lost weight. He can't keep his mind on his work and puts off making any decisions. Even day-to-day tasks seem too much for him. This has come to the attention of John's boss who is concerned about his lowered productivity. John feels he will never be happy again and believes his family would be better off without him. John has been so desperate, he has been thinking of ways to end his life.
Early schizophrenia
John is 24 and lives at home with his parents. He has had a few temporary jobs since finishing school but is now unemployed. Over the last six months he has stopped seeing his friends and has begun locking himself in his bedroom and refusing to eat with the family or to have a bath. His parents also hear him walking about his bedroom at night while they are in bed. Even though they know he is alone, they have heard him shouting and arguing as if someone else is there. When they try to encourage him to do more things, he whispers that he won't leave home because he is being spied upon by the neighbour. They realize he is not taking drugs because he never sees anyone or goes anywhere.
Chronic schizophrenia
John is 44 years old. He is living in a boarding house in an industrial area. He has not worked for years. He wears the same clothes in all weathers and has left his hair to grow long and untidy. He is always on his own and is often seen sitting in the park talking to himself. At times he stands and moves his hands as if to communicate to someone in nearby trees. He rarely drinks alcohol. He speaks carefully using uncommon and sometimes made-up words. He is polite but avoids talking with other people. At times he accuses shopkeepers of giving information about him to other people. He has asked his landlord to put extra locks on his door and to remove the television set from his room. He says spies are trying to keep him under observation because he has secret information about international computer systems which control people through television transmitters. His landlord complains that he will not let him clean the room which is increasingly dirty and filled with glass objects. John says he is using these "to receive messages from space".
Social phobia
John is a 30-year-old who lives alone. Since moving to a new town last year he has become even more shy than usual and has made only one friend. He would really like to make more friends but is scared that he'll do or say something embarrassing when he's around others. Although John's work is OK he rarely says a word in meetings and becomes incredibly nervous, trembles, blushes and seems like he might vomit if he has to answer a question or speak in front of his workmates. John is quite talkative with his close relatives, but becomes quiet if anyone he doesn't know well is present. He never answers the phone and he refuses to attend social gatherings. He knows his fears are unreasonable but he can't seem to control them and this really upsets him.
Post-traumatic stress disorder
John is a 30-year-old who lives with his wife. Recently his sleep has been disturbed and he has been having vivid nightmares. He has been increasingly irritable, and can't understand why. He has also been jumpy, on edge and tending to avoid going out, even to see friends. Previously he had been highly sociable. These things started happening around two months ago. John owns a newsagent shop with his wife and has found work difficult since a man armed with a knife attempted to rob the cash register while he was working four months ago. He sees the intruder's face clearly in his nightmares. He refuses to talk about what happened and his wife says she feels that he is shutting her out.
Interviewers then asked respondents questions relating to personal and perceived stigma, rating their agreement or disagreement with the statements on a five-point scale. Participants also answered questions about the mental health of their friends and family, firstly establishing if respondents knew anyone with a problem similar to John's/Jenny's. Questions were also asked about the number of affected people, whether the respondent did anything to help the person they knew best, an open-ended question about how they helped the person and whether the close friend or family member sought professional help or treatment. Participants were also asked about their beliefs about the causes of mental illness, their own mental and physical health, and their awareness and knowledge of mental health organisations.
Coding of open-ended responses
The two open-ended questions relating to how the participant would help the person in the vignette and what the participant did to help their close other with a problem similar to John's/Jenny's were scored via the scoring system used by Yap and Jorm [15]. The system is based on the ALGEE action plan taught in the second edition of the Mental Health First Aid (MHFA) course [8] and is available on request from the authors. The action plan was developed through a series of studies designed to create mental health first aid guidelines for the public [20][21][22][23][24][25]. Responses are awarded a point for each component of the action plan they mention (i.e. Approach the person, Assess and assist with any crisis, Listen non-judgementally, Give support and information, Encourage appropriate professional help and Encourage other supports) and given an additional point per category where specific details are given (e.g. 'Encourage the person to see a psychologist' would receive two points for Encourage appropriate professional help). Responses can receive a minimum of 0 and a maximum of 2 points per category, giving a total score representing the quality of the response that ranges from 0 to 12.
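As a rough illustration of how the category scores roll up into the 0-12 total (the category labels and the example response below are ours, for illustration only; the actual coding was done manually by trained raters), the scheme can be sketched as:

```python
# Minimal sketch of the ALGEE quality-scoring scheme: each category scores
# 0 (absent), 1 (mentioned) or 2 (mentioned with specific detail); totals range 0-12.

CATEGORIES = [
    "approach", "assess_crisis", "listen", "support_information",
    "professional_help", "other_supports",
]

def score_response(category_scores):
    """category_scores: dict mapping ALGEE categories to 0, 1 or 2; returns total."""
    total = 0
    for cat in CATEGORIES:
        pts = category_scores.get(cat, 0)
        if pts not in (0, 1, 2):
            raise ValueError(f"{cat}: scores must be 0, 1 or 2")
        total += pts
    return total

# e.g. "I'd listen to her and tell her to see a psychologist"
example = {"listen": 1, "professional_help": 2}
print(score_response(example))   # -> 3 out of a possible 12
```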
To ensure the reliability of the assigned scores, a rater first scored 60 sample open-ended responses from a previous trial using the ALGEE scoring system and compared these with consensus scores for the same responses determined by the creators of the MHFA course and the scoring system [9]. This process serves as the gold standard for training raters and assessing scoring validity [9,15]. Inter-rater reliability was calculated using Pearson's r for each category and for the total score, as shown in Table 2. Secondly, the rater and the ALGEE scoring system developers each independently coded 80 randomly selected open-ended responses from the 2011 National Survey of Mental Health Literacy and Stigma. Forty responses were taken from the question 'Imagine John/Jenny is someone you have known for a long time and care about. You want to help him/her. What would you do?' and 40 responses were taken from the question 'What did you do to help the close friend/ family member you know who had a problem similar to John's/Jenny's?' Inter-rater reliability is shown in Table 2 and evidences very high inter-rater reliability overall. A rater then scored each of the responses in each question of interest, clarifying with the scoring system developers as necessary.
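Computing the agreement statistic itself is straightforward; the snippet below (with invented scores) shows the Pearson's r calculation that would be applied per category and to the total score when comparing a trainee rater against the consensus scores.

```python
# Illustrative inter-rater agreement check; all scores are made up.
from scipy.stats import pearsonr

trainee_total   = [2, 3, 1, 0, 4, 2, 3, 1, 2, 5]   # trainee rater's total scores
consensus_total = [2, 3, 1, 1, 4, 2, 2, 1, 2, 5]   # consensus (gold-standard) scores

r, p = pearsonr(trainee_total, consensus_total)
print(f"inter-rater r = {r:.2f} (p = {p:.3g})")
```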
Statistical analyses
Pre-weights were initially administered to all data to adjust for the respondents' chances of selection and the dual-frame survey design. A population weight was also applied to account for the survey's over-sampling of university-educated and English-speaking background participants and under-sampling of males and younger adults. The data were analysed using percent frequencies, means and standard deviations. Post hoc nonparametric tests were used to examine differences between vignettes, as the majority of variables were heavily skewed and unable to be transformed due to the categorical nature of the data. Kruskal-Wallis tests were initially employed to assess whether any overall differences between the vignettes existed. Where the overall effect was significant, Mann-Whitney tests were used to compare individual vignettes to all other vignettes to further evaluate the significance and direction of results. Due to the number of post hoc tests run, a Bonferroni correction was applied to the significance level, so that results are reported when p < .008. Effect size estimates for the Mann-Whitney tests, in the form of Pearson's r, were also calculated as an additional point of comparison between vignettes [26]. Equivalent parametric post hoc tests were performed for the total scores for the intention and behaviour questions, as these variables were normally distributed. Violations of Levene's test for equality of variances were observed in all but one of the significant post hoc analyses; t-test values with unequal variances assumed are reported in these instances. Cohen's d effect sizes are reported for these results. All analyses were performed using SPSS version 20 and Intercooled Stata 12.
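A minimal sketch of this nonparametric pipeline is given below. It assumes the comparisons described above (a Kruskal-Wallis test across vignettes, then each vignette compared with the pooled remainder at a Bonferroni-adjusted alpha of about .008, with r = Z/√N as the effect size from the normal approximation, tie correction omitted); all scores are randomly generated placeholders rather than survey data, and the original analyses were run in SPSS/Stata, not Python.

```python
# Illustrative sketch of the Kruskal-Wallis / Mann-Whitney comparisons with
# Bonferroni correction and r = Z / sqrt(N) effect sizes (toy data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
vignettes = {name: rng.integers(0, 8, 200)          # toy total quality scores
             for name in ["depression", "depression/suicidal", "early schiz.",
                          "chronic schiz.", "social phobia", "PTSD"]}

h, p = stats.kruskal(*vignettes.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3g}")

alpha = 0.05 / 6                                     # Bonferroni correction (~.008)
for name, scores in vignettes.items():
    rest = np.concatenate([v for k, v in vignettes.items() if k != name])
    u, p = stats.mannwhitneyu(scores, rest, alternative="two-sided")
    n1, n2 = len(scores), len(rest)
    mu, sigma = n1 * n2 / 2, np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma                             # normal approximation of U
    r = abs(z) / np.sqrt(n1 + n2)                    # effect size estimate
    flag = " *" if p < alpha else ""
    print(f"{name}: U = {u:.0f}, p = {p:.3g}{flag}, r = {r:.2f}")
```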
Ethics
Oral consent to participate in the study was obtained from all participants prior to beginning the interview. This study was approved by the University of Melbourne Human Research Ethics Committee.
Results
The response rate for the survey was 44%, defined as the number of completed interviews (n = 6,019) out of the number of potential participants who could be contacted and confirmed as in scope (n = 13,636). There were 4,323 interviews conducted on landlines and 1,696 interviews were conducted on mobile phones. There were 2,670 males (44.4%) and 3,349 females (55.6%) sampled. Table 3 shows the number of respondents who received each vignette, the number of people who answered the question 'Imagine John/Jenny is someone you have known for a long time and care about. You want to help him/her. What would you do?' (hereafter referred to as 'intention' or 'the intention question') and the number of people who answered the question 'What did you do to help the close friend/family member you know who had a problem similar to John's/Jenny's?' (hereafter referred to as 'behaviour' or 'the behaviour question'). There were no significant differences between people receiving the different vignettes with regards to age, gender, marital status, education level, country of birth or location.
Results for intention question
Of the 6,019 survey respondents, 5,937 (98.6%) answered the intention question and were scored using the ALGEE criteria. Additional file 1: Table S1 contains the percentage frequencies of all the ALGEE components and their scores by vignette for this question. Kruskal-Wallis tests indicated that significant differences existed among all groups across all ALGEE criteria (p < .008). Significant results are summarised in Table 4.
Approach the person
Across all vignettes, how to approach the person in the vignette was not clearly articulated, with 86.8% of the sample receiving a score of 0 for this criterion and 1.1% receiving a score of 2. The mean score for Approach the person was 0.15 (SD = 0.38). Post hoc Mann-Whitney tests indicated that respondents who did not receive the chronic schizophrenia vignette were significantly more likely to score highly on this criterion.
Assess and assist with any crisis
Almost no respondents detailed assessing and assisting the person in crisis. In the depression with suicidal thoughts vignette, where a crisis is apparent, 19 responses received a score of 2 and 21 answers received a score of 1. This means that 40 of the 1,001 people who answered the intention question for the depression with suicidal thoughts vignette recognised that John/Jenny was experiencing a mental health crisis. However, the number of people mentioning an Assess and assist with crisis action for the depression with suicidal thoughts vignette was more than double that of the other vignettes. Respondents receiving the depression with suicidal thoughts vignette were more likely to score highly on this criterion while people receiving the social phobia vignette were less likely to score highly than those receiving other vignettes. Of the total sample of 6,019, 1.3% scored either 1 or 2 points for this criterion, with the mean score being 0.02 (SD = 0.18).
Listen non-judgementally
Many respondents recognised and articulated the need for helpers to talk and listen to the person in the vignette. Of the respondents, 38.2% overall received scores of 1 or 2 for this component. Of note here is that, while many respondents stated that they would talk and/or listen to the person, only 140 of these people detailed how they would perform this action (e.g. listen empathically, validate feelings), which was necessary to receive 2 points. Participants receiving the depression and PTSD vignettes scored significantly higher, while people assigned the chronic schizophrenia and social phobia vignettes scored significantly lower compared to the other vignettes. The overall mean score for the Listen non-judgementally criterion was 0.40 (SD = 0.54).
Give support and information
Almost half of the responses to the intention question (47.9%) received scores of 1 or 2 for this criterion. Again, the majority of people provided superficial responses (M = 0.50, SD = 0.56), suggesting that while the public recognises the need to support people with mental illness, they may not necessarily understand what types of support or information can be provided. In the chronic schizophrenia and social phobia vignettes, more people than not stated this as a helping action. Post hoc comparisons confirmed that the people receiving these vignettes were significantly more likely to score highly on this component compared to participants receiving other vignettes. Compared to the other vignettes, participants receiving the depression and early schizophrenia vignettes were significantly less likely to provide answers of a high quality.
Encourage appropriate professional help
Approximately half of the respondents (48.9%) overall supplied an answer that was awarded 1 or 2 points for this component, with a mean score of 0.91 (SD = 0.93) across all responses. Of these responses, 2,331 were allocated 2 points for correctly stating a specific type of professional help, such as a GP, psychologist or psychiatrist. In all but the social phobia vignette, well over half of the responses mentioned encouraging professional help of some description for the person in the vignette. Mann-Whitney tests indicated that people receiving the depression and depression with suicidal thoughts vignettes were significantly more likely to score highly, while respondents to the social phobia vignette were significantly less likely to recommend professional help of any description.
Encourage other supports
The majority of respondents overall (89.3%) scored 0 for this criterion, with only eight people in total receiving a score of 2. Four of the eight responses came from the depression with suicidal thoughts vignette. The mean score for Encourage other supports was 0.11 (SD = 0.32). Mann-Whitney tests indicated that significantly higher scores were achieved by respondents assigned the early schizophrenia and social phobia vignettes, while people receiving the chronic schizophrenia and PTSD vignettes scored significantly lower.
Total score
The scores for each ALGEE category were summed for each response to give a total score. These scores ranged between 0 and 7 out of a possible maximum score of 12.
The majority of people scored between 1 and 3 (M = 2.08, SD = 1.16). Seven people were awarded a score of 7 (0.1% of the total sample), with four of these coming from people receiving the depression with suicidal thoughts vignette, two from the depression vignette and one from the PTSD vignette. A one-way ANOVA suggested that significant differences between vignettes existed (F(5, 5931) = 21.09, p < .008). Participants receiving the depression and depression with suicidal thoughts vignettes scored significantly higher than participants receiving other vignettes, while people who received the chronic schizophrenia and social phobia vignettes were significantly less likely to receive high scores.
Results for behaviour question
Respondents were asked if they knew anyone with a problem similar to John's/Jenny's. Fifty-four percent of the total sample (3248 respondents) said they did. There were 2,649 people who stated that they had done something to help the person, and 2,615 respondents (43.4% of the total sample) provided an answer to the question 'What, if anything, did you do to help the person?' Additional file 1: Table S2 contains the percentage frequencies of all the ALGEE components and their scores by vignette for this question. Kruskal-Wallis tests indicated that significant differences existed among the Listen non-judgementally and Give support and information criteria (p < .008). Significant results are summarised in Table 4 and indicated with a (b) symbol.
Approach the person
This component was poorly articulated, with 91.7% of respondents receiving a score of 0. Nine respondents were awarded a score of 2, representing 0.4% of respondents who answered the question; four of these people received the depression vignette. The mean score for this criterion was 0.09 (SD = 0.29).
Assess and assist with any crisis
Again, the majority of answers did not mention this component (M = 0.02, SD = 0.19). Overall, 25 respondents scored 1 point (1.1%) and 17 respondents scored 2 points on this criterion (0.5%). Seven of the responses awarded a score of 2 received the depression with suicidal thoughts vignette, but people receiving this vignette were not more likely than respondents receiving other vignettes to achieve higher scores on this criterion.
Listen non-judgementally
Across all vignettes, 37.4% of people scored either 1 or 2 points for Listen non-judgementally, with 55 people in total receiving a score of 2. The mean score for this component was 0.39 (SD = 0.53). This reflects similar numbers to the Listen non-judgementally criterion for the intention question. Respondents receiving the PTSD vignette scored significantly higher on this criterion and people assigned the chronic schizophrenia vignette were less likely to score highly.
Give support and information
The majority of answers (66.6%) stated that at least one form of support or information had been provided to the person known in real life. Across all vignettes, the most common score was 1, with the mean score being 0.75 (SD = 0.61). People receiving the PTSD vignette were significantly less likely to provide high-quality support and information compared to those receiving other vignettes.
Encourage appropriate professional help
The majority of respondents for the behaviour question received scores of 0 for this criterion (57.4%). Of the people who received scores of 1 or 2, 891 could correctly name an appropriate source of professional help, giving them a score of 2. The mean score for Encourage appropriate professional help was 0.79 (SD = 0.92).
Encourage other supports
Again, most respondents (89.3%) scored 0 for this criterion, with eight people across all vignettes listing at least two types of other supports they provided or suggested to the person needing help. Five of these scores came from people receiving the social phobia vignette. The mean score for this criterion was 0.11 (SD = 0.33).
Total score
The total scores for the behaviour question also ranged between 0 and 7 across all vignettes. The mean score was 2.15 (SD = 1.15), with 85.4% of scores falling between 1 and 3. Three people received scores of 7 (0.1%); 21 people scored a total of 6 (0.8%). A post hoc ANOVA indicated that there were significant differences between vignettes (F(5, 2,609) = 2.43, p = .03). Respondents who received vignettes other than the chronic schizophrenia vignette attained significantly higher totals.
Correlations between intention and behaviour
The correlation between the intention and behaviour variables was computed using the total scores from each question. The correlation between these variables was 0.203 (p < .01), indicating that intention and behaviour had a significant but low correlation overall. Examination of this correlation as a function of each vignette showed similarly low, significant correlations for depression (0.289, p < .01), depression with suicidal thoughts (0.239, p < .01), early schizophrenia (0.162, p < .01), social phobia (0.221, p < .01) and PTSD (0.151, p < .01), with chronic schizophrenia the only non-significant correlation (0.078, p > .05). Correlations between the intention and behaviour questions by ALGEE component were all significant at the p < .01 level, with correlations ranging between 0.061 (Listen non-judgementally) and 0.231 (Encourage appropriate professional help).
Discussion
This study aimed to better understand the ability of the Australian population to provide assistance to a hypothetical and a real person with a mental illness and to compare this with current best practice in the field of mental health first aid. The study also investigated the links between what people stated they would do to help someone with a mental illness and how this was reflected in their real-life actions.
Findings from this study
The findings indicate that, overall, responses from the public outlining appropriate helping behaviours towards people with a mental illness score poorly according to the ALGEE system. An average score of 2 indicates that people can state two actions (for example, listening and providing support) or can comprehensively report on one action (such as naming a specific health professional to visit), but otherwise lack the particular knowledge and skills to effectively assist someone with a mental illness. This implies that courses like MHFA can be beneficial in terms of educating the public on simple, beneficial behaviours that will help people with mental illness feel supported, accepted and motivated to seek professional assistance. The most commonly reported action for the intention question was Encourage appropriate professional help, being mentioned in 48.9% of responses. This is a positive outcome, as it suggests that the public is aware that mental illness is best treated with the assistance of health professionals. Interestingly, this trend was not reflected in responses to the social phobia vignette, with only 33.6% of people reporting that they would encourage the person to seek professional assistance. This may be because this disorder was poorly recognised and named as a mental illness by respondents [2], prompting answers primarily focused on providing social support or encouraging social activities. Encourage appropriate professional help was also not the most commonly reported action for the behaviour question (this was Give support and information, with over 66% of people providing this response). This reflects similar findings in Yap and Jorm's youth study [15], suggesting that this is a widespread trend in the Australian population and that behaviours that encourage professional help seeking need to be more actively promoted to the public as a complement to providing support and information.
In contrast, responses classified in the Assess and assist with any crisis category were very infrequently reported across the sample for both the intention and behaviour question. This is especially concerning for the depression with suicidal thoughts vignette, where a life threatening crisis is evident. Compared to mental health professionals, the public are less likely to believe in the helpfulness of asking a person about suicidal thoughts and feelings, and more likely to believe this action is harmful [27], despite evidence suggesting that there are no detrimental effects associated with screening for suicidal intent [28]. Assessing for suicidal intent is an action that requires greater destigmatisation and promotion in the community as a helpful response.
Another notable finding is that people receiving the chronic schizophrenia vignette were significantly more likely to score poorly in several areas, including Approach the person, Listen non-judgementally and Encourage other supports, culminating in significantly lower total scores for both intention and behaviour. This may reflect a greater prevalence of stigmatising attitudes towards people with psychotic disorders compared to people with other mental illnesses within the community, particularly the idea that they are unpredictable and dangerous [29]. This could result in a reduced likelihood of even attempting mental health first aid and suggests that encouraging the community to perceive people with schizophrenia as no different to people with any other mental illness might enhance the quality of mental health first aid responses for this group.
In comparing the responses for the intention and behaviour questions, there were few differences. The most commonly reported actions for both questions were Listen non-judgementally, Give support and information and Encourage appropriate professional help. Mean scores for each ALGEE component were similar when comparing responses to each question (for example, the mean score for Encourage other supports was 0.11 for both the intention and behaviour questions), indicating that, in general, the public seem likely to perform the same actions towards both a hypothetical and a real person. This is further supported by the correlation between the total scores for the intention and behaviour questions, which was low but significant. This is encouraging, as it implies that if members of the public are equipped with the knowledge and skills to adequately assist someone with a mental illness, they are likely to actually perform these actions when necessary. This notion is also supported by well-established psychological theories, such as the Theory of Planned Behaviour [30] which postulates that intentions are the precursor to, and a reasonably accurate predictor of, an individual's actions in a given situation.
Comparison with previous studies
As previously mentioned, two similar studies to this one have been published. Yap and Jorm [15] analysed the first aid intentions and subsequent behaviours of a nationally representative sample of youth using the same scoring system as this investigation. The results of the present study support the general patterns found in the earlier study, in that mean scores and standard deviations were similar for both intention and behaviour (with the means in this study generally higher), the Listen non-judgementally, Give support and information and Encourage appropriate professional help components were more frequently reported than the other ALGEE criteria, the Approach the person and Assess and assist with any crisis criteria were rarely mentioned, and total scores were quite low overall. This suggests that Australian adults' knowledge and behaviour are very similar to Australian youth in relation to helping a person with a mental illness.
Jorm and colleagues' study [14] also used a representative sample of Australian adults, coding their responses using descriptive categories rather than scoring the quality of the responses. Although it is not possible to statistically compare the results of this study and the 2005 investigation due to differences in how the data was collected (phone interview versus face-to-face interview, which yields more comprehensive responses) and analysed (descriptive analysis versus scoring the quality of responses), it is possible to ascertain general data patterns and changes over time. Again, the findings of the two studies are similar, with the Listen non-judgementally and Encourage appropriate professional help responses being most commonly stated and respondents able to correctly nominate an appropriate health professional to assist the person in the vignette. Assess and assist with any crisis responses were seldom reported; however, the percentage frequencies for this category were substantially higher than those for Give support and information in three of the four vignettes used in the 2005 study. This could be an artefact of the face-to-face interview methodology, where participants tended to give longer responses. Overall, intentions to help the person in the vignette seem to have remained relatively unchanged from 2003-2004 to 2011.
From a broader perspective, the findings of this study lend support to the postulates of social network frameworks such as the Network Episode Model (NEM) [4], which suggest that interactions between individuals and social systems, particularly family and friends, reciprocally influence and shape a person's pathway to health care. Studies in this field reinforce the important role of social networks in recognising, defining and legitimising mental illness [31] and assisting entry into the mental health care system [32]. Findings from both the mental health literacy and NEM literatures strongly indicate that improvements can be made in how effectively and efficiently the public initially responds to mental health problems in people they encounter.
Strengths and limitations of this study
This investigation exhibits several strengths. The National Survey of Mental Health Literacy and Stigma had a large, nationally representative sample, resulting in 5,937 responses to the intention question and 2,615 responses to the behaviour question that could be scored using the ALGEE criteria. The variety of vignettes presented to participants means that responses to the intention question could be compared both within and across a standard set of situations, and this is one of the first studies to examine first aid responses towards people with anxiety disorders. Using a standard scoring system was helpful in ensuring reliability of measurement, as reflected in the high inter-rater reliabilities for each set of responses. Lastly, using the ALGEE scoring system enabled a score to be assigned based on the quality of a person's response and provided a clearer understanding of how the general public's knowledge and skills compared with best practice according to expert consensus guidelines.
This study also has limitations, some of which can be addressed in future research. Firstly, the link between intention and behaviour is unclear. This is partly due to the study's methodology, which involved retrospective reporting of behaviour. To answer the behaviour question, any helping actions must necessarily have taken place prior to participating in the survey; thus, behaviour actually preceded intention. The behaviour the participant reported on could have been recently performed or have taken place many years ago. Also, this survey only contains data from a single time point, which does not account for what experiences shaped participants' responses when answering these questions. To better understand this relationship, a prospective study, similar to that conducted by Yap and Jorm [15], should be undertaken with an adult sample. Additionally, given that the correlation between these variables is only 0.203, other factors must help to determine whether and what help people provide to a person experiencing mental illness. Supporting this notion is a previously conducted study of predictors of mental health first aid actions in young people suggesting that several characteristics of both the respondent and recipient of aid affect the likelihood and type of helping response provided [33]. Secondly, answers to the behaviour question were not standardised according to a specific scenario and were potentially subject to respondent recall bias; thus, it is difficult to judge how appropriate these responses were to the actual situation and also how closely the person's symptoms matched those presented in the vignette. The nature and severity of the person's problem in real life was unclear, and hence, the appropriateness of the first aid actions cannot be determined. Additionally, it is unknown whether the respondent's actions benefited or harmed the real-life recipient. Future studies could incorporate questions relating to the wellbeing of the person assisted, or attempt to directly question that person to establish the effectiveness of the behaviours performed. Lastly, while the ALGEE scoring system displayed several useful features, there were some aspects of this data that it was unable to capture effectively. For example, a substantial proportion of people stated that while they were currently unsure of how to help the person in the vignette, they would seek advice on what to do from other sources, such as a GP, the internet or someone they knew who had experienced a similar problem. While this is an appropriate response from a person who is uncertain about how to provide effective help, the scoring system cannot adequately categorise and allocate points to it, as it assumes that respondents will take direct action themselves. Modifications to the ALGEE scoring system that incorporate such responses may be warranted.
Conclusions
The results of this study support conclusions drawn from previous research into how members of the public assist a person experiencing a mental illness. Taken together, this body of research indicates that the public's mental health first aid knowledge and skills can be substantially improved. One way to achieve this is through the promotion of programs, like MHFA, which aim to improve the public's ability to recognise and address emerging mental health issues in people they know. Given that research consistently demonstrates a significant correlation between people's intention and behaviour [34], and that the MHFA course is increasing in both visibility and popularity [35], the potential exists for educating many more people about how to appropriately assist someone with a mental illness and subsequently having these competencies communicated to researchers in future studies and demonstrated in real life.
|
2017-04-08T17:56:12.991Z
|
2014-01-18T00:00:00.000
|
{
"year": 2014,
"sha1": "eddf86b8a48e7a454cb1778065ca897d8b24683f",
"oa_license": "CCBY",
"oa_url": "https://annals-general-psychiatry.biomedcentral.com/track/pdf/10.1186/1744-859X-13-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03744f6c1aeabbb032eec9657612e7b4c735858b",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
114448356
|
pes2o/s2orc
|
v3-fos-license
|
Verification of Method for Predicting Thermal Environment in Tunnels by Model Experimentation
To verify the accuracy of the current method for predicting thermal environments in tunnels, a model experimental device was developed. This experimental device was designed to be able to investigate heat transfer in tunnels and heat conduction in the rock and soil surrounding the tunnel by blowing hot air into a thick pipe made of acrylic plastic. Temperature detectors were placed in various positions along the experimental device. The difference between the temperatures measured in the model experiment and those obtained through calculation was about 1 ℃, showing that numerical calculations obtained through the current method satisfy accuracy requirements.
Introduction
It is more difficult to remove heat from underground railway tunnels in urban areas through natural ventilation, compared to mountain tunnels. To date, various investigations have been conducted and manifold measures have been implemented to counter this problem. At present, in the case of subway tunnels, it is standard practice to install mechanical ventilation equipment as a cooling measure. At the same time, transportation agencies, including railways, are more than ever compelled to reduce energy consumption. It is therefore important to be able to estimate the capacity of mechanical ventilation equipment in tunnels and cooling equipment in subway stations accurately. Obtaining accurate estimates depends on being able to predict thermal environments inside tunnels. The Subway Environment Simulation (SES) computer program [1], developed by the US Department of Transportation, is a well-known estimation tool. In Japan, NewSEAS was developed so that it could be used during the design of Line 12 on the Toei Subway, operated by the Tokyo Metropolitan Government. Because flow and thermal environment simulation programs deal with relatively low train speeds, such as those found on subways, SES uses incompressible flow equations whereas NewSEAS uses two-fluid piston model equations [2] for calculating the speed of airflow induced by trains running in tunnels. RTRI, at the same time, developed a method which could be applied to high-speed rail, such as Shinkansen. In order to take into consideration the propagation of the compression wave generated by a high-speed train entering a tunnel, this method calculates the speed of airflow in the tunnel using a numerical simulation program of pressure transients in railway tunnels [3], [4], which carries out calculations based on the method of characteristics. The obtained wind speed is then used as the input for carrying out heat transfer calculations in railway tunnels using the numerical simulation program for thermal environments in railway tunnels [5].
The first step in examining the calculation accuracy of the numerical simulation program for thermal environments in railway tunnels was to verify that the calculation results from the simulation mostly matched the theoretical values from steady-state theory, with the set boundary condition that air flowing into the tunnel had a temporally constant temperature. However, in an actual subway tunnel, the temperature of outside air brought into the tunnel for ventilation would reflect changes caused by overlapping of diurnal and annual variations. The outside air causes the temperature inside the tunnel to vary over time by heat transfer. The second step in verification required examination of the temporal change in temperature in the tunnel. For that purpose, an experimental device was developed to simulate the basic aspects of heat transfer inside a subway tunnel, and an attempt was made to verify the calculation accuracy of the simulation program in the scenario where the temperature in the tunnel changes over time.
Simulation verification method
When research is conducted using numerical simulations, it becomes important to assess the accuracy of the simulation calculations. This evaluation requires results from the simulation to be compared with field test results (full-scale test), where details of the boundary conditions are made clear. However, because actual railway tunnels, which form the target of these simulations, range from several kilometers to several tens of kilometers in length and are buried deep in the ground, accurate assessment of boundary conditions for the field tests, such as soil temperature, is difficult. It is also difficult to investigate in detail the spatial distribution of the thermodynamic properties of the ground, such as soil and rock surrounding the tunnel, which is composed of a complex overlap of layers with different characteristics. For these reasons, before comparing results from simulation with prior field test or new field test results, it was decided that a model should be used, made of materials with clear thermodynamic properties and with simple shape and boundary conditions, to carry out precise experiments to verify the calculation results of the simulation.
On the assumption that "the temperature of the ground surrounding the tunnel is constant temporally and spatially at a distance of 10 m or more away from the tunnel wall," an experimental device consisting mainly of a cylindrical tunnel model was developed that simulates the ground within 10 meters of the tunnel wall. The heat transfer inside the actual tunnel was too complex to be simulated in the model. Therefore, the focus was placed on the basic heat transfer phenomena, such as heat transfer due to airflow, heat transfer between the air and the ground, and heat conduction within the ground, and an experimental device was developed to simulate these. The experimental device designed to simulate the ground surrounding the tunnel was made with homogeneous material with known thermodynamic property values. In addition, the device was not designed to simulate heat transfer from water inside the tunnel and in the surrounding ground. Through experiments using this tunnel model, the temporal change in tunnel temperature was measured (dynamic response) when the temperature of the air supplied to the tunnel was changed over time (Fig. 1). By comparing this to the temporal change in temperature calculated through simulation, it was possible to verify the basic calculation techniques for heat transfer used in the simulation of thermal environments in tunnels. This experiment was conducted using a temperature sensor with high measurement accuracy, to be able to gauge the accuracy of calculations through simulation to within approximately a 1 ℃ (±0.5 ℃) margin of error. The model experiment to investigate thermal environments in tunnels works by blowing air, simulating air from the outside, into a thick acrylic cylinder simulating the tunnel (referred to hereafter as the 'tunnel model'), using a hot air generator (Figs. 2 and 3). The tunnel model is an acrylic cylinder with an outer diameter of 150 mm, an inner diameter of 27 mm, and a length of 5000 mm. It is assumed that the tunnel wall and the ground surrounding the sections between two stations (assuming a length of 800 m) with small cross-sections (assuming an internal diameter of 4.3 m) in single-track subway tunnels are homogeneous. A reproduction was made of an area at approximately 10 m in depth from the inside surface of the wall on a 1/160 scale. Air was heated and regulated to a set temperature in the generator, and blown by a fan at a constant wind speed. The mouth of the hot air generator was connected to the opening of the tunnel model with a hose to blow the heated air inside the tunnel. Simultaneously, the air surrounding the tunnel model was circulated using another fan. The surface temperature of the outside of the tunnel model varied over time, influenced by the outside air temperature. However, circulation of the surrounding air made it possible to contain fluctuation of the outside surface temperature in the longitudinal direction of the tunnel model to within 2 ℃ or less. By measuring this temporally varying boundary condition and using it as the input to the simulation for comparative verification, it was possible to match experimental and simulation conditions.
Measurement method
A high-accuracy and very stable class A 4-wire resistance temperature detector (Pt 100) was used to measure the air temperature in the tunnel, the surface temperature outside the tunnel model, and the temperature of the acrylic inside the tunnel model. For example, for a measured temperature of 30 ℃, the sensor had a tolerance level of ±0.2 ℃, which meets measurement accuracy requirements for this experiment (±0.5 ℃ margin of error). The air temperature in the tunnel was measured at seven points in the longitudinal direction of the tunnel using pipe-style resistance temperature detectors (RTDs) with a diameter of 1.6 mm. The acrylic temperature inside the tunnel model was measured using embedded, film-type RTDs at five points on two cross sections (X=750 and 2250 mm) in the longitudinal direction of the tunnel: three points (r=17.5, 27, and 46 mm) on one cross section and two points (r=17.5 and 27 mm) on the other cross section, respectively, in the radial direction of the tunnel. The surface temperature outside the tunnel model was measured at eight points in the longitudinal direction of the tunnel using film-type RTDs. Figure 4 shows the measurement positions of these temperature sensors. An easy-to-use hot wire anemometer was installed at the center of the tunnel on the cross section at a distance X=4250 mm from the upstream edge of the tunnel model, to measure the speed of airflow in the tunnel (Fig. 5). In addition, the wind speed surrounding the tunnel model, circulated using a blower, was measured using a three-dimensional ultrasonic anemometer, because wind direction is influenced by the tunnel model and changes locally and three-dimensionally (Fig. 5).
Experimental procedure
First, a blower was used to circulate the air surrounding the tunnel model, and the temperature of the hot air generator was set to 30 ℃ . Air was blown continually for 5 or more hours. When it was deemed that the tunnel model had reached a thermal equilibrium state, the set temperature was raised to 45 ℃, and air was blown for another hour or more. The reason for blowing air continually for 5 hours from the start of the experiment was based on results from preliminary simulations and experiments which showed that the temperature in the tunnel model reached a steady state if hot air was blown continuously for approximately 5 hours. During this time, the air temperature in the tunnel model, the surface temperature outside the tunnel model, the acrylic temperature inside the tunnel model, and the speed of airflow in the tunnel model were measured at a sampling rate of 5 Hz. The wind speed surrounding the tunnel model was measured separately at a sampling rate of 10 Hz. Figure 6 shows wind speed measurements from inside the tunnel model, taken at the center of the tunnel. It can be said that the speed of airflow in the tunnel model was controlled and almost constant. The speed of airflow in the tunnel model time-averaged for the duration of the experiment was 7.3 m/s. Figure 7 shows wind speed measurements surrounding the tunnel model (average horizontal wind speed per minute). The wind speed surrounding the tunnel model time-averaged for the duration of the experiment was 0.9 m/s.
Temperature measurements
Tunnel model surface temperature measurements are shown in Fig. 8. The graph in this figure shows that the surface temperature of the tunnel model tended to change over time under the influence of outdoor air temperature variation. In this experiment, air surrounding the model was circulated using a blower to reduce variations in the tunnel model surface temperature along the measured area. Thus, 3 to 7 hours after the start of the experiment, the surface temperature of the tunnel model was between 12 ℃ and 14 ℃ , limiting temporal and spatial variation to approximately 2 ℃ .
A representative sample of tunnel model air temperature measurements are shown in Fig. 9. As the set temperature of the hot air from the generator was increased from 30 ℃ to 45 ℃ , 5.56 hours after the start of the measurement, the air temperature at each measured point also rose. The closer the measurement point was to the upstream end of the tunnel model, the more rapidly the air temperature changed. This revealed a tendency indicating that the more distant the measured point was from the upstream end of the tunnel model, the longer it took to reach a thermal equilibrium state. Figure 10 shows the measured acrylic temperature inside the tunnel model at the cross section where X=750 mm. As the set temperature of the hot air from the generator was increased from 30 ℃ to 45 ℃, 5.56 hours after the start of the measurement, the acrylic temperature at each measured point increased. This revealed a tendency indicating that the further the measurement point was from the center, the longer it took for the acrylic temperature to reach a thermal equilibrium state. In addition, compared to the air temperature in the tunnel model, there was a more gradual temporal change in the acrylic temperature when the set temperature was changed.
Overview of the simulation
In the simulation of the thermal environment in the tunnel, it was assumed that the airflow in the tunnel was one-dimensional (Fig. 11) and the ground surrounding the tunnel was a two-dimensional axisymmetric shape (Fig. 12), based on the method for predicting the thermal environment [6] in undersea tunnels mechanically ventilated at a constant wind speed. The temporal change in temperature in the tunnel was calculated by simultaneously solving the fundamental equations for the air temperature in the tunnel (1) and for the ground surrounding the tunnel (2).
Here, the symbols represent the following: A: tunnel cross section, c_a: constant-pressure specific heat of air, λ_a: thermal conductivity of air, h_a: heat transfer coefficient between the air and wall surface, S: perimeter of the tunnel cross section, t: time, U: speed of airflow in the tunnel, x: distance in the longitudinal direction of the tunnel, θ_a: air temperature, θ_c0: temperature of the wall surface, ρ_a: air density, c_c: specific heat of the ground, r: radial distance from the tunnel center, θ_c: ground temperature, λ_c: thermal conductivity of the ground, and ρ_c: ground density. This simulation can take into account the heat transfer from water inside the tunnel and in the ground surrounding the tunnel. In this paper, the functionality of the simulation is restricted to make it match the conditions of the experiment, and the influence of water is ignored (equations (1) and (2) represent scenarios where water influence is ignored). Since it is assumed that the ground temperature reaches a 'deep sink' temperature (depth at which ground temperature does not change) at a 10 m (62.5 mm at 1/160 scale) distance from the inner wall of the tunnel, the boundary condition of the outer surface of the cylindrical region, which simulates the ground surrounding the tunnel, is set up so that the temperature is temporally and spatially constant. However, the surface temperature of the device developed for this model experiment changed over time, while its spatial variation was within 2 ℃. To meet the experimental conditions, the simulation for comparison with the model experiment used, as the input value, the time series data which were measured in the experiment as the boundary condition for the ground surrounding the tunnel. The tunnel diameter (inner diameter of the cylindrical region) was set to 27 mm; the outer diameter of the cylindrical region, to 150 mm; and the tunnel length, to 5000 mm in accordance with the model experiment device. With regard to the boundary condition of the inner surface of the tunnel cylinder, the transfer of heat to and from the air flowing inside the tunnel was calculated with the heat transfer coefficient h_a given by the Petukhov-Popov equation [7] (equations (3)-(7)).
Here, the symbols represent the following: d: the inner diameter of the tunnel model, Re_d: the Reynolds number based on the inner diameter of the tunnel model and the speed of airflow in the tunnel, and Pr: the Prandtl number of air.
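The display equations themselves are presumably of the standard form used for this class of model: a one-dimensional advection-diffusion equation for the tunnel air with heat exchange at the wall, coupled to an axisymmetric conduction equation for the surrounding ground. With the symbols defined above, a reconstruction consistent with those definitions (rather than a verbatim transcription of equations (1)-(7)) can be written as

    \rho_a c_a A \left( \frac{\partial \theta_a}{\partial t} + U \frac{\partial \theta_a}{\partial x} \right) = \lambda_a A \frac{\partial^2 \theta_a}{\partial x^2} + h_a S \left( \theta_{c0} - \theta_a \right) \qquad (1)

    \rho_c c_c \frac{\partial \theta_c}{\partial t} = \lambda_c \left[ \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial \theta_c}{\partial r} \right) + \frac{\partial^2 \theta_c}{\partial x^2} \right] \qquad (2)

The Petukhov-Popov correlation cited for the heat transfer coefficient is usually written in the heat transfer literature as

    Nu_d = \frac{(\xi/8)\, Re_d\, Pr}{K_1 + K_2\,(\xi/8)^{1/2}\left(Pr^{2/3} - 1\right)}, \qquad h_a = \frac{Nu_d\, \lambda_a}{d},

with \xi = (1.82 \log_{10} Re_d - 1.64)^{-2}, K_1 = 1 + 3.4\,\xi and K_2 = 11.7 + 1.8\, Pr^{-1/3}; whether the paper used exactly this variant is assumed here rather than established.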
Speed of airflow in the tunnel
In the simulation, the airflow in the tunnel is regarded as being one-dimensional, but the actual wind speed in the tunnel model changes in a radial direction. For this reason, the averaged wind speed over the cross-section was determined from the temporally averaged value of the speed of the airflow through the tunnel center, measured in the experiment (Section 4.1), and from the wind speed distribution inside the tunnel model measured in the preliminary experiment; the obtained result was input into the simulation as the speed of airflow in the tunnel.
According to the literature [8], in the case of turbulent flow, the wind speed U(r) at a distance r from the center of a cylindrical tube with inner radius R and a smooth inner surface can be approximated by the following power law, equation (8).
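In its standard literature form, this power law reads

    U(r) = U_{max}\left(1 - \frac{r}{R}\right)^{1/n} \qquad (8)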
Here, U max represents the wind speed at the center of the tunnel.
In turbulent pipe flow, n is generally 7, but in reality it is dependent on the Reynolds number, Re (= Ūd/ν), where the pipe diameter d (= 2R) is taken as the reference length and the wind speed averaged over the pipe cross section, Ū, as the reference velocity. Through experimentation, J. Nikuradse obtained values of the exponent n for a very wide range of Reynolds numbers [8]. From the interpolation of those experimental values, with U_max = 4.6 m/s and 10.1 m/s in the preliminary experiment by which the wind speed distribution was measured, the values of n were found to be 6.26 and 6.50, respectively. As shown in Fig. 13, the results of the preliminary experiment are generally consistent with the empirical formula (8).
Assuming that the wind speed inside the tunnel model follows the power law, the relationship between the wind speed averaged over the cross section, Ū, and the wind speed at the center of the tunnel model cross section, U_max, can be obtained by integrating the function (8) over the cylindrical pipe cross section, as shown by the following equation [9].
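Integrating the 1/n-power profile over the pipe cross section gives the usual relation

    \frac{\bar{U}}{U_{max}} = \frac{2n^2}{(n+1)(2n+1)}

For the interpolated exponents n ≈ 6.3-6.5 obtained above, this ratio is roughly 0.80; applied to the measured center value of 7.3 m/s, it corresponds to a cross-section-averaged speed of roughly 5.8 m/s (an illustrative figure given here only to indicate the magnitude).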
Surface temperature of the tunnel model
Surface temperatures of the tunnel model measured at 8 points in the experiment (Fig. 8) were used as the surface temperatures for the simulated tunnel model (boundary condition). Boundary conditions for points other than the measured points were obtained by interpolation of the data.
Physical property values used in the simulation
The physical properties used in the simulation, for both the air inside the tunnel model and the acrylic from which the tunnel model was made, are shown in Tables 1 and 2.
Grid spacing of the simulation
Numerical calculations were made through discretization of equations (1) and (2). Grid points in the X direction of the airflow in the tunnel model and the acrylic portion inside the tunnel model were spaced at equal intervals of 19 mm and the tunnel model was divided into 270 parts in the X direction. The grid point intervals in the r direction of the acrylic temperature inside the tunnel model were generally set to be shorter for the grid points closer to the center and the tunnel model was divided into 16 parts at unequal intervals of 0.2 mm to 8.7 mm in the r direction. The time increments in the calculation were set to 0.2 s.
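The grid described above lends itself to a simple marching calculation, sketched below in Python. The sketch assumes the reconstructed standard forms of equations (1) and (2), a quasi-steady treatment of the air equation (the air adjusts within about a second, far faster than the acrylic responds), a uniform rather than graded radial grid, a constant outer-surface temperature, and illustrative property values and heat transfer coefficient; it is a conceptual illustration, not the program actually verified in this paper.

    import numpy as np

    # Geometry and grid, following the model experiment (uniform radial grid used here
    # for simplicity; the paper used graded spacing of 0.2-8.7 mm)
    L, nx = 5.0, 270                   # tunnel model length [m], axial divisions (19 mm)
    dx = L / nx
    r_in, r_out, nr = 0.0135, 0.075, 16
    r = np.linspace(r_in, r_out, nr + 1)
    dr = r[1] - r[0]
    dt = 0.2                           # time step [s]

    # Illustrative (assumed) property values - not the paper's Tables 1 and 2
    rho_a, c_a = 1.16, 1007.0          # air density, specific heat
    rho_c, c_c, lam_c = 1190.0, 1470.0, 0.21   # acrylic
    h_a = 38.0                         # assumed convective coefficient [W/m^2 K]
    U = 5.8                            # cross-section-averaged speed (roughly 0.8 x 7.3 m/s)
    A, S = np.pi * r_in**2, 2 * np.pi * r_in

    theta_inlet = 45.0                 # hot air after the set-point change [deg C]
    theta_surface = 13.0               # outer-surface temperature, held constant here
    # rough initial radial profile between inner (30 deg C) and outer surface
    theta_c = np.tile(30.0 + (theta_surface - 30.0) * (r - r_in) / (r_out - r_in), (nx, 1))
    theta_a = np.empty(nx)

    k = h_a * S / (rho_a * c_a * A * U)    # relaxation constant of the air temperature [1/m]
    alpha = lam_c / (rho_c * c_c)          # thermal diffusivity of acrylic

    for _ in range(int(3600 / dt)):        # one simulated hour
        # Equation (1), quasi-steady: march the air temperature downstream, relaxing
        # exponentially toward the local wall temperature over each cell
        wall = theta_c[:, 0]
        upstream = theta_inlet
        for i in range(nx):
            theta_a[i] = wall[i] + (upstream - wall[i]) * np.exp(-k * dx)
            upstream = theta_a[i]

        # Equation (2): explicit update of radial conduction in the acrylic
        lap = np.zeros_like(theta_c)
        lap[:, 1:-1] = ((theta_c[:, 2:] - 2 * theta_c[:, 1:-1] + theta_c[:, :-2]) / dr**2
                        + (theta_c[:, 2:] - theta_c[:, :-2]) / (2 * dr * r[1:-1]))
        theta_c += dt * alpha * lap

        # Boundary conditions: convective exchange with the air at the inner wall,
        # prescribed temperature at the outer surface
        theta_c[:, 0] = (h_a * theta_a + lam_c * theta_c[:, 1] / dr) / (h_a + lam_c / dr)
        theta_c[:, -1] = theta_surface

    # theta_a and theta_c now approximate the temperature field one hour after the step change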
Verification of the simulation of thermal environments in tunnels
To verify the simulation of thermal environments in tunnels, a comparison was made of calculation results from the simulation with the results from the model experiment. Figures 14 to 16 show the comparison between simulation results and experimental results with respect to air temperatures inside the tunnel model taken at cross sections at various distances X from the upstream edge of the tunnel model. Figures 17 and 18 show a similar comparison with respect to the acrylic temperature inside the tunnel model at distances r=17.5 mm and 27 mm from the tunnel center, at the cross section at a distance of 750 mm from the upstream edge of the tunnel model. The data used for comparison were obtained from about 30 minutes before the set temperature of the hot air generator was raised from 30 ℃ to 45 ℃ to about 1 hour following the increase. The difference in air temperature in the tunnel model and acrylic temperature inside the tunnel model, obtained in the model experiments and through simulation, was within 1 ℃. This confirmed the calculation accuracy of the simulation. Additionally, this 1 ℃ margin of error is thought to include the effect of ignoring the temperature dependence of the thermodynamic properties of air.
Conclusion
A model experimental device was developed to verify the essential parts of the calculations for the simulation of thermal environments in tunnels, in other words, heat transfer through the airflow and the ground surrounding the tunnel, and the heat exchange between them. The following became clear from the comparison of the experimental results with the calculation results of a simulation replicating the model experiment.
The difference between the calculation results from the simulation of thermal environment in the tunnel and the results from the model experiment is approximately 1 ℃. From this, we see that the margin of error of the numerical analysis by the simulation is about 1 ℃.
In the future, we must examine the factors that could not be verified by the model experiment, such as the influence of groundwater and the margin of error owing to the prediction accuracy of the wind speed within the main tunnel and the tunnel's ventilation shaft.
|
2019-04-15T13:12:11.362Z
|
2017-05-01T00:00:00.000
|
{
"year": 2017,
"sha1": "a61c7b060fa93127a7e7a2bede12270f5e5ff796",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/rtriqr/58/2/58_126/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "85c8b82575f8fdad41e3388d0f34e9190067d87e",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
14185784
|
pes2o/s2orc
|
v3-fos-license
|
A Systematic Review of Failed Anterior Cruciate Ligament Reconstruction With Autograft Compared With Allograft in Young Patients
Context: The advantages of allograft anterior cruciate ligament reconstruction (ACLR), which include shorter surgical time, less postoperative pain, and no donor site morbidity, may be offset by a higher risk of failure. Previous systematic reviews have inconsistently shown a difference in failure prevalence by graft type; however, such reviews have never been stratified for younger or more active patients. Objective: To determine whether there is a different ACLR failure prevalence of autograft compared with allograft in young, active patients. Data Sources: EMBASE, MEDLINE, Cochrane trials registry. Study Selection: Comparative studies of allograft versus autograft primary ACL reconstruction in patients <25 years of age or of high-activity level (military, Marx activity score >12 points, collegiate or semiprofessional athletes). Study Design: Systematic review with meta-analysis. Level of Evidence: Level 3. Data Extraction: Manual extraction of available data from eligible studies. Quantitative synthesis of failure prevalence and Lysholm score (outcomes in ≥3 studies) and I2 test for heterogeneity. Assessment of study quality using CLEAR NPT and Newcastle-Ottawa Scale (NOS). Results: Seven studies met inclusion criteria (1 level 1; 2 level 2, 4 level 3), including 788 patients treated with autograft tissue and 228 with various allografts. The mean age across studies was 21.7 years (64% male), and follow-up ranged between 24 and 51 months. The pooled failure prevalence was 9.6% (76/788) for autografts and 25.0% (57/228) for allografts (relative risk, 0.36; 95% CI, 0.24-0.53; P < 0.00001; I2 = 16%). The number needed to benefit to prevent 1 failure by using autograft was 7 patients (95% CI, 5-10). No difference between hamstrings autograft and patella tendon autograft was noted. Lysholm score was reported in 3 studies and did not differ between autograft and allograft. Conclusion: While systematic reviews comparing allograft and autograft ACLR have been equivocal, this is the first review to examine young and active patients in whom allograft performs poorly.
While there is consensus that anterior cruciate ligament reconstruction (ACLR) is the best treatment to provide near normal laxity after anterior cruciate ligament (ACL) rupture in an active person, the superior graft choice, fixation method, and surgical technique continue to be debated. Autograft tissue continues to be the most common choice overall, with regional variations favoring bone-patella tendon-bone (BPTB) over hamstrings in some parts of North America 17 and vice versa internationally. 11 In contrast, the use of allograft tissue is less common. Allograft is preferred by 11% of surgeons in an international survey 11 and 22% in the United States. 17 There are advantages and disadvantages to each graft choice. Disadvantages of using autograft tissue include donor site morbidity, such as weakness and loss of knee flexion with hamstring autograft, 53 weakness of the quadriceps mechanism with BPTB, 53 variable graft sizes with hamstring tendon, 47 and patella fracture or anterior knee pain with BPTB. 53 Potential advantages of decreased operative time, consistent graft sizes, and lack of donor site morbidity make allograft tissue an attractive option for surgeons. However, increased cost, delayed incorporation of allograft tissue as compared with autograft, 68 and possible disease transmission 54 are potential disadvantages.
One previous meta-analysis of level 2 and 3 studies comparing primary BPTB autograft and BPTB allograft ACLR found a 5.03 times higher odds of graft rupture for patients undergoing allograft ACLR. However, if irradiated or chemically processed allografts were excluded, they found no statistically significant difference. 34 Another meta-analysis reported a 5% failure prevalence for autografts compared with a 14% failure prevalence for allografts (P < 0.01). 63 Two systematic reviews comparing autograft with allograft ACLR did not find a statistically significant difference in failure prevalence between autograft and allograft ACLR. 9,21 While some studies have reviewed failure prevalence of autograft ACLR and allograft ACLR in patients with a higher level of activity, until recently, there has not been a comparison of allograft and autograft ACLR in young patients. 3,5 In a large prospective, multisite cohort study, Kaeding et al 30 demonstrated a higher revision prevalence for allograft that was most clinically significant in younger patients. From these data, for example, a 14-year-old was estimated to have a 22% risk of revision with allograft compared with a 6.6% chance for autograft.
The purpose of the current systematic review is to determine whether there is a difference in failure prevalence between allograft and autograft ACLR in young and highly active patients.
Literature Search
A literature search of the EMBASE, MEDLINE, and Cochrane trials registry databases (from 1980 to the fourth week of October 2014) was conducted using keywords in combination "auto$", "allo$", and "anterior cruciate ligament" for EMBASE and "autog*", "allog*", and "anterior cruciate ligament" for MEDLINE and Cochrane. The only limit for the search was humans for all databases.
All titles and abstracts were reviewed, and if the study design was comparative and included any clinically relevant outcome (see criteria below), the full article was retrieved for the selection process. Systematic reviews from our search were retrieved, and their references were reviewed for any additional studies that could be included. An automatic alert option for MEDLINE was used that alerted the author by email if any articles were newly available through the database, which satisfied the search keywords in combination. This option was not available in EMBASE.
Eligibility Criteria
For inclusion, a study had to be a therapeutic study design comparing allograft with autograft isolated ACL reconstruction, and either prospective or retrospective (level of evidence [LOE] 1, 2, and 3). The primary outcome of the study had to be failure of ACLR with an acceptable definition such as revision, magnetic resonance imaging (MRI) confirmation of rupture, and Lachman 2+ or instrumented laxity measurement >5 mm side-to-side. Each study had to meet all inclusion criteria including: (1) appropriate study population (competitive athletes [active military, mean Marx score >12, varsity (college), semiprofessional, or professional] or patients <25 years old or stratified age groups for outcomes, if older patients included); (2) correct procedure (unilateral primary ACLR); (3) correct intervention being studied (autograft compared with allograft); (4) any relevant outcomes included (patient-reported outcomes, physical examination, reoperation, or failure); (5) minimum follow-up duration (2 years); and (6) minimum study size (15 patients in each treatment arm). Any study that failed to meet all of the above inclusion criteria was excluded. All case series (LOE 4) were excluded. An average follow-up of 2 years was not sufficient for inclusion. A study was also excluded if data from the same patients were included in another study with longer follow-up, in favor of the latter study. Abstracts presented at conferences but not published in peer-reviewed literature were also excluded. Concurrent meniscal or articular cartilage surgery was not an inclusion/exclusion criterion.
Study Selection
Two reviewers screened the titles and abstracts generated by the literature search for eligibility. If there was any uncertainty or ambiguity regarding eligibility, the study was included for full-text review. The reviewers independently assessed each full report to determine whether inclusion criteria were met. Disagreements were resolved by discussion with the senior author, when necessary. Journal, author name, and institution were not masked at any stage.
Data Extraction
Two reviewers extracted relevant data from each included study and recorded them into worksheet tables. Data collected in the worksheets included first author, journal and year in which the study was published, level of evidence, number of patients, follow-up duration, source of the autograft and allograft, allograft sterilization method if known, percentage of failures for each group, and study definition of graft failure. A comments section was included for any other relevant data particular to each study. All abstracted outcome data were entered into a meta-analysis software package (RevMan version 5.1; The Cochrane Collaboration) for pooled analysis.
Assessment of Risk of Bias in Eligible Studies
The checklist to evaluate a report of a nonpharmacological trial (CLEAR NPT 8 ) was used to evaluate the quality of included randomized controlled trials (RCTs). The CLEAR NPT is a validated quality assessment tool used to examine the adequacy of 10 key elements of an RCT. The Newcastle-Ottawa Scale 86 (NOS) was used to evaluate the quality of eligible prospective and retrospective cohort studies. The NOS assesses each study on 3 domains: selection, comparability, and outcome. Two reviewers independently assessed the methodological quality of eligible studies. Any disagreements were resolved with consensus discussion.
Statistical Analysis
Descriptive statistics were calculated with categorical data presented as frequency with percentages and continuous data as mean ± SD. Weighted means with their corresponding SDs were calculated for all parameters. Pooled risk ratios (RRs) were calculated for dichotomous outcomes, while mean differences were calculated for continuous outcomes. Ninety-five percent CIs were reported for all point estimates. The Cochrane χ² test for homogeneity (ie, Q test, P < 0.10) was used to test for heterogeneity, while the I² test was used to quantify heterogeneity. 12 To assess for potential publication bias, we constructed a funnel plot for each outcome analyzed (see Appendix Figure 1, available at http://sph.sagepub.com/content/by/supplemental-data).
We pooled data from eligible studies using a random effects model because of the anticipated heterogeneity across studies with respect to surgical technique, allograft/autograft type, and allograft sterilization method. We planned an a priori subgroup analysis of graft failure prevalence based on sterilization method (irradiation vs no irradiation), autograft type (BPTB vs quadrupled hamstring [QHS]), and level of evidence (1 and 2 vs 3). In circumstances where only a median and interquartile range were provided by the study, established statistical methods were used to obtain imputed means and SDs. 26 In one case, the author was contacted directly via email correspondence to provide SDs where imputation was not possible. 2
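The random-effects pooling described above can be illustrated with a short DerSimonian-Laird calculation. The sketch below uses made-up 2x2 counts rather than the extracted study data, and RevMan's exact continuity-correction and weighting conventions may differ in detail, so it is an illustration of the general method rather than a reproduction of the analysis.

    import numpy as np
    from scipy import stats

    # Hypothetical per-study counts: (failures, total) for autograft and allograft arms
    studies = [
        (6, 120, 10, 40),
        (12, 200, 15, 60),
        (9, 150, 11, 45),
    ]

    y, v = [], []                                  # log risk ratios and their variances
    for a, n1, c, n2 in studies:
        rr = (a / n1) / (c / n2)
        y.append(np.log(rr))
        v.append(1/a - 1/n1 + 1/c - 1/n2)          # variance of the log risk ratio
    y, v = np.array(y), np.array(v)

    # Fixed-effect weights, Cochran's Q and the DerSimonian-Laird tau^2
    w = 1 / v
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    I2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

    # Random-effects pooled estimate and 95% CI on the risk-ratio scale
    w_re = 1 / (v + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    ci = np.exp([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
    p_het = stats.chi2.sf(Q, k - 1)                # chi-square test for homogeneity

    print(f"Pooled RR = {np.exp(y_re):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), "
          f"I^2 = {I2:.0f}%, Q p = {p_het:.3f}")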
General Study Characteristics
A total of 1016 participants were enrolled in the 7 eligible studies, including 463 treated with QHS autograft, 325 treated with BPTB autograft, and 228 treated with various allografts. All 7 studies that met inclusion criteria were conducted in the United States and enrolled patients undergoing ACLR between 1998 and 2012. Four studies involved a single surgeon. 2,3,7,19 The mean age of participants across all studies was 21.7 years. Four of the studies only included patients younger than 25 years. 2,19,20,59 Two studies included patients older than 25 years; however, the results were stratified by age. 3,30 The last study had an average age of 28.6 years, but still met the criteria for inclusion as the study participants were military cadets. 7 Patient sex was reported in 5 studies 2,7,19,20,59 of the 7, among which 281 of 442 patients (64%) were men. The mean follow-up was reported for 4 studies 2,7,19,20 and ranged from 24 to 51 months. The other 3 studies did not report a mean follow-up. All studies reported the graft failure prevalence after ACLR. The specific definitions used to identify graft failure in each included study are listed in Table 1, along with other baseline patient characteristics.
Graft Choice and Treatment
In 5 studies, graft choice was decided by the patient after a discussion with the surgeon regarding the risks and benefits of both options. 2,3,19,20,30 However, 1 study mentioned that the authors did not recommend allografts to their patients prior to 2002, although they enrolled from 1998 to 2009. 19 The study of military cadets 59 did not comment on graft choice decision as the ACLR was performed prior to matriculation and therefore prior to enrollment in their study. Patients in the RCT 7 were randomly assigned to treatment groups.
Two of the 7 included studies used fresh frozen allografts that did not undergo chemical processing or irradiation. 2,7 Chemical processing using BioCleanse (RTI Biologics) or irradiation with <2 mrad 20 or 1.0 to 1.3 rad 19 was used in 2 studies. Another study 30 predominantly used fresh frozen allografts; however, some patients were also treated with irradiated grafts (<2.5 mrad). The 2 remaining studies 3,59 did not specify how allografts were treated.
Surgical Technique
Drilling of the femoral tunnel was carried out using the transtibial technique in 3 studies, 2,3,7 the 2-incision rear-entry technique in 1 study, 19 the anteromedial portal technique in 1 study, 20 and a combination of techniques in 1 study. 30 Participants had undergone ACLR prior to enrollment in 1 study and technique was not specified. 59
Study Quality
The only RCT included in the current review reported adequate allocation concealment; however, there was some uncertainty regarding the generation of the allocation sequence and whether the intention-to-treat principle was used for statistical analysis. In general, the cohort studies (prospective and retrospective) had well-matched cohort and control groups (within studies). They were comparable with respect to important demographic variables (ie, age) and surgical technique (within studies). The complete results of the methodological quality assessment using the CLEAR NPT and NOS are presented in Tables 2 and 3, respectively.
Secondary Outcomes
Lysholm Score
A quantitative synthesis of the included studies did not demonstrate a statistically significant difference in the postoperative Lysholm scores among patients undergoing ACLR with an allograft compared with an autograft (3 studies; mean difference, 1.87 points; 95% CI, −0.44 to 4.18; P < 0.11). This is illustrated in Figure 4.
Other Patient-Reported Outcomes
Although a formal quantitative synthesis could not be performed on the Tegner activity scale, 7,19 International Knee Documentation Committee 2,20 (IKDC), Single Assessment Numeric Evaluation 7 (SANE), and Cincinnati score 2 because of the small number of reporting studies, none individually reported a statistically significant outcome.
Discussion
This systematic review identified a clear difference in failure prevalence favoring primary ACLR performed with autograft tissue over allograft tissue in young (≤25 years of age) or highly active patients. The relationship was consistent whether all studies were included (level 3) or only those of highest quality, and demonstrated little publication bias. From these summary data, among patients younger than 25 years, for every 7 patients treated with autograft instead of allograft tissue, 1 failure would be prevented. A lack of data among included studies on other outcomes, including patient-reported outcome measures, precluded meta-analysis of any outcome other than failure in our review. Three studies reported postoperative Lysholm scores, but quantitative synthesis of these data did not reveal any differences of statistical or clinical significance.
The earliest included study 30 was published in 2011 and served as hypothesis-generating for this review. Those results have been confirmed with the inclusion of 6 subsequent studies, many of which were published in the most recent calendar year. Previously published meta-analyses comparing allograft and autograft ACLR, however, offer mixed conclusions. 9,21,27,33,34,63,82,88 Although a higher rerupture prevalence for allograft was reported in some, 34,63,88 no previous analysis used age stratification or age criteria for inclusion. Some authors of prior systematic reviews on this topic have noted this limitation in the literature. 9,15,48 The small absolute difference 30 in revision prevalence between allograft and autograft in older patients may explain why previous systematic reviews that have included studies of patients over a large age range produced mixed results.
There are potential confounding factors in the relationship between age and graft choice. Registry data 28 have suggested that surgeons who use allograft tissue are more likely to be low volume and not fellowship trained. Although there is limited current evidence for a relationship between surgery volume and outcome in ACLR, 45 precedence exists in other areas of orthopaedic surgery. 64,85 The volume of surgeries performed at the centers in each of the included studies from this review is unknown.
One of the largest controversies in allograft ACLR relates to the treatment of the tissue. Some have suggested that the studies that have shown greater failure with allograft tissue either did not include sterilization method or used irradiated or chemical sterilization methods that could lead to higher failure. In our review, we noted a difference between irradiated allograft and autograft tissue that achieved statistical significance, and also a difference between nonirradiated grafts and autograft but that did not achieve statistical significance. Caution should be used in this interpretation, however, as this synthesis included a very small number of studies. Furthermore, the 2 studies included in this review that used irradiated grafts were small and both were level 3. Other literature has supported better clinical outcomes with irradiated grafts, 60 and 1 recent systematic review of soft tissue grafts that was not stratified by age showed no difference between nonirradiated allografts and autograft ACLR. 36 Considering the difference we have demonstrated between allograft and autograft in the young or highly active population, we believe the burden of proof remains on the fresh-frozen allograft user to demonstrate safety in a high-level clinical study.
Delayed revascularization and recellularization of allograft tissue in vivo may be one explanation for our study's findings. Animal models have demonstrated delayed revascularization 67 and poorer performance of allograft ACLR. 29 Delayed revascularization has also been demonstrated in humans using contrast-enhanced MRI in allograft ACLR compared with autograft ACLR at 6-month follow-up. 55,91
Disadvantages of this study must be considered, many of which relate to the available data on this topic. Our inclusion criteria were for young patients and those with a high activity level, but not specifically for other factors that may increase the risk of failure, such as poor rehabilitation or muscular control. Two of the included studies 7,19 used only revision as the definition of failure, and this may have biased the results. We acknowledge there is no consensus definition of failure.
Conclusion
The differences in failure prevalence that we observed between allograft and autograft reconstruction among young and highly active patients should provide caution to those involved in the orthopaedic care of these patients. There is a paucity of data in this patient population to determine whether this difference between autograft and allograft persists based on allograft sterilization methods.
|
2018-04-03T05:55:17.991Z
|
2015-03-24T00:00:00.000
|
{
"year": 2015,
"sha1": "025a5ef3308927e857fddb901cd87d7c461177df",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc4482307?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "af498241a2d2fc432cab743d658f372b6681b1dd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
54616910
|
pes2o/s2orc
|
v3-fos-license
|
Nicotinamide Promotes Cell Survival and Differentiation as Kinase Inhibitor in Human Pluripotent Stem Cells
Summary Nicotinamide, the amide form of vitamin B3, is widely used in disease treatments and stem cell applications. However, nicotinamide's impact often cannot be attributed to its nutritional functions. In a vitamin screen, we find that nicotinamide promotes cell survival and differentiation in human pluripotent stem cells. Nicotinamide inhibits the phosphorylation of myosin light chain, suppresses actomyosin contraction, and leads to improved cell survival after individualization. Further analysis demonstrates that nicotinamide is an inhibitor of multiple kinases, including ROCK and casein kinase 1. We demonstrate that nicotinamide affects human embryonic stem cell pluripotency and differentiation as a selective kinase inhibitor. The findings in this report may help researchers design better strategies to develop nicotinamide-related stem cell applications and disease treatments.
INTRODUCTION
Balanced cellular metabolism and signaling regulation are both essential for mammalian cells to survive, proliferate, and function. Nutrients are conventionally thought to act only as enzyme cofactors or energy sources. However, recent advances demonstrate that specific nutrients could also be involved in functions beyond nutritional support, such as epigenetic regulations and kinase cascades (Blaschke et al., 2013;Chen et al., 2013;Gao et al., 2016;Liu et al., 2017;Lu and Thompson, 2012;Sciacovelli et al., 2016;Tzatsos and Kandror, 2006;Yuan et al., 2013). In this report, we explored nicotinamide's role in stem cell regulation, and showed its regulatory roles as a kinase inhibitor.
Nicotinamide is the amide form of niacin, and both of them belong to the vitamin B3 family. They are the precursors of nicotinamide adenine dinucleotide (NAD), which acts as a coenzyme in multiple cellular processes, including energy metabolism and DNA repair. Nicotinamide can be converted into nicotinamide mononucleotide (NMN) by nicotinamide phosphoribosyltransferase (NAMPT), which is then turned into NAD + by nicotinamide mononucleotide adenylyltransferase (NMNAT) (Maiese et al., 2009). The normal plasma concentration of nicotinamide and niacin is around 5 mM (Odum and Wakwe, 2012). Deficiencies in nicotinamide and niacin could lead to decreased NAD + production and cause pellagra, which affects the skin, digestive system, and CNS (Prakash et al., 2008). Nicotinamide, but not niacin, is also an inhibitor of sirtuin and poly(ADP-ribose) polymerase (PARP), which regulate protein deacetylation and DNA repair (Avalos et al., 2005; Jackson et al., 2003; Kuchmerovska et al., 2004; Saldeen and Welsh, 1998).
In this study, we set to explore the roles of common vitamins in human pluripotent stem cells (hPSCs), and identified nicotinamide as a regulator of hPSC pluripotency, survival, and differentiation. Nicotinamide promoted hPSC cell survival and differentiation. Further analysis showed that nicotinamide promoted cell survival as a Rho-associated protein kinase (ROCK) inhibitor, while it also inhibited other kinases including casein kinase 1 (CK1) and a few others. Finally, we demonstrated that nicotinamide also initiated differentiation as a kinase inhibitor. Our study revealed the mechanisms underlying nicotinamide's key functions, and expanded our understanding of its application in cell culture practices.
RESULTS
Nicotinamide Promotes hPSC Survival after Individualization through the Regulation of ROCK Pathway
hPSCs are vulnerable to cell death after individualization (Chen et al., 2010; Ohgushi et al., 2010). To identify the function of vitamins in stem cell regulation, we tested a set of 12 vitamins at three doses (based on their concentration in DMEM/F12) on cell survival after dissociation in H1 human embryonic stem cells (hESCs) (Figure S1A). Nicotinamide was the only vitamin that promoted hESC survival after individualization, while high concentrations of retinol and cholecalciferol inhibited cell survival (Figure S1A). The effect of nicotinamide was dose dependent. Nicotinamide promoted survival of individualized cells at 5 and 10 mM, but at 25 mM showed significant toxicity to hESCs (Figure 1A). We then examined cell apoptosis during passage, and found that 10 mM nicotinamide significantly reduced the Annexin V-positive and propidium iodide-negative cells (Figures S1B and S1C). It suggested that nicotinamide suppressed apoptosis, and the observation was consistent with the improved cell survival by nicotinamide. Microscopy images showed that nicotinamide also suppressed the cell blebbing phenotype after dissociation in a dose-dependent manner (Figures 1B and 1C). The beneficial effect was also observed in other pluripotent stem cells (Figures S1D-S1F) as well as on different coating surfaces (Figures S1G and S1H).
To understand nicotinamide's role in cell survival, we tested modulators of a few known nicotinamide targets, including sirtuin inhibitors (EX527 and SirReal2) and PARP inhibitor (ABT888). However, neither single inhibitor nor their combination demonstrated the ability to improve cell survival ( Figure S1I). It indicates that nicotinamide could function through some other pathways to promote cell survival. It is well known that individualized hESCs were killed through ROCK/actomyosin activation (Chen et al., 2010;Ohgushi et al., 2010). We compared the impact of nicotinamide on cell survival with ROCK inhibitor Y27632. After cell individualization and passaging, nicotinamide improved cell survival with similar efficiency as ROCK inhibitor. However, no additive beneficial effect was observed when they were applied together (Figure 1D), which suggested that nicotinamide and ROCK inhibitor possibly functioned through the same pathway. We then analyzed the impact of nicotinamide on the ROCK pathway. ROCK directly phosphorylates myosin phosphatase-targeting protein (MYPT) at Thr696, and also regulates the phosphorylation of myosin light chain (MLC) directly or indirectly through MYPT (Totsukawa et al., 2000). After dissociation, the phosphorylation of MLC and MYPT increased, and Y27632 and nicotinamide suppressed the phosphorylation of both MLC and MYPT significantly ( Figure 1E). This impact of nicotinamide was dose dependent ( Figure 1F). Immunostaining results showed that both nicotinamide and Y27632 decreased the colocalization between p-MLC (Ser19) and actin filament after hESC dissociation ( Figure 1G). These data indicated that nicotinamide was a modulator of the ROCK pathway.
Nicotinamide Is a Direct ROCK Inhibitor Independent of NAD Pathway
Nicotinamide is the precursor of NAD+ and NADH, so we tested whether nicotinamide improved cell survival through NAD metabolites. Niacin, NMN, NAD+, and NADH were added to individualized cells, but none of them had a significant effect on cell survival (Figure 2A), and these molecules did not block the cell blebbing after individualization (Figure S2A). NAMPT converts nicotinamide into NMN, but NAMPT inhibitors did not alter the nicotinamide impact on cell survival (Figures 2B and S2B). Niacin also had no impact on the phosphorylation of MLC and MYPT (Figure 2C). These results suggested that the effects of nicotinamide on cell survival and ROCK pathway regulation were possibly independent of the NAD pathway, and nicotinamide itself might be the direct effector.
To study how nicotinamide inhibited ROCK, we tested the protein level of ROCK1 and ROCK2 during dissociation, and found that nicotinamide had no impact (Figure S2C). Then we evaluated the activity of ROCK1 and ROCK2 in vitro with different doses of nicotinamide and niacin ( Figures 2D and 2E). Surprisingly, the addition of nicotinamide significantly suppressed ROCK1 and ROCK2 activity in a dose-dependent manner, but niacin had almost no effect ( Figures 2D and 2E). Computational simulation demonstrated that nicotinamide could potentially interact with key amino acid functional groups in the active site of ROCK2 ( Figure S2D). The binding constant assay also confirmed the inhibition of ROCK1 and ROCK2 by nicotinamide ( Figures 2F and 2G).
Nicotinamide Regulates More Than the ROCK Pathway
ROCK inhibitor Y27632 increases the cloning efficiency of hPSCs (Chen et al., 2010), so we examined the impact of nicotinamide on cloning efficiency. Compared with Y27632, nicotinamide-treated cells showed a much smaller improvement in cloning efficiency (Figure 3A), even though both reagents had a similar impact on 24-hr cell survival (Figure 1D). High concentrations of nicotinamide decreased the cell growth rate of hESCs (Figure 3B) and reduced the mRNA level of NANOG and POU5F1 (Figures 3C and 3D), which indicated that nicotinamide possibly induced hESC differentiation. Among the set of 12 vitamins examined in the differentiation of hESCs, nicotinamide was the only one that affected the pluripotency of hESCs (Figures S3A and S3B). Taken together, this evidence suggests that nicotinamide may have additional functions on pluripotency other than regulating the ROCK pathway.
(Figure 1G legend: confocal images of individualized hESCs treated with 10 mM nicotinamide (Nam) or ROCK inhibitor Y27632 (ROCKi); red, phalloidin 594; green, p-MLC (Ser19).)
To study the other functions of nicotinamide in hESCs beyond ROCK inhibition, we analyzed the global gene expression profile after 24 hr of nicotinamide and ROCK inhibitor treatment (Table S1). Hierarchical clustering analysis showed that nicotinamide treatment was not clustered with the ROCK inhibitor group ( Figure 3E). Compared with control, nicotinamide increased the expression of 371 genes, and decreased the expression of 640 genes after a 24-hr treatment. However, only a small portion of these genes were shared by the cells treated with ROCK inhibitor ( Figure S3C). The KEGG analysis showed that the genes downregulated by nicotinamide were enriched in path-ways associated with pluripotency of stem cells, phosphatidylinositol 3-kinase, metabolism, transcription, and cancer ( Figure 3F), and the gene expression patterns were different compared with the genes downregulated by the ROCK inhibitor ( Figure S3D). The genes upregulated by nicotinamide were also enriched in different pathways from those upregulated by the ROCK inhibitor ( Figures S3E and S3F). These data indicated that nicotinamide had multiple functions in hESC regulation.
Nicotinamide Affects hESC Differentiation in a Multifaceted Manner
Because nicotinamide was a direct ROCK inhibitor at high concentration, we hypothesized that nicotinamide might be able to inhibit other kinases. Considering that most nicotinamide effects appeared at 10 mM in cell culture, we measured cellular nicotinamide amounts when 10 mM of nicotinamide was added to the medium. Liquid chromatography-mass spectrometry (LC-MS) results demonstrated that the cellular nicotinamide concentration was around 1.5 mM after 1 hr of incubation (Figure 4A). Based on the above information, the KINOMEscan assay in a competition-binding method was used to screen the interaction between nicotinamide and the active sites of 97 kinases (Davis et al., 2011; Egan et al., 2015; Fabian et al., 2005; Somoza et al., 2015), and the screening was performed at 1 and 3 mM. We found that multiple kinases were significantly inhibited by nicotinamide (Figure 4B; Table S2). Nicotinamide inhibited 96.7% of the kinase-ligand interaction of ROCK2 at 3 mM, which is consistent with the in vitro kinase assay (Figure 2D). It also inhibited 92.3% of the kinase-ligand interaction of CK1δ (Figure 4B; Table S2). The kinase binding constants of nicotinamide with CK1δ, CK1α, and CK1ε were 352.512, 546.580, and 612.076 mM, respectively (Figures 4C-4E). β-Catenin is the substrate of CK1α, and is specifically phosphorylated at Ser45 (Amit et al., 2002; Liu et al., 2002). We examined β-catenin phosphorylation (Ser45) in hESCs, and found that nicotinamide and CK1 inhibitor significantly suppressed Ser45 phosphorylation (Figures 4F and 4G). We also evaluated the impact of nicotinamide on CK1α activity in vitro using the bioluminescent kinase assay, and the result showed that nicotinamide inhibited CK1α in a dose-dependent manner (Figure 4H). Based on the findings in the kinase screen, we examined whether any of the nicotinamide-inhibited kinases or other targets could affect differentiation in hESCs. hESCs were treated with a set of small molecules that modulated the activities of nicotinamide targets, and allowed to spontaneously differentiate for 3 days. Similar to nicotinamide, CK1 inhibitor D4476 significantly reduced the mRNA level of pluripotency markers (NANOG and POU5F1); in contrast, other inhibitors in the ROCK, PARP, and sirtuin pathways did not have a significant impact (Figures 4I and 4J). In embryoid body differentiation, nicotinamide inhibited the expression of meso-endoderm marker genes (MIXL1, TBXT, EOMES, and SOX17), and induced the expression of ectoderm marker genes (PAX6 and NEUROD1). CK1 inhibitor D4476 demonstrated a similar effect (Figure S3G). We also confirmed that both nicotinamide and CK1 inhibitor D4476 blocked meso-endoderm differentiation in BMP4-induced differentiation (Figures S3H-S3J). These results suggest that nicotinamide could lead to hPSC differentiation through the inhibition of CK1.
(Figure 2 legend, panels B-G: NAMPT inhibitor FK866 alone or together with 10 mM nicotinamide or 10 mM Y27632; niacin had no effect on the phosphorylation of MLC and MYPT; in vitro ROCK1 and ROCK2 activity determined by ELISA under nicotinamide or niacin treatment, with 0.2 mM Y27632 as positive control; binding constant measurements for nicotinamide with ROCK1 and ROCK2, with the x axis showing nicotinamide concentration (mM) on a log10 scale.)
(Figure 4 legend, panels A-E: cellular nicotinamide concentration determined by LC-MS after 1 hr of treatment in E8 medium with or without 10 mM nicotinamide; kinase screening profile of nicotinamide at 1 and 3 mM against 97 kinases from the DiscoverRx KINOMEscan service, where % Ctrl represents the primary binding-interaction screen and lower numbers indicate stronger hits (see also Table S2); binding constant measurements for nicotinamide with CK1δ, CK1α, and CK1ε, with the x axis showing nicotinamide concentration (mM) on a log10 scale.)
Nicotinamide was reported as an inducer of retinal pigment epithelium (RPE) differentiation (Buchholz et al., 2013), so we explored whether nicotinamide affects RPE differentiation through CK1 inhibition ( Figure 4K). Consistent with previous reports, nicotinamide increased the expression of early eye field markers LHX2, PAX6, and RAX on day 6 of RPE differentiation. ROCK inhibitor, SIRT2 inhibitor, PARP inhibitor, and niacin alone had little effect, while joint treatment with ROCK inhibitor and SIRT1 inhibitor improved the mRNA level of LHX2, PAX6, and RAX, even though the level was much lower than with nicotinamide ( Figures S4A-S4C). At the same time, CK1 inhibitor D4476 significantly induced the expression of early eye field markers LHX2, PAX6, and RAX ( Figures 4L-4N). The positive impact of nicotinamide, CK1 inhibitor, and CK1/ROCK dual inhibition on RPE differentiation was further confirmed by LHX2 immunostaining (Figure 4O) and flow cytometry analysis ( Figure S4D). Similar results were obtained with H9 ( Figures S4E-4G) and human induced pluripotent stem cell lines NL1 ( Figures S4H-S4J) and NL4 ( Figures S4K-S4M). These results indicate that the effect of nicotinamide on RPE differentiation potentially relies on its inhibition on ROCK and CK1 pathways.
DISCUSSION
Nicotinamide is widely used in disease treatments and stem cell applications, but many of its effects cannot be explained by its role in nutritional regulation. We demonstrated that nicotinamide regulates stem cell survival and differentiation through the inhibition of specific kinases. Besides its complicated role in metabolism, DNA repair, and epigenetic modification, nicotinamide can modulate various cellular functions through kinase cascades. This is consistent with the diverse applications related to nicotinamide.
Nicotinamide has long been used in stem cell culture to improve stem cell performance. Nicotinamide enhanced cell survival and reprogramming, but its function was attributed to its role in the sirtuin pathway and nutritional regulation (Avalos et al., 2005; Son et al., 2013). Our study showed that nicotinamide was a ROCK inhibitor. ROCK inhibitors are known to suppress actomyosin contraction, improve cell survival, and enhance reprogramming efficiency (Chen et al., 2010; Ohgushi et al., 2010; Watanabe et al., 2007). It is possible that nicotinamide benefits the stem cell culture through its role as a ROCK inhibitor. It is noteworthy that nicotinamide is used in many organoid culture systems, which are also benefited by ROCK inhibition in organoid formation (Miyoshi and Stappenbeck, 2013). The positive effect of nicotinamide on organoid culture may also be related to its role as a ROCK inhibitor.
(Figure 4 legend, panels F-N: phosphorylation of β-catenin decreased by nicotinamide or CK1 inhibitor D4476 as shown by western blot, with densitometric quantification; in vitro CK1α activity under nicotinamide treatment, with D4476 as positive control; NANOG and POU5F1 mRNA after 3 days of spontaneous differentiation under nicotinamide, Y27632, niacin, Ex527, SirReal2, ABT888, or D4476; schematic of the differentiation protocol toward early eye field; LHX2, RAX, and PAX6 expression by qPCR.)
Nicotinamide is used in many different stem cell differentiation platforms, and our data show that this function is likely not based on its inhibition of ROCK. The kinase screen data in this report showed that nicotinamide also inhibited CK1 and other kinases that are associated with pluripotency. In the limited tests, we showed that some of nicotinamide's impacts on differentiation could potentially be explained by its ability to inhibit CK1 pathways. In RPE differentiation, combination of ROCK and CK1 inhibitors achieved similar effects as nicotinamide alone, which supports our argument that nicotinamide might drive differentiation through CK1 modulation.
Unlike nicotinamide, niacin does not inhibit either ROCK or CK1, even though both belong to the vitamin B3 family. It is important to consider their differential impact on cellular functions when people plan to use vitamin B3 for specific treatments. The kinase pathways affected by nicotinamide could provide valuable references for relevant clinical applications.
We noticed that nicotinamide was effective in kinase inhibition only at high concentrations, and there is an obvious dose-dependent effect. The high concentration of nicotinamide is often used in disease treatment and cell culture. However, the level of nicotinamide in the serum is much lower. This suggests a dual role of nicotinamide controlled by its cellular concentration. Low-level nicotinamide is sufficient to meet cellular needs as a nutrient, but high concentrations lead to kinase inhibition and subsequently affect survival and differentiation. This partially explains why nicotinamide's effect on kinase activity was not previously revealed.
In summary, this report revealed nicotinamide as a kinase inhibitor regulating stem cell survival and differentiation. These results have practical implications for nicotinamide-related treatments, and provide another angle to further improve its applications.
EXPERIMENTAL PROCEDURES
Experimental procedures are also provided in Supplemental Information.
hPSC Culture and Survival Assays
The use of hESCs and hiPSCs was approved by the Institutional Review Board at the University of Macau. hESC culture and survival assays were carried out as described previously (Chen et al., 2010). See Supplemental Information for more details.
hESC Differentiation to Early RPE Lineage
hPSCs were passaged 1:6 on Matrigel (Corning Life Sciences) in E8 medium with 10 mM Y27632 and changed to fresh E8 after cell attachment for 24 hr. Then the differentiation was induced following the methods reported previously with slight modifications (Buchholz et al., 2013). The detailed method is in Supplemental Information.
Statistical Analysis
Data are shown as means ± SEM of at least three independent experiments unless otherwise specified, and Student's t test was used for statistical analysis. p values < 0.05 were considered significant.
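To make the convention above concrete, here is a minimal Python sketch of how means ± SEM and a two-sample Student's t test could be computed for three independent experiments; the numbers are placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

# Placeholder cell-survival values (fold of control) from three independent experiments.
control = np.array([1.00, 0.95, 1.05])
nicotinamide = np.array([3.10, 2.80, 3.40])

def mean_sem(x):
    """Mean and standard error of the mean (SD / sqrt(n))."""
    return x.mean(), x.std(ddof=1) / np.sqrt(x.size)

m_c, sem_c = mean_sem(control)
m_n, sem_n = mean_sem(nicotinamide)
t_stat, p_value = stats.ttest_ind(nicotinamide, control)   # two-sided Student's t test
print(f"control {m_c:.2f} ± {sem_c:.2f}, Nam {m_n:.2f} ± {sem_n:.2f}, p = {p_value:.4f}")
```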
Figure S2, Related to Figure 2. hESC survival is not improved through SIRT, PARP or NAD pathway. (A) Nicotinamide decreased the percentage of blebbing cells, but niacin, NMN, NAD+ and NADH had no effect. hESCs were dissociated by TrypLE for 5 minutes, neutralized by 0.5% BSA, and then treated with 10 mM nicotinamide (Nam), 5 mM Niacin, 5 mM NMN, 5 mM NAD+, or 5 mM NADH for 30 minutes. The percentage of blebbing cells was normalized by the total cell number (n = 5 images). Data are representative of three independent experiments. (B) Nicotinamide phosphoribosyltransferase (NAMPT) inhibitor STF118804 did not block the effect of nicotinamide on cell survival (n = 3 independent experiments). White, STF118804 alone; red, STF118804 together with nicotinamide; blue, STF118804 together with ROCK inhibitor. Nam, nicotinamide 10 mM; ROCKi, Y27632 10 μM. (C) Nicotinamide and ROCK inhibitor did not change the protein level of ROCK1 and ROCK2 after the indicated time of individualization (n = 3 independent experiments). Nam, nicotinamide 10 mM; ROCKi, Y27632 10 μM. (D) Summary of the binding of nicotinamide to ROCK2 and CK1δ predicted by structure modeling. Data are shown as mean ± SEM. *, p < 0.05 compared with control.
Figure S3, Related to Figure 3. (C) Venn diagram representing the number of differentially expressed genes in microarray analysis. hESCs were treated with Nicotinamide (10 mM) and ROCK inhibitor (Y27632 10 μM) in E8 medium for 24 hours. Green, genes up-regulated by ROCK inhibitor treatment; blue, genes up-regulated by nicotinamide treatment; pink, genes down-regulated by nicotinamide treatment; yellow, genes down-regulated by ROCK inhibitor.
(D-F) Bubble plot of enriched KEGG pathways from ROCK inhibitor down-regulated genes (D), nicotinamide up-regulated genes (E) and ROCK inhibitor up-regulated genes (F). Rich factor is the ratio of the treatment-regulated gene number to the total gene number of a certain pathway. A Q value is the corrected p value ranging from 0 to 1. The color and size of the dots indicated the range of the Q-value and the number of genes mapped to the specific pathways.
(H-J) Nicotinamide and CK1 inhibitor decreased the gene expression of TBXT (H), MIXL1 (I) and SOX17 (J) in BMP4 induced meso-endoderm differentiation. hESCs were treated with 20 ng/ml BMP4 in E8 medium for 2 days, and then the expression level of meso-endoderm marker genes were analyzed by Q-PCR, normalized to GAPDH and to BMP4 control (n = 3 independent experiments). Data shown as mean ± SEM. *, p < 0.05, ***, p < 0.001 compared with control.
Human PSC Culture
Human ESCs (H1 and H9) and human iPSCs (NL1 and NL4) from NIH were cultured in E8 medium on Matrigel-coated plates for 3-4 days, and then passaged with DPBS/EDTA. The details of cell maintenance were described previously (Chen et al., 2010). In this study, most experiments were conducted on H1 hESC line unless otherwise stated, and some key experiments were also confirmed in H9 and iPSCs.
Survival Assays
The assay was performed as previously described with some modifications (Chen et al., 2010). Briefly, human ESCs and iPSCs were cultured for 3 days, dissociated with TrypLE, and neutralized with the medium containing 0.5% BSA. Cells were then harvested and counted, and 20,000-40,000 cells were plated into each well of 24-well plates containing 500 μl medium and different reagents. After 24 hours, cells were dissociated with TrypLE, neutralized with 10% FBS, and counted by flow cytometry.
Cell growth and Cloning assay
The assay was performed as previously described with some modifications (Chen et al., 2010). Briefly, human embryonic stem cells (hESCs) (H1 cells were used unless otherwise stated) were cultured for 3 days, dissociated with TrypLE, and neutralized with medium containing 0.5% BSA. Cells were then harvested, counted, and seeded at the density of 500 per well in 12-well plates. The cloning efficiency was measured after 7 days.
To test effect of reagents on growth, hESCs were cultured for 3 days, dissociated with TrypLE, and neutralized with medium containing 0.5% BSA. Cells were then harvested and split at a ratio of 1:12 dilution in 24-well plates. Treatments were added to attached cells after 24 hours, and continued for indicated periods of time, and cell counts were determined using flow cytometer and compared to control before treatment.
hESC differentiation
For 3-day spontaneous differentiation, hESCs were dissociated by DPBS-EDTA and passaged at a 1:12 dilution into a 12-well plate coated with Matrigel. After 24 hours of cell attachment, the maintenance medium (E8) was replaced with 1ml of E6 medium (E8 minus TGFβ and FGF2), and the medium was changed every 1 or 2 days for 3 days.
The embryoid body (EB) formation was performed as previously described with some modifications (Lin and Chen, 2008). Briefly, 70% confluent hESCs were dissociated with DPBS/EDTA, and passaged 1:2 into each well of AggreWell-800 (Corning) into E8 medium with 10 μM Y27632 for 24 hours. EBs were cultured in E8 medium for another 2 days with half-change of fresh medium every day. On the third day, EBs were removed from microwells by gently pipetting, and transferred into poly-HEMA coated 12-well plates in E6 medium to culture for 14 days. The medium was changed every 2 days.
For meso-endoderm differentiation, hESCs were passaged at a 1:6 ratio by DPBS-EDTA into a 12-well plate coated with Matrigel. After 24 hours of cell attachment, medium was changed to E8 medium with 20 ng/mL BMP4 for 2 days. RNA was harvested for Q-PCR analysis.
hPSC differentiation to early RPE lineage
RPE differentiation was induced following the methods reported previously with slight modifications (Buchholz et al., 2013). hPSCs were passaged 1:6 onto Matrigel (Corning) in E8 medium with 10 μM Y27632 and changed to fresh E8 medium after cell attachment for 24 hours. From day 0 to 2, 50 ng/ml Noggin (R&D Systems), 3 μM IWP2 (Selleck), 10 ng/ml IGF1 (R&D Systems) and other chemicals were added to E6 medium. From day 2 to 4, 10 ng/ml Noggin, 3 μM IWP2, 10 ng/ml IGF1, 5 ng/ml FGF2 and other chemicals were added to E6 medium. From day 4 to 6, 3 μM IWP2, 10 ng/ml IGF1 and 20 ng/ml Activin A (R&D Systems) were added to E6 medium. Total RNA was harvested to analyze the gene expression. From day 6 to 12, the cells were cultured in E6 medium, and then harvested for flow cytometry analysis. At day 8, the differentiated cells were passaged by dispase II, cultured for another 4 days, and then harvested for immunostaining.
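The timed media changes above can be summarized as a simple schedule; the sketch below records the supplements and concentrations exactly as listed in the protocol, while the dictionary layout and helper function are only an illustrative convenience (the "other chemicals", i.e., the test compounds for each experimental group, are added on top of this base schedule).

```python
# Early-RPE differentiation schedule transcribed from the protocol above.
# Keys are (start_day, end_day) half-open intervals; all media are E6 based.
rpe_schedule = {
    (0, 2):  {"Noggin": "50 ng/ml", "IWP2": "3 uM", "IGF1": "10 ng/ml"},
    (2, 4):  {"Noggin": "10 ng/ml", "IWP2": "3 uM", "IGF1": "10 ng/ml", "FGF2": "5 ng/ml"},
    (4, 6):  {"IWP2": "3 uM", "IGF1": "10 ng/ml", "Activin A": "20 ng/ml"},
    (6, 12): {},   # plain E6; cells harvested for flow cytometry at day 12
}

def supplements_for_day(day):
    """Return the E6 supplements that apply on a given day of differentiation."""
    for (start, end), mix in rpe_schedule.items():
        if start <= day < end:
            return mix
    raise ValueError("day outside the 12-day protocol")

print(supplements_for_day(3))   # {'Noggin': '10 ng/ml', 'IWP2': '3 uM', ...}
```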
Flow cytometry
At day 12 of RPE differentiation, the differentiated cells were dissociated with TrypLE, and fixed with 1% paraformaldehyde in PBS at 37℃ for 10 minutes. After washing with PBS, cells were permeabilized with 0.1% Triton X-100 in PBS for 10 minutes at room temperature. Primary antibody mouse anti-human LHX2 was incubated with cells at a 1:100 dilution in 1% BSA in PBS for 1 hour at room temperature. After washing, Alexa Fluor® 488 conjugated goat anti-mouse secondary antibody was used at 1:1000 dilution in 1% BSA for 1 hour at room temperature. After washing, cells were resuspended in PBS for flow cytometry analysis using BD Accuri C6. Undifferentiated hESCs were stained with LHX2 as negative control for gating.
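The text states that undifferentiated hESCs stained with LHX2 served as the negative control for gating but does not give the exact gating rule; the sketch below illustrates one common approach (placing the gate at the 99th percentile of the negative control) on simulated fluorescence values, purely as an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated LHX2 fluorescence intensities (arbitrary units); real values would come
# from the exported flow cytometry data, which are not part of this text.
negative_control = rng.lognormal(mean=4.0, sigma=0.4, size=10_000)   # undifferentiated hESCs
differentiated = np.concatenate([rng.lognormal(4.0, 0.4, 4_000),
                                 rng.lognormal(6.0, 0.4, 6_000)])     # day-12 cells

gate = np.percentile(negative_control, 99)            # ~1% false-positive gate (assumed rule)
pct_positive = 100.0 * np.mean(differentiated > gate)
print(f"LHX2+ gate at {gate:.0f} a.u.; {pct_positive:.1f}% of differentiated cells positive")
```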
Annexin V and propidium iodide staining
The annexin V and propidium iodide staining was performed following the protocol of the kit (ThermoFisher). Briefly, hESCs were dissociated with TrypLE to single cells, and seeded onto Matrigel-coated 12-well plates with or without 10 mM nicotinamide. After 8 hours, cells were harvested, washed with PBS, and then re-suspended in 100 μL 1x annexin-binding buffer containing 5 μL annexin V and 100 μg/mL propidium iodide. After incubating for 15 minutes, 400 μL annexin-binding buffer was added, and samples were kept on ice for flow cytometry analysis using BD Accuri C6.
ROCK activity measurement
The assay was done following the instruction from the Rho-associated kinase (ROCK) activity assay kit (Millipore). Briefly, the reaction mixture was prepared which included 1 mU ROCK1 or ROCK2, 0.5 mM ATP, 75 mM MgCl 2 and assay dilution buffer; ROCK inhibitor or nicotinamide was added, and the reaction was incubated at 30℃ for 30 minutes. After washing, anti-p-MYPT1 (Thr696) antibody was added, followed by incubation at room temperature for 1 hour. After washing, secondary antibody was added (room temperature for 1 hour). Then TMB/E substrate was added and allowed to develop for 1-5 minute before the reaction was stopped. The absorbance was measured at 450 nm.
Kinase screen and Kd determination
The kinase screen and Kd determination were performed by DiscoverX. Briefly, specific kinases were expressed in BL21 strain or HEK-293 cells, and then labeled with DNA tag for Q-PCR detection.
Magnetic beads coated with streptavidin were incubated with biotinylated ligand molecules. The affinity resins were then blocked and washed before being used for kinase assays. Binding reactions were performed by combining test compounds, affinity beads with ligand, and kinases. Kds were determined using a 9-point 3-fold dilution series with a DMSO control point. The reactions were performed in 384-well plates and incubated at room temperature for 1 hour. After washing and elution, the kinase concentration in the eluates of the beads was determined by q-PCR, and the % Ctrl value was calculated from these signals relative to the DMSO (negative control) and positive control reactions.
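As an illustration of how a dissociation constant can be extracted from a 9-point, 3-fold dilution series of this kind, the following Python sketch fits a one-site binding curve to simulated %Ctrl values; the model form, the data, and the fitting settings are assumptions for illustration and are not DiscoverX's actual analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc, kd):
    """One-site competition model: %Ctrl falls from 100 toward 0 as conc >> Kd."""
    return 100.0 * kd / (kd + conc)

# 9-point, 3-fold dilution series (mM), plus simulated %Ctrl responses for a
# hypothetical weak inhibitor with Kd around 0.35 mM.
conc = 10.0 / 3.0 ** np.arange(9)
rng = np.random.default_rng(1)
response = one_site(conc, kd=0.35) + rng.normal(0.0, 3.0, conc.size)

(kd_fit,), _ = curve_fit(one_site, conc, response, p0=[1.0])
print(f"fitted Kd ≈ {kd_fit:.2f} mM")
```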
CK1 activity measurement
The assay was performed following instructions from the kit (Promega). Briefly, the reaction was prepared with 25 μL total volume containing 10 ng CK1α, 2.5 μg Casein, 10 μM ATP, and CK1 inhibitor D4476 or different doses of nicotinamide in reaction buffer, and reacted at room temperature for 60 minutes. Then 25 μL ADP-Glo reagent was added, and incubated at room temperature for 40 minutes. After that, 50 μL Kinase detection reagent was added and incubated at room temperature for 30 minutes. Luminescence was read by a PerkinElmer Victor X3 Microplate Reader.
Percentage of CK1 activity was calculated as (test compound signal − background signal) / (negative control signal − background signal) × 100, where the test compound is nicotinamide or D4476, the background signal is the reaction without CK1α, and the negative control signal is the reaction without any inhibitor.
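A minimal sketch of the calculation just described, assuming the usual background-subtracted normalization; the luminescence readings are hypothetical.

```python
def percent_ck1_activity(signal_test, signal_background, signal_negative_ctrl):
    """Percent CK1 activity relative to the uninhibited reaction.

    signal_background: reaction without CK1 (no kinase);
    signal_negative_ctrl: reaction without any inhibitor (100% activity).
    """
    return 100.0 * (signal_test - signal_background) / (signal_negative_ctrl - signal_background)

# Hypothetical ADP-Glo luminescence readings (relative light units):
print(percent_ck1_activity(signal_test=42_000, signal_background=5_000,
                           signal_negative_ctrl=90_000))   # ~43.5% activity
```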
Immunostaining and Actin Staining
Cells were fixed with 4% Paraformaldehyde in PBS for 10 minutes at room temperature. After washing, fixed cells were permeabilized with 0.3% Triton-X 100 in PBS for 20 minutes at room temperature. After washing with PBS, cells were blocked with 1% BSA in 0.1% Triton-X100 / PBS, and incubated with the primary antibodies (1:200 in PBS containing 1%BSA and 0.1% Triton-X100) overnight at 4 ℃. The stained cells were washed with 0.1% Triton-X100 / PBS three times at room temperature, and then incubated with the secondary antibodies (1:1000) for 1 hour at room temperature. For actin staining, diluted phalloidin solution (1:100) was added to the cells, and incubated for 30 minutes at room temperature following the protocol of manufacturer (ThermoFisher).
After washing, cells were stained with hoechst (1:10000). The samples were mounted with Vectashield (Vector Laboratories), and imaged with Carl Zeiss Confocal LSM710.
Western blot
Briefly, 30 μg of protein extracted from H1 cells was loaded into the lanes of an SDS-PAGE gel, and transferred to PVDF membranes after electrophoresis. The membranes were blocked with 5% non-fat milk in TBS-T for 1 h at room temperature, and then incubated with primary antibodies overnight at 4℃. After washing, the membranes were incubated with secondary antibodies conjugated to horseradish peroxidase for 2 hours. The immuno-complexes were detected by the enhanced chemiluminescence method (ThermoFisher). The density of signals was quantified with ImageJ.
Quantification of intracellular nicotinamide
H1 cells were passaged and then cultured in 6-well plates with E8 medium (Ct) or E8 supplemented with 10 mM nicotinamide for 1 h. After 1 h, attached cells were dissociated by TrypLE, neutralized with 10% FBS/DMEM, and then counted with a hemocytometer. Cells were also collected for nicotinamide concentration quantification by LC-MS/MS. Sample preparation was based on published procedures (Ying et al., 2012; Zhang et al., 2016). Briefly, spent medium was removed, and cells were rinsed twice with 0.5 ml/well 0.9% (w/v) saline. Then 1 ml/well of −80°C 80% methanol was added to quench metabolism, and cells were scraped off. The metabolite-containing mixtures were put on ice and then centrifuged at 2000 × g for 15 min. The supernatant was collected and then evaporated by nitrogen blowing. Samples were re-suspended in 50 μl of 50% acetonitrile. A Waters Xevo TQD coupled with a Waters Acquity UPLC system was used for quantification of nicotinamide.
Microarray
The experiment procedure of microarray was performed as previously described (Liu et al., 2018).
RNAiso Plus was used to extract total RNA from the cells. Then RNA was converted to cRNA using SuperScript III kit and TargetAmp™-Nano Labeling Kit (Epibio) following the manufacturer's instructions. The HumanHT-12 v4 Expression BeadChip Kit (Illumina) was used for sample hybridization.
|
2018-12-12T19:53:53.374Z
|
2018-11-29T00:00:00.000
|
{
"year": 2018,
"sha1": "8c9b45cddfd5df1da4a2650bed31bed7c8b42cce",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2213671118304466/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "440ec2634e2a970e84f36b9daa1e374d8871959e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
255255856
|
pes2o/s2orc
|
v3-fos-license
|
Evaluation of the Anti-inflammatory Effect of Intra-articular Injection of Chondroitin Sulfate and Sodium Hyaluronate in Mechanically Induced Temporomandibular Joint Injury in Rabbits
BACKGROUND: Degenerative arthritis is the most common form of arthritis, usually affecting the hands, feet, spine, knees, and temporomandibular joint (TMJ) as well. TMJ degenerative arthritis causes symptoms of painful joints, loss of joint function, limited mouth opening, joint instability, and clicking. Non-surgical symptomatic treatments can successfully be used to treat patients with degenerative arthritis. AIM: This study aimed to evaluate the anti-inflammatory effect of intra-articular injection of chondroitin sulfate and sodium hyaluronate in mechanically induced acute injury in TMJ of rabbits. METHODS: An animal study was conducted, all rabbits received a mechanical injury using a contra-angle handpiece with a speed of 120 rpm of a fissure bur 4 mm in diameter and 4 mm in depth extending to subchondral bone. Thirty-two rabbits were randomized into four groups: Control group, sodium hyaluronate “SH” group, chondroitin sulfate “CS” group, and “CS-SH” group. After 7 days, rabbits in the control group, “SH” group, “CS” group, and “CS-SH” group were respectively treated with normal saline, sodium hyaluronate, chondroitin sulfate, or combination of CS&SH injection in the TMJ. All animals were treated once weekly for 3 weeks. A histological evaluation was performed. RESULTS: Histological findings showed a significantly reduced inflammatory cell, bone resorption, and fibrosis in the CS-HA-treated group. CONCLUSION: CS-SH injection has an anti-inflammatory effect on TMJ degenerative arthritis and aids in the reparative process.
Introduction
Degenerative arthritis is a type of arthritis caused by inflammation, breakdown, and degeneration of the cartilage of the joints. It is the most common form of arthritis, usually affecting the hands, feet, spine, knees, and temporomandibular joint (TMJ) as well, it is also known as degenerative joint disease [1]. TMJ degenerative arthritis affects the cartilage, subchondral bone, synovial membrane, and other hard and soft tissues causing changes such as TMJ remodeling, articular cartilage deterioration, abrasion, and local thickening and remodeling of the underlying bone [2]. Management of degenerative arthritis is largely symptomatic. Studies have shown that non-surgical treatment can successfully be used to treat patients with osteoarthritis [3], [4], [5].
Hyaluronic acid is a polysaccharide, it is the main component of the cartilage and the synovial fluid; it is responsible for the mechanical properties of the joint by allowing shock absorption, cartilage protection, and lubrication [6]. In osteoarthritis patients, synovial hyaluronate is depolymerized and is cleared at higher rates compared to normal subjects due to inflammation [7]. Intra-articular HA injection is an effective tool in reducing the pain and symptoms associated with internal derangement of TMJ [8].
Chondroitin sulfate -a sulfated glycosaminoglycan -is an important structural component of the extracellular cartilage matrix. It is an inhibitor of extracellular proteases involved in the metabolism of connective tissues and stimulates proteoglycan production by chondrocytes in vitro; it also inhibits cartilage cytokine production and increases the intrinsic viscosity of the synovial liquid [9]. Some authors found that intra-articular injection of chondroitin sulfate stimulated the chondrocyte metabolic activity and was possibly helpful to decrease the degenerative process [10], [11].
Preliminary clinical trials were in favor of the effectiveness of intra-articular injection of sodium hyaluronate combined with chondroitin sulfate as a viscosupplementation for the degenerative osteoarthritic TMJ, and this combination can be used as a safe and effective treatment for all cartilage lesions [10], [12].
Materials and Methods
An animal study was conducted in the following design.
Animals
All experimental procedures were approved by the Research Ethics Committee of the Faculty of Dentistry, Suez Canal University (Ismailia, Egypt), (I.R.B. no. 76/2018) and provided ethical guidelines for the study.
The sample size was based on a previously published study with a similar experimental design [13].
Thirty-two mature (aged 6 months or more) male New Zealand rabbits were employed in this study. They were selected according to weight (2.5−3 kg). Rabbits were housed in clean, well-ventilated stainless steel cages at a temperature of 25 ± 3°C throughout the experiment and were left for 1 week for acclimatization. No special feeding was provided other than the known protocol in the animal house at the Faculty of Medicine - Suez Canal University.
Induction of full-thickness osteochondral defect
A full-thickness osteochondral defect - to simulate osteoarthritic changes - was created in the left TMJ of all rabbits by the following technique [12], [14]:
• General anesthesia was induced with an intramuscular injection of 50 mg/kg ketamine HCl (ketamine HCl 200 mg/mL, injectable solution, 10 mL, by NexGen pharmaceuticals Co., NY, USA) and was maintained with an intramuscular injection of 5 mg/kg xylazine HCl (Xylazine HCl 125 mg/mL, injectable solution, 100 mL, by NexGen Pharmaceuticals Co., NY, USA) and 0.3 mg/kg ketamine HCl.
• One side of the temple area of the rabbits was shaved, scrubbed with 10% povidone iodine, and draped in a sterile fashion.
• The left TMJ area incision was performed (Figure 1). The flap was closed using absorbable suture material, and the animals were then left for 1 week for complete soft-tissue healing.
Experimental study design
• Animals were divided into four groups; each group consisted of eight rabbits.
• Group (1): Saline control group; 0.1 ml saline - to simulate no treatment - was injected intra-articularly in the TMJ once per week for 3 weeks.
Histopathological sample preparation
In the 6th week post-injection [14], to ensure reasonable cartilage and bone healing, rabbits in all groups were euthanized by an overdose of ketamine HCl injection. Osteochondral tissues were separated and fixed in a 10% phosphate-buffered paraformaldehyde solution. Tissues were dehydrated, embedded in paraffin, sectioned at 4 µm, and stained with hematoxylin and eosin (H&E) (Life Chemicals group, Alexandria, Egypt). After sacrifice and collection of the specimens, the dead experimental animals were disposed of by burning in the Animal Ashing Unit of the Faculty of Medicine - Suez Canal University. Then, specimens were examined blindly under a light microscope to evaluate the severity of the osteochondral defect, the condylar cartilage tissue, and the subchondral bone.
Results
Rabbits with full-thickness osteochondral defect of control group that received normal saline showed marked inflammatory infiltrate formed of lymphocytes and many plasma cells associated with marked fibrosing reaction, bone resorption, few degenerated osteocytes, and few hyperplastic osteoblasts and scattered osteoclasts. These deleterious effects associated with full-thickness osteochondral defect were attenuated in severity with hyaluronic acid and/or chondroitin sulfate administration. However, treatment with combined hyaluronic acid and chondroitin sulfate was more beneficial in attenuating the severity of osteochondral defect with minimal to no residual inflammation, minimal fibrosis, and osteoblastic rimming compared to the control group (Figure 3a-d and Table 1).
Inflammatory cells
Inflammatory cells were present in 100% in control group, in 62.5% in sodium hyaluronate group, in 25% in chondroitin sulfate group, and in 12.5% in sodium hyaluronate and chondroitin sulfate group. This difference between groups was statistically significant (p = 0.001) (Figure 4).
Bone resorption
Bone resorption was present in 87.5% in control group, in 62.5% in sodium hyaluronate group, in 37.5% in chondroitin sulfate group, and in 12.5% in sodium hyaluronate and chondroitin sulfate group. This difference between groups was statistically significant (p = 0.019).
Fibrosis
Fibrosis was present in 87.5% in control group, in 62.5% in sodium hyaluronate group, in 37.5% in chondroitin sulfate group, and in 12.5% in sodium hyaluronate and chondroitin sulfate group. This difference between groups was statistically significant (p = 0.019).
Osteoclasts
Osteoclasts were present in 87.5% in control group, in 75% in sodium hyaluronate group, in 25% in chondroitin sulfate group, and in 12.5% in sodium hyaluronate and chondroitin sulfate group. This difference between groups was statistically significant (p = 0.007).
Osteoblasts
Osteoblasts were present in 37.5% in control group, in 62.5% in sodium hyaluronate group, in 75% in chondroitin sulfate group, and in 87.5% in sodium hyaluronate and chondroitin sulfate group. This difference between groups was not statistically significant (p = 0.259).
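The paper does not state which test produced the p-values above; as one plausible reading, a chi-square test of independence on the 4 × 2 presence/absence table reconstructed from the reported percentages (n = 8 per group) gives values of similar magnitude. The sketch below does this for the inflammatory-cell outcome; the choice of test is our assumption.

```python
import numpy as np
from scipy.stats import chi2_contingency

groups = ["control", "SH", "CS", "CS-SH"]
# Rabbits with inflammatory cells present, reconstructed from 100%, 62.5%, 25%, 12.5% of n = 8.
present = np.array([8, 5, 2, 1])
absent = 8 - present

chi2, p, dof, _ = chi2_contingency(np.column_stack([present, absent]))
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.4f}")   # close to the reported p = 0.001
```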
Discussion
The experimental study was based on 32 male rabbits subjected to induction of an osteochondral defect; we investigated the effect of intra-articular injection of sodium hyaluronate + chondroitin sulfate on the inflammatory reaction and cartilage formation in the defect through histopathological observation. Many osteoarthritic animal models use surgical approaches to initiate joint degeneration, and each method is designed to study a specific aspect of the injury or subsequent disease development. The present experimental study design was in accordance with multiple studies that created mechanical osteochondral defects in animals, whether in dogs [11], sheep [15], or rabbits [13], [16], [17], [18] as in this case. We selected New Zealand rabbits for being easily available and less aggressive animals [19], [20]; although horse articular cartilage is the most comparable to that of humans and horses have been used to study articular cartilage repair and osteochondral defects [21], [22], rabbits remain easier to manipulate and more affordable.
In all groups, there was an inflammatory infiltrate formed of lymphocytes and many plasma cells associated with marked fibrosing reaction, bone resorption, few degenerated osteocytes, and few hyperplastic osteoblasts with scattered osteoclasts, nevertheless, these deleterious effects associated with full-thickness osteochondral defect were attenuated in severity with hyaluronic acid and/or chondroitin sulfate administration.
In hyaluronic acid and chondroitin sulfate sections, there was minimal to no residual inflammation in the form of very few scattered inflammatory cells within marrow spaces and in surrounding tissues with minimal fibrosis and the inflammatory cells had a statistically significant difference compared to other groups (p = 0.001), these results go along with multiple findings of previous researches that proved that CS and/or the sulfated disaccharides appear to elicit an anti-inflammatory effect at the synovial membrane and chondrocytes levels [23], [24], as explained by Omata et al. [25] that chondroitin had a biological effect on animal models through significant inhibition of osteoarthritic edema, synovitis, and destruction of the articular cartilage as well as reduced CRP and IL-6. Moreover, Bauerova et al. [26] proved that CS reduced the production of pro-inflammatory cytokines, CRP, phagocytic activity, and the intracellular oxidative burst of neutrophils.
In vivo in different experimental arthritis, the number and severity of articular symptoms decrease after CS administration. In bones, CS accelerates the mineralization process and bone repair, which agrees with the present results. For new bone tissue (woven and mature), present results showed bone matrix with almost no bone resorption with regularly arranged osteocytes and few regenerated osteoblasts as well as normal cartilage mineralization (calcified cartilage) which was found to be higher in the HA+CS group compared to the HA, CS, and control groups, although the difference was not statistically significant, it might need longer time to show these changes in bone. Several recent studies have shown almost similar results [13], [18].
Previous animal model-based studies have examined the beneficial effect of intra-articular CS injection in degenerate cartilage; their results reflected the presence of less obvious degenerative injuries, suggesting that chondroitin sulfate stimulated the reparative process or delayed the disease evolution [11], [27]. Moreover, sodium hyaluronate has a viscosupplementation effect when administered by the intra-articular route, which improves the viscosity of the synovial fluid, decreases attrition and erosion, and also decreases the impact on the injured cartilage [28]. On the other hand, Smith et al. [29] did not observe any improvement in cartilage morphology when they used sodium hyaluronate as an intra-articular injection.
These overall findings coincide with the present results, as they showed that inflammatory cells and osteoclastic activities were minimal in the group that received both SH and CS. On the other hand, fibrosis, bone resorption, and osteoblasts findings among groups did not show any significant difference.
Conclusion
This study shows that intra-articular injection of a combination of chondroitin sulfate and sodium hyaluronate has an anti-inflammatory effect on the degenerative osteoarthritis of joints and it aids in the reparative process as well, as shown histopathologically.
Data Availability
The data that support the findings of this study are available from the corresponding author on reasonable request.
|
2022-12-30T16:17:16.904Z
|
2022-11-26T00:00:00.000
|
{
"year": 2022,
"sha1": "17e701d1cd064c7bf87e88daf00fe539d8ab750e",
"oa_license": "CCBYNC",
"oa_url": "https://oamjms.eu/index.php/mjms/article/download/11170/8248",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f43f337aa8df7556281d21604460e95dc0c5b28e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
164968955
|
pes2o/s2orc
|
v3-fos-license
|
To mathematical modeling of deformation of micropolar thin bodies with two small sizes
Abstract. We consider some problems of modeling the deformation of micropolar thin bodies with two small sizes. Starting from the three-dimensional equations of motion, the constitutive relations and the boundary conditions of the micropolar elasticity theory [1,2], we obtain the equations of motion, the constitutive relations and the boundary conditions of the micropolar theory of thin bodies under the parametrization of a thin body domain with two small sizes [3]. The boundary conditions and various representations of the system of equations of motion and of the constitutive relations of physical content are obtained in moments with respect to the Legendre polynomials. Note that with this method of constructing the theory of thin bodies with two small sizes we obtain an infinite system of ordinary differential equations. This system contains quantities which depend on one variable, namely the parameter of the base line. Thus, by decreasing the number of independent variables from three to one we increase the number of equations to infinity, which, of course, has obvious practical inconveniences. In this regard, we reduce the infinite system to a finite one. The initial-boundary value problems are formulated. To satisfy the boundary conditions on the front surfaces we construct correcting terms [4]. As a special case, we consider a prismatic body. Tensor calculus is used throughout this work [5-8].
1. To parametrization of a thin body domain with two small sizes

Let the thin body have a cross-section in the form of a rectangle whose sides are much smaller than its length (the third dimension). Assume that the considered domain has no line of symmetry.
The position vector of an arbitrary point of the thin body domain is represented as [3]

r(x^1, x^2, x^3) = r(x^3) + (1/2) x^I r_I(x^3),   r_I = h_I e_I,   I = 1, 2,     (1)

where r = r(x^3) is the vector parametric equation of the base line, x^3 ≡ s is a natural parameter, h_I = h_I(x^3) are the lengths of the sides of the rectangular cross-section, r_3 = ∂_3 r is the unit tangent vector to the base line, and e_1 and e_2 are the unit vectors of the principal normal and binormal to the base line, respectively; thus e_1, e_2, r_3 form the natural trihedral. Obviously, relation (1) for −1 ≤ x^I ≤ 1 and x^3 ∈ [0, l), where l ≤ ∞, is the vector parametric equation of a thin body with two small sizes. Moreover, if l < ∞, we have a thin body of finite length, and if l = ∞, a thin body of infinite length.
Since the number of pages is limited, we will not dwell on the issues of parametrization of the thin body domain; all calculations can be found in [3]. We also will not discuss the theory of moments of (m, n)-th order in detail. We only give the definition of this moment with respect to the system of Legendre polynomials; the calculation of the moments of some functions under this parametrization can be found in [3].
Let {u_k}, k = 0, 1, 2, ..., be the orthogonal system of Legendre polynomials on the segment [−1, 1], and let F(x^1, x^2, x^3) be a tensor field.
Definition (moment of (m, n)-th order).
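A plausible explicit form of this definition, assuming the usual normalisation of double Legendre moments on the square [−1, 1] × [−1, 1] (the exact normalisation factor used in [3] may differ), is the following sketch in LaTeX notation:

\begin{equation}
  M^{(m,n)}(\mathbf{F})(x^3)
    = \frac{(2m+1)(2n+1)}{4}
      \int_{-1}^{1}\!\int_{-1}^{1}
        \mathbf{F}(x^1, x^2, x^3)\, u_m(x^1)\, u_n(x^2)\, \mathrm{d}x^1\, \mathrm{d}x^2 .
  \tag{2}
\end{equation}

With this normalisation the moments are exactly the coefficients of the double Legendre series of F in the variables x^1 and x^2.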
Hereafter, we call M^(m,n)(F) the operator of moments of (m, n)-th order with respect to the Legendre polynomials; the first index m refers to the variable x^1 and the second index n refers to the variable x^2. Taking into account the orthogonality of the system of Legendre polynomials and definition (2), the basic relations for the moments, in particular relation (4) used below, follow easily.

2. Various representations of the system of equations of motion in moments with respect to the system of Legendre polynomials

Taking into account the representations of the gradient and the divergence under this parametrization (see [3]), we can write the equations of motion of the moment (micropolar) theory in the forms (5), where C is the third-rank discriminant tensor, P^q and μ^q are the components of the stress tensor and the couple stress tensor, respectively, u is the displacement vector, φ is the rotation vector, ρ is the density, J is the second-rank inertia tensor, and ⊗² is the inner 2-product [3,5,7,9]. Applying the moment operator of (m, n)-th order to (5), we obtain the corresponding system (6) in moments.
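For reference, the balance equations of micropolar elastodynamics that (5) represents are usually written, in the notation just introduced, in the following generic form; this is the standard textbook form and is given here only as a sketch, since the exact representation used in [3] may differ in details such as the placement of the density in the rotational inertia term:

\begin{equation}
  \nabla \cdot \mathbf{P} + \rho \mathbf{F} = \rho\, \partial_t^2 \mathbf{u}, \qquad
  \nabla \cdot \boldsymbol{\mu} + \mathbf{C} \overset{2}{\otimes} \mathbf{P} + \rho \mathbf{m}
    = \rho\, \mathbf{J} \cdot \partial_t^2 \boldsymbol{\varphi},
\end{equation}

where F and m denote the body force and the body couple per unit mass.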
On the boundary conditions under the parametrization with an arbitrary base line of a thin body domain with two small sizes
Let the front surfaces of a thin body with two small sizes be the surfaces determined by (1) for x^1 = −1 (x^1 = 1) and arbitrary x^2 and x^3, and for x^2 = −1 (x^2 = 1) and arbitrary x^1 and x^3, respectively. Let S_1 (S_2) be the left (right) element of the covering, i.e. the end-faces of the thin body. Known vectors of stress and couple stress are prescribed on the front surfaces and on the end-faces S_1 and S_2.
The boundary conditions of physical content on the front surfaces of the thin body can be represented in the form (7), with the notations introduced there. Based on (7), the boundary conditions of physical content on the front surfaces can be written in moments. Applying the moment operator of (m, n)-th order to the boundary conditions, we obtain the required boundary conditions of physical content in moments on the end-faces of the thin body, relations (9)-(11), together with the analogous relations obtained by the replacement P → μ. The boundary conditions of thermal content can be treated in the same way as (9)-(11).
Now it is not difficult to write down some of the required representations of the system of motion equations in moments with respect to the system of Legendre polynomials. Indeed, from (6) we obtain the representation (12) and, further, the representation (13) with the notations introduced there. The relations (12) and (13) are different representations of the system of motion equations in moments with respect to the system of Legendre polynomials for the thin body with two small sizes. Using (4), other representations of the system of equations in moments can be obtained [3,10], but we will not dwell on this.
Note that each equation of (12) (which does not take the boundary conditions into account), with fixed values of m and n, contains an infinite number of terms, whereas each equation of (13) (which uses the boundary conditions) contains a finite number of terms. Moreover, each equation contains moments of the contravariant components P^3 and μ^3; replacing P^3 and μ^3 by their r-th order approximations P^3(r) and μ^3(r) in (12) and (13), respectively, we obtain the representations of the system of motion equations in moments of the r-th approximation. In addition, let us fix some non-negative integers M and N and choose the first (M + 1)(N + 1) equations (m = 0, ..., M; n = 0, ..., N) from the system of equations. In the equations containing an infinite number of terms, we neglect the moments of unknown quantities whose order is greater than M (in the first index) or N (in the second index). Thus, we obtain representations of the system of motion equations in moments of the (M, N)-th approximation. If these representations are obtained from the corresponding representations of the r-th approximation, they are called representations of the system of motion equations in moments of the (r, M, N)-th approximation. The same applies to the representations of the heat influx equation and the constitutive relations.
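As a purely numerical illustration of the moment operator and of the (M, N)-th truncation (this sketch is not part of the analytical scheme of [3]; the sample field and the quadrature order are arbitrary choices), the finite family of Legendre moments of a given field can be computed as follows:

import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def legendre_moments(f, M, N, quad_order=32):
    """Return the (M+1) x (N+1) array of (m, n)-th Legendre moments of
    f(x1, x2) over the square [-1, 1]^2, using Gauss-Legendre quadrature."""
    x, w = leggauss(quad_order)              # nodes and weights on [-1, 1]
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    W = np.outer(w, w)                       # tensor-product weights
    F = f(X1, X2)                            # sample the field on the grid
    moments = np.empty((M + 1, N + 1))
    for m in range(M + 1):
        Pm = Legendre.basis(m)(x)            # P_m evaluated at the nodes
        for n in range(N + 1):
            Pn = Legendre.basis(n)(x)
            norm = (2 * m + 1) * (2 * n + 1) / 4.0
            moments[m, n] = norm * np.sum(W * F * np.outer(Pm, Pn))
    return moments

# Example: a smooth test field; its higher-order moments decay quickly,
# which is what makes a truncated (M, N)-th approximation reasonable.
test_field = lambda x1, x2: np.exp(-x1**2) * (1 + 0.3 * x2)
print(legendre_moments(test_field, M=3, N=3))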
Representations of the constitutive relations of the micropolar theory of thermoelasticity in moments for a thin body with two small sizes
The constitutive relations (CR) for non-isothermal processes [1,3,11,12] have the form (15), where A, B, C, D are fourth-rank tensors called tensors of elastic moduli, b = A ⊗² a + B ⊗² d and the analogous tensor built from C and D are the tensors of thermomechanical properties, u is the displacement vector, φ is the internal rotation vector, ϑ = T − T_0 is the temperature drop, and a and d are the tensors of thermal expansion. Particular cases of the CR can be found in [2,13].
Therefore, the relations (15) can be written in the appropriate form and then presented in moments. From (15) it is clear that, to represent them in moments, it is sufficient to write down the gradient of a vector in moments. If the body is homogeneous with respect to x^I, I = 1, 2, then applying the (m, n)-th order moment operator to (15) we obtain the relations (16). Note that (16) are valid for any system of polynomials (Legendre, Chebyshev) on the given segment. Solving, for example, some boundary-value problem in the (0, M, N) approximation, we obtain approximate expressions (17) for the fields of the displacement vector and the stress tensor, respectively [3]. Consequently, the expressions (17) satisfy the boundary conditions on the end-faces. For example, if the displacement field is given on the end-faces, then the constructed approximate solution (the first relation of (17)) is consistent with the kinematic boundary conditions. Obviously, the question arises as to how accurately the relations (17) satisfy the boundary conditions of physical content on the front surfaces. Generally, these relations will not be fulfilled with the required accuracy. Thus, to satisfy the boundary conditions of physical content on the front surfaces, we add a correcting term U_0(x^1, x^2, x^3, t) to the approximate expression u^(M,N)(x^1, x^2, x^3, t) for the displacement vector. These problems are considered in detail in [3].
Note that, similarly to the above, we can consider various representations of the heat influx equation, the Fourier law of thermal conductivity, and the boundary conditions of thermal content in moments. We also note that different methods for reducing the infinite system to a finite one can be found in [3,11] for thin bodies with one small size and in [3,10,14] for thin bodies with two small sizes. It is not difficult to obtain the initial conditions of kinematic and thermal content in moments and to formulate the statements of problems in moments [3,10]. In order to shorten the text, we will not dwell on these issues.
We also note that, applying the canonical representations of material tensors and tensor-block matrices, the above relations can be represented in terms of eigenvalues and eigenvectors (eigentensor columns) of the material tensors (tensor-block matrices). The application of eigenvalue problems for tensors and tensor-block matrices in mechanics is a promising direction which deserves the attention of researchers; some issues related to it are presented in [7,9,15-17]. Finally, we note that from a practical point of view the works [18-21] merit attention; in the last two of these papers the authors studied the dynamics and design of a power unit with a hydraulic piston drive.
Conclusion. Some problems of modeling the deformation of thin bodies with two small sizes have been considered. The equations of motion, the constitutive relations and the boundary conditions have been obtained. The definition of the moment of (m, n)-th order of a quantity with respect to the system of Legendre polynomials has been given. The boundary conditions and various representations of the system of equations of motion and of the constitutive relations in moments have been obtained.
|
2019-05-26T14:19:31.337Z
|
2019-04-01T00:00:00.000
|
{
"year": 2019,
"sha1": "7b522518973b92c063c616e28782f11a1d3de448",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1205/1/012040",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "5044fe06d61cb3a5b25ad0e88c8f2ecc1dd14038",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
251741077
|
pes2o/s2orc
|
v3-fos-license
|
Computational valency lexica and Homeric formularity
Distributional semantics, the quantitative study of meaning variation and change through corpus collocations, is currently one of the most productive research areas in computational linguistics. The wider availability of big data and of reproducible algorithms for analysis has boosted its application to living languages in recent years. But can we use distributional semantics to study a language with such a limited corpus as ancient Greek? And can this approach tell us something about such vexed questions in classical studies as the language and composition of the Homeric poems? Our paper will compare the semantic flexibility of formulae involving transitive verbs in archaic Greek epic to similar verb phrases in a non-formulaic corpus, in order to detect unique patterns of variation in formulae. To address this, we present AGVaLex, a computational valency lexicon for ancient Greek automatically extracted from the Ancient Greek Dependency Treebank. The lexicon contains quantitative corpus-driven morphological, syntactic and lexical information about verbs and their arguments, such as objects, subjects, and prepositional phrases, and has a wide range of applications for the study of the language of ancient Greek authors.
Verbal valency and valency lexicons
Valency, intended as the number and type of arguments occurring with predicates, is a central property of verbs (Tesnière 1969: 238 ff.). The English ditransitive verb give, for example, requires three arguments: one expressing the person giving, one expressing the object given, and one expressing the recipient. If we know that an English sentence contains an active form of the verb give, we can expect to find its three arguments realized in the sentence, as we can see in (1) or (2). The transitive verb print, on the other hand, only requires two arguments (the person or object printing, and the object being printed), so we can expect to see two arguments if a sentence contains the verb print in its active form (3). On the other hand, an adjunct like yesterday can occur with most verbs and its presence cannot be expected based only on the presence of a verb like give or print (1 and 2).
He gave the receipts to the customers. (2)
They printed the paper yesterday. (3)

The concept of valency is central to Dependency Grammar and is known by different terms in different linguistic subfields, including "argument structure" and "subcategorization". In this article, following McGillivray (2014: 31 ff.), we adopt an operational definition of valency, based on corpus and distributional methods, and take a theory-agnostic view on this topic. In this paper we describe AGVaLex, a corpus-driven valency lexicon for ancient Greek, and illustrate its value for historical linguistics scholarship through a case study on Homeric formulae. The lexicon was created automatically from the dependency syntax annotation of the Ancient Greek Dependency Treebank 2.0 (https://github.com/PerseusDL/treebank_data, Celano 2019) by adapting existing database queries written for Latin and described in McGillivray (2014: 31-60) and McGillivray & Vatri (2015).
Our focus on ancient languages, and particularly on ancient Greek, a "large-corpus language" (Mayrhofer, 1980; Untermann, 1983), offers us the opportunity to test the effectiveness of corpus methods on a language for which no native speakers are available. This has received an increasing level of attention in recent years, in conjunction with the development and analysis of large-scale annotated corpora for corpus languages (see for example McGillivray 2014 for an overview of Latin).
The lexicon has a number of advantages and considerable reuse potential. Thanks to its automatic creation procedure, the lexicon can be regenerated if new annotated data become available or if the annotation is corrected, which enhances its potential applications in future research. Moreover, unlike traditional dictionaries and handmade valency lexicons, computational valency lexicons like the one we present here provide a quantitative and systematic account of the valency properties of verbs reflecting the corpus they are extracted from. They can tell us whether a verb, for example, is found with a particular argument pattern, and how many times this occurs in the corpus. They can also give us information about the distribution of these patterns across authors, genres and works, and about whether there is change over time. The lexicon provides information about the number and type of arguments of all verbs occurring in the treebank. Its entries are equipped with morpho-syntactic information, namely the case of nouns, the mood of verbs, the gender and number of nouns and adjectives, and the voice of verbs. This information is highly valuable for investigating a range of linguistic questions, from the analysis of word order patterns to the study of individual verbs' constructions. The lexicon's valency patterns also display the lemmas of the arguments, which allows for lexical-semantic studies, for example investigating the semantic fields of the subjects and objects of verbs and how they vary by author or work.
Applications to ancient Greek
Valency lexicons are an extremely useful tool in linguistic research on verbal complementation patterns. For ancient Greek, an important recent contribution on the topic is Keersmaekers (2020), who analyses language change in a corpus of Greek papyri. This study, however, also showcases how much corpus pre-processing is necessary for this kind of scholarship; AGVaLex will offer an important tool for researchers wishing to conduct research on similar topics without collating their own corpus.
To illustrate this application, we propose a case study taken from Rodda (2021), on linguistic variation in archaic Greek epic poetry. The language of early Greek epic relies extensively on formulae, repeated constructions with limited syntactic and semantic flexibility; generations of researchers have investigated the precise extent of this flexibility and how it relates to issues of oral performance and language change (Rodda 2021 provides an extended bibliography; see particularly Hainsworth 1968 for an important example of this approach, and Friedrich 2019 for some criticism). Our study will show how the application of a well-developed pre-existing resource such as AGVaLex allows for new approaches to this crucial question in Homeric studies.
Previous work
Dictionaries typically display some information about verbal valency in their lexical entries, usually in the form of the grammatical case of the arguments and the prepositions introducing the arguments themselves. For example, in the dictionary entry for the verb αἱρέω 'to take' in the Brill Dictionary of Ancient Greek (Montanari 2015), the space given to each construction is not proportional to the frequency of the constructions, and in many cases more uncommon constructions are given disproportionately more space in the entry. This is confirmed by the introduction to the Thesaurus Linguae Latinae (Bayerische Akademie der Wissenschaften, 2002), 1 for instance, and is common practice in other lexicographic resources.
Over time, dedicated valency lexicons have been created for specific languages. For example, Happ (1976) presents the only hand-made valency lexicon for Latin, derived from a manual analysis of 800 verbal occurrences in Cicero's Orationes. Such resources offer high-quality information derived from a detailed manual analysis and are therefore very reliable. However, they suffer from the lack of completeness which we observed earlier, and which affects other handmade resources like traditional dictionaries. The increased availability of large syntactically annotated corpora has made it possible to develop methods for extracting valency information automatically, thus supporting the creation of corpus-driven computational resources aiming at a systematic account of the valency behaviour of the verbs in the corpora. Typically, such resources are drawn from corpora provided with morpho-syntactic annotation, which usually follows the Dependency Grammar paradigm. As the annotation marks predicates and their arguments, it is possible to identify and extract them automatically in the form of a table or database (see, for example, Passarotti et al. 2016). Computational valency lexicons have several advantages over their manual counterparts. Because they directly rely on corpus data, they can easily show quantitative information such as the frequencies of each pattern for each verb, and link those back to the original corpus occurrences. They can also be easily expanded as the corpora they are based on grow, because they have been created programmatically.
Only one corpus-based valency lexicon is currently available for ancient Greek, as far as we are aware: HoDeL 2 (The Homeric Dependency Lexicon 2; Zanchi et al. 2018 and Zanchi 2021), a project run at the University of Pavia. 2 HoDeL was automatically extracted from the syntactically annotated portion of the AGDT containing the Homeric poems. As explained in its guidelines, 3 for every verb in the Homeric poems it includes its arguments, i.e. those dependents that are tagged as subjects, objects, object complements and predicate nominals. In the guidelines the authors point out that the lexicon does not contain referential null arguments, and that a number of consistency issues affect the annotation of the treebank. Therefore, HoDeL has been manually edited to correct some annotation errors in the corpus, particularly in lemmatisation.
The online tool Myria (https://relicta.org/myria/), developed and maintained by Toon Van Hal and Alek Keersmaekers, is described as "a treebank-based vocabulary tool". It is part of the Pedalion project (Keersmaekers et al. 2019), a project based at KU Leuven which aims to improve the automatic syntactic analysis of ancient Greek. Myria can display information about ca. 6000 Greek words occurring at least 50 times in a corpus of literary texts (8th century BC to 1st century AD) from the Perseus and First One Thousand Years of Greek projects. For each of these words, "collostructures" are displayed: these are a pre-theoretical mix of collocations (e.g., the verb αἱρέω "to take" can occur "combined with an adverb", or "coordinating with another verb") and constructions (the same verb can occur "combined with an accusative noun", or "combined with a preposition in the genitive", specifically ἀντί "over, against, in exchange for"). The information in Myria is self-avowedly incomplete and being continuously updated.
The lexicon
AGVaLex was created from the Ancient Greek Dependency Treebank (AGDT 2.0; Celano 2019). AGDT 2.0 contains 557,922 tokens from the works listed in Table 1. The XML files of the so-called analytical layer of annotation of the treebank contain dependency-based syntactic trees. The treebank files were first converted into a tab-separated format via a Perl script and then imported into a MySQL database; a series of MySQL query scripts then produced several database tables making up the lexicon. The scripts were adapted from the work done to create the valency lexicon of the Latin Dependency Treebank described in McGillivray (2014: 31-60). Specifically, we extracted all dependents of verbal forms labelled as 'SBJ' (subjects), 'OCOMP' (object complements), 'PNOM' (predicate nominals) and 'OBJ', which includes all other arguments, i.e. nouns and pronouns in the accusative, dative and genitive cases, prepositional phrases, infinitive verbs, and subordinate clauses that can function as verbal objects, such as accusative + infinitive constructions. The lexicon contains both dependents which are direct children of verbal forms and dependents which are indirect children via preposition (AUXP), conjunction (AUXC), coordination (COORD) and apposition (APOS) nodes. It is important to note that, because it was extracted from an annotated corpus, the lexicon does not include referential null arguments, i.e. those arguments that are required by a verb's valency structure but are not lexically realised. 4

Figure 1. A selection of six entries from AGVaLex.
Figure 1 displays six entries from AGVaLex. Each entry (or database record) corresponds to a verbal token occurrence in the AGDT, and each column corresponds to one of eight different attributes of the token, which we can categorize into three main groups:
1. Metadata: the columns "author", "title", "subdoc", and "sentence_id" contain, respectively, the name of the author, the title of the work, the passage where the verb token occurs and the identifier of the sentence in the treebank.
2. Verb token attributes: the columns "verb" and "voice" display the verb's lemma and voice, respectively.
3. Argument patterns: the columns "frame" and "frame_fillers" contain the valency information, as explained in more detail below.
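As an illustration of this annotation scheme, the following minimal AGDT-style XML fragment is consistent with the example sentence discussed below (the word forms ἀνθίσταται, δέος and σοὶ, their SBJ/OBJ relations and the verb identifier "5" follow that discussion, while the remaining identifiers, lemmas and postags are illustrative assumptions, not the treebank's actual values); the short Python sketch shows the kind of verb-argument extraction that underlies AGVaLex:

import xml.etree.ElementTree as ET

# Illustrative AGDT-style sentence: the verb (id 5) governs a subject (SBJ)
# and an object (OBJ); ids other than 5, lemmas and postags are assumed here.
SENTENCE_XML = """
<sentence id="1">
  <word id="3" form="σοὶ" lemma="σύ" postag="p-s---md-" relation="OBJ" head="5"/>
  <word id="4" form="δέος" lemma="δέος" postag="n-s---nn-" relation="SBJ" head="5"/>
  <word id="5" form="ἀνθίσταται" lemma="ἀνθίστημι" postag="v3spie---" relation="PRED" head="0"/>
</sentence>
"""

ARG_RELATIONS = {"SBJ", "OBJ", "OCOMP", "PNOM"}

def extract_arguments(sentence_xml):
    """Collect (verb_form, relation, argument_form) triples for dependents
    of verbal tokens, in the spirit of the AGVaLex extraction."""
    root = ET.fromstring(sentence_xml)
    words = {w.get("id"): w for w in root.findall("word")}
    verbs = {i: w for i, w in words.items() if w.get("postag", "").startswith("v")}
    triples = []
    for w in words.values():
        rel = w.get("relation", "")
        head = w.get("head")
        # strip the _CO / _AP suffixes used for coordinated and apposed arguments
        if head in verbs and rel.split("_")[0] in ARG_RELATIONS:
            triples.append((verbs[head].get("form"), rel, w.get("form")))
    return triples

print(extract_arguments(SENTENCE_XML))
# [('ἀνθίσταται', 'OBJ', 'σοὶ'), ('ἀνθίσταται', 'SBJ', 'δέος')]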
According to the treebank annotation guidelines (Celano 2014), subjects are tagged as 'SBJ', other verb arguments are tagged as 'OBJ', predicate nominals are tagged as 'PNOM' and object complements as 'OCOMP'. All these elements can depend on a coordination node (tagged as 'COORD'), in which case they take the suffix '_CO', or on an apposition node (tagged as 'APOS'), in which case they take the suffix '_AP'. For a full explanation of the annotation, see Celano (2014). In the example sentence, the verb form ἀνθίσταται governs the subject δέος (tagged as 'SBJ') and the direct object σοὶ (tagged as 'OBJ'), as indicated by the "head" attribute, which links each of these two nodes to the verbal form (tagged with the identifier "5").
The AGDT has 548,782 word tokens, of which 95,841 have been tagged with the part of speech 'verb' or 'participle'; these correspond to 36,964 verb types. AGVaLex was extracted from this treebank and contains 71,887 entries, one for each of the verb tokens occurring with at least one argument in this corpus. Table 1 displays some basic statistics of the lexicon.
Entity                                  Count
Verb tokens (lexical entries)           72,067
Unique verb lemmas                      5,077
Unique frames                           7,100
Unique frames with lexical fillers      43,631

Table 1. Basic statistics of AGVaLex.
The treebank contains texts of 15 authors and 31 works. Table 2 shows the number of lexical entries for each author. Table 3 shows the 20 most frequent valency frames, with their frequency in the lexicon. The most frequent frame is the pattern "active_OBJ[accusative]", which corresponds to constructions with accusative direct objects. Note that subjects in ancient Greek are not always expressed explicitly, so this frame also includes cases in which the predicate is, for example, in the first person singular and the subject is not expressed lexically.
Comparison with traditional lexicographical resources
A practical way to show the usefulness of the lexicon is to compare it with a commonly used scientific dictionary. We chose to compare it with the relatively recent Brill Dictionary of Ancient Greek (Montanari 2015, from here on GE, for Greek-English, the sigla provided by the editor), rather than the older LSJ (Liddell et al. 1996), as the Brill Dictionary highlights valency information more clearly, especially for high-frequency verbs. So, for instance, the various constructions for τίθημι 'to set' are given in italics at the start of the dictionary entry, together with their most common translations in bold, before examples are provided in the body of the entry. Not all entries for verbs have an initial summary of their constructions, but even those which do not still highlight information about syntactic dependencies in italics throughout the body.
In order to compare the constructions listed in the dictionary with those in the lexicon, we chose a small set of 5 transitive verbs, from the larger dataset that will be used in Section 4. These are very high frequency verbs with a reasonable variety of constructions: αἱρέω 'to take,' δίδωμι 'to give,' φέρω 'to bear,' βάλλω 'to throw,' and τίθημι 'to set;' given how common they are, they all have a summary of constructions in GE.
For each of these verbs, we noted down the dependency information that is given in GE, without taking note of diathesis (active vs. middle vs. passive), as the dictionary does not always break down meanings by diathesis unless a specific passive or middle meaning is involved. We then searched AGVaLex for all dependencies that are recorded for each verb, and noted which ones do not appear in the dictionary, as well as where they are attested. We made this choice because constructions that occur in a range of authors are arguably more likely to be recorded in a dictionary than constructions that are unique to one author, even in a partial sample like the one that forms the basis of the AGVaLex.
The results of this comparison are summarised in Table 4 below. The final column of this table contains the number of "collostructions" reported for the same verb in Myria (https://relicta.org/myria/, on which see Section 2 above). There is only limited overlap between the way Myria categorises collostructions and the way AGVaLex does (again, see Section 2); therefore, the numbers are reported only for reference. As the table shows, while AGVaLex lists significantly more constructions than the dictionary, most of them are very rare and/or unique to one author, which makes them less relevant to a lexicographical resource that is meant to represent 'standard' Greek, with only limited reference to special usage. In addition, a large proportion of the constructions that are unique to one author are unique to Homer, a phenomenon that sometimes has to do with Homeric syntax preserving traces of an archaic stage of development (see e.g. Hackstein 2010); for instance, as many as 48 of the 67 constructions listed for βάλλω involve prepositions that would at a later stage of the development of the Greek language be incorporated into the verb itself, creating individual compound verbs that are listed as separate dictionary entries. The issue of preverbs and their lexicalisation has been explored through computational valency lexica for Latin (McGillivray 2014, ch. 6), but no discussion of the issue in ancient Greek using similar methods exists.
That said, even for such a small sample as the one we tested, AGVaLex does sometimes bring useful additional information. For instance, we can hypothesise that a small cluster of constructions for δίδωμι 'to give' plus dative and infinitive, which only appears in Herodotus (twice), Hesiod (once), Homer (27 times) and Athenaeus (twice), is a feature of the Ionic dialect shared by all these authors. 6 Sometimes the dictionary misses the fact that a verb can occur with a whole extra case: αἱρέω 'to take' occurs with an object in the genitive 15 times in Homer, 7 a construction that is not reported in GE. This sort of information is useful for traditional textual criticism, which often requires answering questions such as 'can this verb occur with x case?' AGVaLex offers a convenient database in which to search for answers to these questions, without having to manually check thousands of occurrences of one verb in a corpus of text.
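For instance, assuming the lexicon is exported as a flat table with the columns shown in Figure 1 (the file name "agvalex.csv" and the CSV format are assumptions made for this sketch, not the project's actual distribution format), such a question can be answered with a few lines of Python:

import pandas as pd

# Hypothetical flat export of AGVaLex; the column names follow Figure 1.
lex = pd.read_csv("agvalex.csv")

# 'Can haireo take a genitive object?' -- list the matching entries per author.
hits = lex[(lex["verb"] == "αἱρέω") & lex["frame"].str.contains("genitive", na=False)]
print(hits.groupby("author").size())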
The results above, of course, are not meant to show that the valency lexicon is superior to a published dictionary. Each lexical resource has its own purpose, but the valency lexicon does prove its worth in a test of its completeness against a common lexicographical resource, and it shows its potential in relation to common philological aims such as textual criticism, as detailed above.
In addition to the features described above, AGVaLex allows the user to retrieve summary data by construction (e.g., searching for all verbs that have the preposition ἐν 'in' plus the dative case), a type of search that cannot be performed even with an online dictionary such as GE, which has a limited user interface. On the other hand, AGVaLex is not suited to a language learner wanting to know about common constructions in an easily understandable way, and does not provide translations. The valency lexicon could also be used profitably to look for examples of specific structures for the purpose of writing a dictionary. As it also records the lexical information for nouns that enter in specific constructions with a verb, it is extremely useful for the type of semantic studies that will be described in Section 4.
A note on the Homeric Dependency Lexicon, the only other available verbal valency database for ancient Greek, as described in Section 2. Since AGVaLex covers a much broader corpus than HoDeL, it makes little sense to directly compare the number of constructions retrievable by the two tools. We can, however, look at the number of Transitive Verb + Object phrases retrieved directly from Homer and Hesiod in Section 4, Table 5 below, and compare that with the data in HoDeL. For instance, for the verb αἱρέω 'to take', a script written for this purpose and running on the same AGDT treebank that HoDeL is based on retrieved 56 types (not tokens) of direct objects in the accusative; a HoDeL search of dependencies of the same verb retrieves 22 argument types tagged with the OBJ dependency and the ACC case. The discrepancy can be partially explained by the fact that the sample in Section 4 also includes the Hesiodic poems, but it still seems that the ad hoc script captured more occurrences than HoDeL; the outputs of the script have also been screened by hand.
Aims and context of the study
The case study introduced here aims to assess the scope of semantic variation in a sample of epic formulae, and then to compare the results with a baseline corpus (for the importance of this step see Wulff, 2008). We will use Distributional Semantics to quantify semantic variation. The target of analysis is a sample of formulae made of a transitive verb and its direct object in the accusative (from here on, TrV+Obj formulae), selected exclusively on the basis of frequency. These are phrases of the type μῆνιν ἄειδε (mênin[acc.sing] aeide[pres.imp.2s]), "sing the wrath", the first two words of the Iliad. We will look at the semantic range of the objects of these phrases to answer questions about the semantic behaviour of these objects depending on formulaic status: do formulaic verb objects display higher or lower restrictions to their semantic range compared to non-formulaic material? Specifically, does the existence of a formulaic phrase with a specific object promote the creation of quasi-synonymous or semantically related formulae (which would increase the average similarity of the objects) or does it have a pre-emptive effect, analogous to what we observe for idioms, where the existence of an idiom with a certain meaning actually discourages the creation of synonymous idiomatic phrases (Suttle & Goldberg, 2011)?
The study of formulaic variation has been a major topic in Homeric studies at least since the 1960s (Hoekstra, 1965, 1969; Hainsworth, 1968; Postlethwaite, 1979; Friedrich, 2019). Formulae allow for a limited amount of linguistic variation, a trait which they share with idioms and other multi-word expressions in everyday language (Kiparsky, 1976). Most recently, the behaviour of formulae has been described under the linguistic framework of Construction Grammar (Goldberg, 1995): formulae are indissoluble pairs of form and function, and the restrictions to their shape are part of this system (Bozzone, 2014; Antović & Cánovas, 2016).
We use Distributional Semantics to model the range of meaning of the formulae and non-formulaic material in this case study. As a corpus-based approach, Distributional Semantics is also particularly suited to the study of dead languages such as ancient Greek, where no speaker input can be sought. In Distributional Semantics, the meaning of a word is defined as a function of its collocates in a corpus: words that share a linguistic context are also related in meaning (Harris, 1954; Fabre & Lenci, 2015). Shared linguistic contexts are modelled mathematically via word vectors which encode the frequency of co-occurrences between each word in the corpus and each of the others (with the possible exception of semantically empty "stop-words"). These vectors form a distributional space model of meaning (DSM); the closer the vectors associated with two words, the more similar the words' meanings.
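As a toy illustration of this idea (the miniature corpus, the window size of two words and the raw-count vectors are arbitrary simplifications; real DSMs are built from much larger corpora and usually apply weighting or dimensionality reduction), co-occurrence vectors can be built and compared as follows:

from collections import Counter
import numpy as np

# A miniature corpus of tokenised "sentences".
corpus = [["the", "wrath", "sing", "goddess"],
          ["sing", "the", "muse", "wrath"],
          ["the", "ships", "sailed", "home"]]
vocab = sorted({w for sent in corpus for w in sent})

# Count co-occurrences within a symmetric window of two words.
counts = {w: Counter() for w in vocab}
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if j != i:
                counts[w][sent[j]] += 1

# One vector per word: its co-occurrence counts with every vocabulary item.
vectors = {w: np.array([counts[w][v] for v in vocab], dtype=float) for w in vocab}

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(vectors["wrath"], vectors["muse"]))   # higher: shared contexts
print(cosine_similarity(vectors["wrath"], vectors["ships"]))  # lower: different contexts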
Data and methods
The data on TrV+Obj formulae were extracted by running a Python script 8 on texts from the Ancient Greek and Latin Dependency Treebank (AGLDT: Bamman & Crane, 2011), a syntactically parsed corpus that is part of the Perseus Project. The TrV+Obj pairs were extracted from the four main archaic Greek epic texts: Homer's Iliad and Odyssey and Hesiod's Theogony and Works and Days (from here on, "the epic corpus"). Two formular editions (Pavese & Venti, 2000; Pavese & Boschetti, 2003), which are designed to mark material in the target texts as formulaic or non-formulaic based on its frequency in the texts, were used to establish which of these automatically extracted phrases are properly formulaic, i.e. repeated in the traditional language. Out of the 6764 formulaic TrV+Obj pairs that were thus extracted, only the objects of those verbs that occur at least 50 times in the epic corpus were selected, for a total of 26 verbs and 2703 tokens (ranging from 335 to 50 tokens per verb).
The non-formulaic data for comparison was extracted from AGVaLex. All texts from the lexicon's database were included apart from those which overlap with the epic corpus, i.e. the Iliad and the Odyssey. We looked up each of the 26 target verbs in the lexicon, and manually selected the accusative objects from the existing data.
The analysis below is not on tokens, but on types (for the reasons see Barðdal, 2008), i.e. on unique object lexemes of each transitive verb. We therefore discarded any verbs that had less than 10 object types in either the epic corpus or the comparison corpus, which reduces the sample to 15 verbs. The final list of verbs, with their type frequency, is provided in Table 5; see Section 3.1 for a comparison between these numbers and the numbers that can be extracted from HoDeL. For each verb, therefore, we have a list of object types in the epic corpus and one in the baseline corpus, for a total of 30 lists. To assess their semantic similarity, we measured the cosine distance between the objects in each list and their respective centroids in the semantic space (see again Rodda, Probert & McGillivray 2019 for another example of this approach). This gives us 30 distributions of distances, which can be compared to each other or assessed for the influence of other factors.
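Assuming that a vector has already been obtained for each object lexeme (the source of the vectors and the random stand-in data below are assumptions of this sketch, not the study's actual pipeline), the per-verb measure and the subsequent comparison of the two distance distributions can be expressed as follows:

import numpy as np
from scipy.spatial.distance import cosine
from scipy.stats import ks_2samp

def distances_to_centroid(vectors):
    """Cosine distance of each object vector to the centroid of its group."""
    X = np.vstack(vectors)
    centroid = X.mean(axis=0)
    return np.array([cosine(v, centroid) for v in X])

# For one verb: object-type vectors in the epic corpus vs. the baseline corpus.
rng = np.random.default_rng(0)                  # stand-in for real word vectors
epic_vecs = rng.normal(size=(40, 100))
base_vecs = rng.normal(size=(60, 100))

d_epic = distances_to_centroid(epic_vecs)
d_base = distances_to_centroid(base_vecs)

# Two-sample Kolmogorov-Smirnov test on the two distance distributions
# (the study itself ran the equivalent test in R).
stat, p = ks_2samp(d_epic, d_base)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")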
Results
To assess the relationship between formulaic and non-formulaic verb phrases, we compared the semantic range of objects in the epic corpus vs. the baseline corpus for each verb. The results of this comparison are detailed in the accompanying boxplot figure. The distributions of distances for each pair were compared using the Kolmogorov-Smirnov test in R (R Core Team, 2017). Two significance thresholds were set: p < 0.05 for high significance (**) and p < 0.1 for low significance (*). The results are summarised in Table 6. There are relatively few significant differences here, even with a higher than usual significance threshold. The four verbs that show a significant difference are ἔχω ekhō "have" (our most frequent verb), χέω kheō "pour", τίκτω tiktō "give birth" and ἵημι hiēmi "send out" (three verbs with much lower type and token frequency). For the first three, the median similarity is higher in the baseline than in the formulaic corpus; the variance is always higher in the formulaic corpus.
In other words, there is only a very limited effect of formularity on semantic range, but insofar as an effect can be observed, it appears to go in the direction of constructional pre-emption: objects of formulaic phrases tend to show lower semantic similarity. This is somewhat surprising, as discussions of formulaic systems (from Parry 1930 and 1932 onwards) have stressed the usefulness to the oral poet of having a range of expressions that are similar in meaning but have different metrical shapes, a result which could be easily obtained by varying lexical items and using synonyms or near-synonyms. It is possible that the definition of formularity adopted in this study, which was based on simple repetition and did not take metre into account, does not capture subtleties in the actual relation between verbs and objects which could help explain our results. It is also possible that a different approach to the data analysis would reveal a different pattern: for instance, if we set out to look for individual clusters of closely related words among the objects of a formulaic verb, rather than measure their semantic proximity to a centroid in the semantic space. All of these avenues remain open for further analysis. What we can say for certain is that there appears to be more to be explored when it comes to the semantics of verbal constructions in early Greek epic.
Discussion and conclusions
We have presented AGVaLex and illustrated, via the case study in Section 4, how it can be used to explore crucial issues in ancient Greek linguistics, including issues that are primarily of interest to literary scholars, who are particularly likely to appreciate a pre-compiled dataset that can be applied in their work. The limited space devoted to the application of AGVaLex in Section 4 should not obscure the fact that the existence of the database in practice enabled this research in the first place: gathering the data on TrV+Obj constructions in Homer and Hesiod required weeks of work, 10 which would have needed to be scaled up to the entirety of the baseline corpus, a practically insurmountable task. While the results of the case study should be seen as preliminary when it comes to furthering our understanding of semantic variation in formulae, they show the promising value of the Distributional Semantics approach and of the use of a comparison database to assess how formulaic behaviour differs from non-formulaic usage.
A resource such as AGVaLex, if maintained and kept up to date, can enable research that would otherwise require more time and computational power than the average literature scholar can be expected to apply. As the availability of syntactically annotated corpora expands, these resources can be integrated into AGVaLex, ensuring the widest distribution of the data.
|
2022-08-24T01:16:18.072Z
|
2022-08-23T00:00:00.000
|
{
"year": 2022,
"sha1": "335f8b932996c1602ad8bc7a3ef1d8eed028b51c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "335f8b932996c1602ad8bc7a3ef1d8eed028b51c",
"s2fieldsofstudy": [
"Linguistics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|